May 14 09:25:55.940164 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 03:42:56 -00 2025 May 14 09:25:55.940209 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3 May 14 09:25:55.940228 kernel: BIOS-provided physical RAM map: May 14 09:25:55.940246 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 14 09:25:55.940259 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 14 09:25:55.940272 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 14 09:25:55.940289 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 14 09:25:55.940305 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 14 09:25:55.940319 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 14 09:25:55.940333 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 14 09:25:55.940348 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 14 09:25:55.940361 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 14 09:25:55.940379 kernel: NX (Execute Disable) protection: active May 14 09:25:55.940394 kernel: APIC: Static calls initialized May 14 09:25:55.940411 kernel: SMBIOS 3.0.0 present. May 14 09:25:55.940426 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 14 09:25:55.940440 kernel: DMI: Memory slots populated: 1/1 May 14 09:25:55.940457 kernel: Hypervisor detected: KVM May 14 09:25:55.940470 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 14 09:25:55.940484 kernel: kvm-clock: using sched offset of 4813599221 cycles May 14 09:25:55.940499 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 14 09:25:55.940515 kernel: tsc: Detected 1996.249 MHz processor May 14 09:25:55.940530 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 09:25:55.940546 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 09:25:55.940561 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 14 09:25:55.940577 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 14 09:25:55.940596 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 09:25:55.940612 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 14 09:25:55.940626 kernel: ACPI: Early table checksum verification disabled May 14 09:25:55.940642 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 14 09:25:55.940658 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 09:25:55.940675 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 09:25:55.940689 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 09:25:55.940704 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 14 09:25:55.940718 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 
09:25:55.940736 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 09:25:55.940750 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 14 09:25:55.940797 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 14 09:25:55.940816 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 14 09:25:55.940831 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 14 09:25:55.940852 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 14 09:25:55.940867 kernel: No NUMA configuration found May 14 09:25:55.940885 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 14 09:25:55.940903 kernel: NODE_DATA(0) allocated [mem 0x13fff8dc0-0x13fffffff] May 14 09:25:55.940917 kernel: Zone ranges: May 14 09:25:55.940932 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 09:25:55.940946 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 14 09:25:55.940961 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 14 09:25:55.940976 kernel: Device empty May 14 09:25:55.940991 kernel: Movable zone start for each node May 14 09:25:55.941009 kernel: Early memory node ranges May 14 09:25:55.941024 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 14 09:25:55.941039 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 14 09:25:55.941053 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 14 09:25:55.941066 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 14 09:25:55.941081 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 09:25:55.941095 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 14 09:25:55.941110 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 14 09:25:55.941125 kernel: ACPI: PM-Timer IO Port: 0x608 May 14 09:25:55.941143 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 14 09:25:55.941157 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 14 09:25:55.941171 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 14 09:25:55.941185 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 14 09:25:55.941200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 14 09:25:55.941215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 14 09:25:55.941230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 14 09:25:55.941262 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 09:25:55.941276 kernel: CPU topo: Max. logical packages: 2 May 14 09:25:55.941293 kernel: CPU topo: Max. logical dies: 2 May 14 09:25:55.941307 kernel: CPU topo: Max. dies per package: 1 May 14 09:25:55.941321 kernel: CPU topo: Max. threads per core: 1 May 14 09:25:55.941336 kernel: CPU topo: Num. cores per package: 1 May 14 09:25:55.941350 kernel: CPU topo: Num. 
threads per package: 1 May 14 09:25:55.941365 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 14 09:25:55.941379 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 14 09:25:55.941392 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 14 09:25:55.941406 kernel: Booting paravirtualized kernel on KVM May 14 09:25:55.941423 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 09:25:55.941437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 14 09:25:55.941451 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 14 09:25:55.941466 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 14 09:25:55.941480 kernel: pcpu-alloc: [0] 0 1 May 14 09:25:55.941493 kernel: kvm-guest: PV spinlocks disabled, no host support May 14 09:25:55.941510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3 May 14 09:25:55.941523 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 09:25:55.941541 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 09:25:55.941556 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 09:25:55.941571 kernel: Fallback order for Node 0: 0 May 14 09:25:55.941586 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 May 14 09:25:55.941601 kernel: Policy zone: Normal May 14 09:25:55.941615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 09:25:55.941630 kernel: software IO TLB: area num 2. May 14 09:25:55.941645 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 09:25:55.941660 kernel: ftrace: allocating 40065 entries in 157 pages May 14 09:25:55.941677 kernel: ftrace: allocated 157 pages with 5 groups May 14 09:25:55.941691 kernel: Dynamic Preempt: voluntary May 14 09:25:55.941706 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 09:25:55.941722 kernel: rcu: RCU event tracing is enabled. May 14 09:25:55.941738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 09:25:55.941752 kernel: Trampoline variant of Tasks RCU enabled. May 14 09:25:55.945600 kernel: Rude variant of Tasks RCU enabled. May 14 09:25:55.945633 kernel: Tracing variant of Tasks RCU enabled. May 14 09:25:55.945648 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 09:25:55.945661 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 09:25:55.945682 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 09:25:55.945696 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 09:25:55.945711 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 09:25:55.945725 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 14 09:25:55.945740 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 14 09:25:55.945754 kernel: Console: colour VGA+ 80x25 May 14 09:25:55.945804 kernel: printk: legacy console [tty0] enabled May 14 09:25:55.945821 kernel: printk: legacy console [ttyS0] enabled May 14 09:25:55.945836 kernel: ACPI: Core revision 20240827 May 14 09:25:55.945854 kernel: APIC: Switch to symmetric I/O mode setup May 14 09:25:55.945867 kernel: x2apic enabled May 14 09:25:55.945882 kernel: APIC: Switched APIC routing to: physical x2apic May 14 09:25:55.945896 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 09:25:55.945911 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 14 09:25:55.945935 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 14 09:25:55.945952 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 14 09:25:55.945967 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 14 09:25:55.945982 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 09:25:55.945997 kernel: Spectre V2 : Mitigation: Retpolines May 14 09:25:55.946012 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 14 09:25:55.946031 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 14 09:25:55.946047 kernel: Speculative Store Bypass: Vulnerable May 14 09:25:55.946062 kernel: x86/fpu: x87 FPU will use FXSAVE May 14 09:25:55.946076 kernel: Freeing SMP alternatives memory: 32K May 14 09:25:55.946091 kernel: pid_max: default: 32768 minimum: 301 May 14 09:25:55.946107 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 14 09:25:55.946121 kernel: landlock: Up and running. May 14 09:25:55.946133 kernel: SELinux: Initializing. May 14 09:25:55.946148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 09:25:55.946162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 09:25:55.946176 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 14 09:25:55.946189 kernel: Performance Events: AMD PMU driver. May 14 09:25:55.946205 kernel: ... version: 0 May 14 09:25:55.946219 kernel: ... bit width: 48 May 14 09:25:55.946238 kernel: ... generic registers: 4 May 14 09:25:55.946254 kernel: ... value mask: 0000ffffffffffff May 14 09:25:55.946269 kernel: ... max period: 00007fffffffffff May 14 09:25:55.946285 kernel: ... fixed-purpose events: 0 May 14 09:25:55.946299 kernel: ... event mask: 000000000000000f May 14 09:25:55.946314 kernel: signal: max sigframe size: 1440 May 14 09:25:55.946329 kernel: rcu: Hierarchical SRCU implementation. May 14 09:25:55.946343 kernel: rcu: Max phase no-delay instances is 400. May 14 09:25:55.946358 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 14 09:25:55.946373 kernel: smp: Bringing up secondary CPUs ... May 14 09:25:55.946391 kernel: smpboot: x86: Booting SMP configuration: May 14 09:25:55.946406 kernel: .... 
node #0, CPUs: #1 May 14 09:25:55.946419 kernel: smp: Brought up 1 node, 2 CPUs May 14 09:25:55.946433 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 14 09:25:55.946448 kernel: Memory: 3962040K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227284K reserved, 0K cma-reserved) May 14 09:25:55.946463 kernel: devtmpfs: initialized May 14 09:25:55.946478 kernel: x86/mm: Memory block size: 128MB May 14 09:25:55.946494 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 09:25:55.946509 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 09:25:55.946527 kernel: pinctrl core: initialized pinctrl subsystem May 14 09:25:55.946541 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 09:25:55.946554 kernel: audit: initializing netlink subsys (disabled) May 14 09:25:55.946569 kernel: audit: type=2000 audit(1747214752.509:1): state=initialized audit_enabled=0 res=1 May 14 09:25:55.946587 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 09:25:55.946604 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 09:25:55.946619 kernel: cpuidle: using governor menu May 14 09:25:55.946633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 09:25:55.946647 kernel: dca service started, version 1.12.1 May 14 09:25:55.946665 kernel: PCI: Using configuration type 1 for base access May 14 09:25:55.946681 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 14 09:25:55.946698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 09:25:55.946713 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 09:25:55.946728 kernel: ACPI: Added _OSI(Module Device) May 14 09:25:55.946741 kernel: ACPI: Added _OSI(Processor Device) May 14 09:25:55.946756 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 09:25:55.946800 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 09:25:55.946816 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 09:25:55.946835 kernel: ACPI: Interpreter enabled May 14 09:25:55.946849 kernel: ACPI: PM: (supports S0 S3 S5) May 14 09:25:55.946863 kernel: ACPI: Using IOAPIC for interrupt routing May 14 09:25:55.946878 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 09:25:55.946892 kernel: PCI: Using E820 reservations for host bridge windows May 14 09:25:55.946907 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 14 09:25:55.946922 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 09:25:55.947154 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 14 09:25:55.947301 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 14 09:25:55.947444 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 14 09:25:55.947468 kernel: acpiphp: Slot [3] registered May 14 09:25:55.947482 kernel: acpiphp: Slot [4] registered May 14 09:25:55.947498 kernel: acpiphp: Slot [5] registered May 14 09:25:55.947512 kernel: acpiphp: Slot [6] registered May 14 09:25:55.947527 kernel: acpiphp: Slot [7] registered May 14 09:25:55.947541 kernel: acpiphp: Slot [8] registered May 14 09:25:55.947560 kernel: acpiphp: Slot [9] registered May 14 09:25:55.947574 kernel: acpiphp: Slot [10] 
registered May 14 09:25:55.947589 kernel: acpiphp: Slot [11] registered May 14 09:25:55.947604 kernel: acpiphp: Slot [12] registered May 14 09:25:55.947619 kernel: acpiphp: Slot [13] registered May 14 09:25:55.947633 kernel: acpiphp: Slot [14] registered May 14 09:25:55.947647 kernel: acpiphp: Slot [15] registered May 14 09:25:55.947662 kernel: acpiphp: Slot [16] registered May 14 09:25:55.947676 kernel: acpiphp: Slot [17] registered May 14 09:25:55.947695 kernel: acpiphp: Slot [18] registered May 14 09:25:55.947709 kernel: acpiphp: Slot [19] registered May 14 09:25:55.947724 kernel: acpiphp: Slot [20] registered May 14 09:25:55.947737 kernel: acpiphp: Slot [21] registered May 14 09:25:55.947752 kernel: acpiphp: Slot [22] registered May 14 09:25:55.947831 kernel: acpiphp: Slot [23] registered May 14 09:25:55.947847 kernel: acpiphp: Slot [24] registered May 14 09:25:55.947863 kernel: acpiphp: Slot [25] registered May 14 09:25:55.947877 kernel: acpiphp: Slot [26] registered May 14 09:25:55.947893 kernel: acpiphp: Slot [27] registered May 14 09:25:55.947913 kernel: acpiphp: Slot [28] registered May 14 09:25:55.947927 kernel: acpiphp: Slot [29] registered May 14 09:25:55.947941 kernel: acpiphp: Slot [30] registered May 14 09:25:55.947956 kernel: acpiphp: Slot [31] registered May 14 09:25:55.947971 kernel: PCI host bridge to bus 0000:00 May 14 09:25:55.949862 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 09:25:55.949985 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 14 09:25:55.950106 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 09:25:55.950233 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 14 09:25:55.950348 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 14 09:25:55.950461 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 09:25:55.950627 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 14 09:25:55.950822 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 14 09:25:55.950988 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint May 14 09:25:55.951135 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] May 14 09:25:55.951283 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 14 09:25:55.951428 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 14 09:25:55.951561 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 14 09:25:55.951708 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk May 14 09:25:55.952005 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 14 09:25:55.952262 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 14 09:25:55.952535 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 14 09:25:55.952706 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint May 14 09:25:55.952906 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] May 14 09:25:55.953057 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] May 14 09:25:55.953201 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] May 14 09:25:55.953374 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] May 14 09:25:55.953524 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 
0x000c0000-0x000dffff] May 14 09:25:55.953650 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 14 09:25:55.953874 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] May 14 09:25:55.954016 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] May 14 09:25:55.954157 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] May 14 09:25:55.954297 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] May 14 09:25:55.954459 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 14 09:25:55.954598 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] May 14 09:25:55.954709 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] May 14 09:25:55.954880 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] May 14 09:25:55.954999 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint May 14 09:25:55.955098 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] May 14 09:25:55.955197 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] May 14 09:25:55.955304 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 14 09:25:55.955410 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] May 14 09:25:55.955508 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] May 14 09:25:55.955604 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] May 14 09:25:55.955622 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 14 09:25:55.955638 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 14 09:25:55.955653 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 09:25:55.955668 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 14 09:25:55.955682 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 14 09:25:55.955703 kernel: iommu: Default domain type: Translated May 14 09:25:55.955715 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 14 09:25:55.955726 kernel: PCI: Using ACPI for IRQ routing May 14 09:25:55.955736 kernel: PCI: pci_cache_line_size set to 64 bytes May 14 09:25:55.955748 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 14 09:25:55.960787 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 14 09:25:55.960925 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 14 09:25:55.961017 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 14 09:25:55.961105 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 14 09:25:55.961124 kernel: vgaarb: loaded May 14 09:25:55.961134 kernel: clocksource: Switched to clocksource kvm-clock May 14 09:25:55.961144 kernel: VFS: Disk quotas dquot_6.6.0 May 14 09:25:55.961153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 09:25:55.961163 kernel: pnp: PnP ACPI init May 14 09:25:55.961283 kernel: pnp 00:03: [dma 2] May 14 09:25:55.961299 kernel: pnp: PnP ACPI: found 5 devices May 14 09:25:55.961309 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 14 09:25:55.961321 kernel: NET: Registered PF_INET protocol family May 14 09:25:55.961330 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 09:25:55.961339 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 09:25:55.961349 kernel: Table-perturb hash table entries: 65536 
(order: 6, 262144 bytes, linear) May 14 09:25:55.961358 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 09:25:55.961367 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 09:25:55.961377 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 09:25:55.961386 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 09:25:55.961395 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 09:25:55.961406 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 09:25:55.961415 kernel: NET: Registered PF_XDP protocol family May 14 09:25:55.961498 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 14 09:25:55.961575 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 14 09:25:55.961650 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 14 09:25:55.961723 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 14 09:25:55.965680 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 14 09:25:55.965824 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 14 09:25:55.965931 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 14 09:25:55.965946 kernel: PCI: CLS 0 bytes, default 64 May 14 09:25:55.965957 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 14 09:25:55.965967 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 14 09:25:55.965977 kernel: Initialise system trusted keyrings May 14 09:25:55.965987 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 09:25:55.965998 kernel: Key type asymmetric registered May 14 09:25:55.966008 kernel: Asymmetric key parser 'x509' registered May 14 09:25:55.966018 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 09:25:55.966031 kernel: io scheduler mq-deadline registered May 14 09:25:55.966041 kernel: io scheduler kyber registered May 14 09:25:55.966050 kernel: io scheduler bfq registered May 14 09:25:55.966060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 14 09:25:55.966071 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 14 09:25:55.966081 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 14 09:25:55.966091 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 14 09:25:55.966101 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 14 09:25:55.966111 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 09:25:55.966123 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 14 09:25:55.966133 kernel: random: crng init done May 14 09:25:55.966142 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 14 09:25:55.966152 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 14 09:25:55.966162 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 14 09:25:55.966172 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 14 09:25:55.966266 kernel: rtc_cmos 00:04: RTC can wake from S4 May 14 09:25:55.966352 kernel: rtc_cmos 00:04: registered as rtc0 May 14 09:25:55.966440 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T09:25:55 UTC (1747214755) May 14 09:25:55.966524 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 14 09:25:55.966538 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 14 09:25:55.966548 kernel: NET: Registered 
PF_INET6 protocol family May 14 09:25:55.966558 kernel: Segment Routing with IPv6 May 14 09:25:55.966568 kernel: In-situ OAM (IOAM) with IPv6 May 14 09:25:55.966577 kernel: NET: Registered PF_PACKET protocol family May 14 09:25:55.966587 kernel: Key type dns_resolver registered May 14 09:25:55.966597 kernel: IPI shorthand broadcast: enabled May 14 09:25:55.966612 kernel: sched_clock: Marking stable (3623007789, 189159031)->(3850756463, -38589643) May 14 09:25:55.966627 kernel: registered taskstats version 1 May 14 09:25:55.966642 kernel: Loading compiled-in X.509 certificates May 14 09:25:55.966657 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: de56839f264dfa1264ece2be0efda2f53967cc2a' May 14 09:25:55.966670 kernel: Demotion targets for Node 0: null May 14 09:25:55.966685 kernel: Key type .fscrypt registered May 14 09:25:55.966696 kernel: Key type fscrypt-provisioning registered May 14 09:25:55.966706 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 09:25:55.966720 kernel: ima: Allocated hash algorithm: sha1 May 14 09:25:55.966730 kernel: ima: No architecture policies found May 14 09:25:55.966739 kernel: clk: Disabling unused clocks May 14 09:25:55.966749 kernel: Warning: unable to open an initial console. May 14 09:25:55.966759 kernel: Freeing unused kernel image (initmem) memory: 54416K May 14 09:25:55.968917 kernel: Write protecting the kernel read-only data: 24576k May 14 09:25:55.968937 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 09:25:55.968948 kernel: Run /init as init process May 14 09:25:55.968960 kernel: with arguments: May 14 09:25:55.968977 kernel: /init May 14 09:25:55.968988 kernel: with environment: May 14 09:25:55.968999 kernel: HOME=/ May 14 09:25:55.969009 kernel: TERM=linux May 14 09:25:55.969020 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 09:25:55.969033 systemd[1]: Successfully made /usr/ read-only. May 14 09:25:55.969050 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 09:25:55.969077 systemd[1]: Detected virtualization kvm. May 14 09:25:55.969088 systemd[1]: Detected architecture x86-64. May 14 09:25:55.969101 systemd[1]: Running in initrd. May 14 09:25:55.969113 systemd[1]: No hostname configured, using default hostname. May 14 09:25:55.969125 systemd[1]: Hostname set to . May 14 09:25:55.969137 systemd[1]: Initializing machine ID from VM UUID. May 14 09:25:55.969148 systemd[1]: Queued start job for default target initrd.target. May 14 09:25:55.969166 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 09:25:55.969183 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 09:25:55.969202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 09:25:55.969219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 09:25:55.969234 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 09:25:55.969268 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 14 09:25:55.969290 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 09:25:55.969303 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 09:25:55.969315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 09:25:55.969328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 09:25:55.969340 systemd[1]: Reached target paths.target - Path Units. May 14 09:25:55.969353 systemd[1]: Reached target slices.target - Slice Units. May 14 09:25:55.969364 systemd[1]: Reached target swap.target - Swaps. May 14 09:25:55.969379 systemd[1]: Reached target timers.target - Timer Units. May 14 09:25:55.969392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 09:25:55.969407 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 09:25:55.969420 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 09:25:55.969432 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 09:25:55.969444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 09:25:55.969456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 09:25:55.969469 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 09:25:55.969481 systemd[1]: Reached target sockets.target - Socket Units. May 14 09:25:55.969494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 09:25:55.969509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 09:25:55.969521 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 09:25:55.969536 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 09:25:55.969548 systemd[1]: Starting systemd-fsck-usr.service... May 14 09:25:55.969560 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 09:25:55.969572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 09:25:55.969587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 09:25:55.969599 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 09:25:55.969612 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 09:25:55.969624 systemd[1]: Finished systemd-fsck-usr.service. May 14 09:25:55.969666 systemd-journald[212]: Collecting audit messages is disabled. May 14 09:25:55.969698 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 09:25:55.969713 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 09:25:55.969728 systemd-journald[212]: Journal started May 14 09:25:55.969759 systemd-journald[212]: Runtime Journal (/run/log/journal/adf7ebdebda74ee2b3a4704d84d5e1a1) is 8M, max 78.5M, 70.5M free. May 14 09:25:55.970904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 14 09:25:55.948724 systemd-modules-load[214]: Inserted module 'overlay' May 14 09:25:56.023144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 09:25:56.023170 kernel: Bridge firewalling registered May 14 09:25:56.023195 systemd[1]: Started systemd-journald.service - Journal Service. May 14 09:25:55.988004 systemd-modules-load[214]: Inserted module 'br_netfilter' May 14 09:25:56.029132 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 09:25:56.030062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:25:56.038944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 09:25:56.044928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 09:25:56.048546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 09:25:56.052572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 09:25:56.056421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 09:25:56.069207 systemd-tmpfiles[235]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 09:25:56.076451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 09:25:56.078698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 09:25:56.081643 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 09:25:56.084938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 09:25:56.109142 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3 May 14 09:25:56.133444 systemd-resolved[253]: Positive Trust Anchors: May 14 09:25:56.133457 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 09:25:56.133500 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 09:25:56.137145 systemd-resolved[253]: Defaulting to hostname 'linux'. May 14 09:25:56.138117 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 09:25:56.140451 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 09:25:56.226854 kernel: SCSI subsystem initialized May 14 09:25:56.238847 kernel: Loading iSCSI transport class v2.0-870. 
May 14 09:25:56.251855 kernel: iscsi: registered transport (tcp) May 14 09:25:56.276978 kernel: iscsi: registered transport (qla4xxx) May 14 09:25:56.277091 kernel: QLogic iSCSI HBA Driver May 14 09:25:56.302168 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 09:25:56.342401 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 09:25:56.348027 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 09:25:56.447358 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 09:25:56.452004 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 09:25:56.548876 kernel: raid6: sse2x4 gen() 5541 MB/s May 14 09:25:56.567887 kernel: raid6: sse2x2 gen() 15084 MB/s May 14 09:25:56.586710 kernel: raid6: sse2x1 gen() 7311 MB/s May 14 09:25:56.586763 kernel: raid6: using algorithm sse2x2 gen() 15084 MB/s May 14 09:25:56.605615 kernel: raid6: .... xor() 9462 MB/s, rmw enabled May 14 09:25:56.605669 kernel: raid6: using ssse3x2 recovery algorithm May 14 09:25:56.627998 kernel: xor: measuring software checksum speed May 14 09:25:56.628063 kernel: prefetch64-sse : 16736 MB/sec May 14 09:25:56.630643 kernel: generic_sse : 16115 MB/sec May 14 09:25:56.630696 kernel: xor: using function: prefetch64-sse (16736 MB/sec) May 14 09:25:56.826850 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 09:25:56.834055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 09:25:56.839042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 09:25:56.864332 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 14 09:25:56.869572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 09:25:56.876033 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 09:25:56.896679 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation May 14 09:25:56.927879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 09:25:56.931462 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 09:25:57.002548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 09:25:57.008003 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 09:25:57.086800 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 14 09:25:57.126997 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 14 09:25:57.129569 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 14 09:25:57.129587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 09:25:57.129611 kernel: GPT:17805311 != 20971519 May 14 09:25:57.129623 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 09:25:57.129635 kernel: GPT:17805311 != 20971519 May 14 09:25:57.129653 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 09:25:57.129665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 09:25:57.127846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 09:25:57.127911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:25:57.129704 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 14 09:25:57.131181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 09:25:57.133668 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 09:25:57.151811 kernel: libata version 3.00 loaded. May 14 09:25:57.162813 kernel: ata_piix 0000:00:01.1: version 2.13 May 14 09:25:57.183211 kernel: scsi host0: ata_piix May 14 09:25:57.183338 kernel: scsi host1: ata_piix May 14 09:25:57.183439 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 May 14 09:25:57.183452 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 May 14 09:25:57.204545 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 09:25:57.226486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:25:57.252514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 09:25:57.263625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 09:25:57.272213 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 09:25:57.272848 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 09:25:57.274868 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 09:25:57.298561 disk-uuid[563]: Primary Header is updated. May 14 09:25:57.298561 disk-uuid[563]: Secondary Entries is updated. May 14 09:25:57.298561 disk-uuid[563]: Secondary Header is updated. May 14 09:25:57.307876 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 09:25:57.426846 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 09:25:57.462035 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 09:25:57.463302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 09:25:57.464520 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 09:25:57.468995 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 09:25:57.509336 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 09:25:58.324997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 09:25:58.326404 disk-uuid[564]: The operation has completed successfully. May 14 09:25:58.406068 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 09:25:58.406923 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 09:25:58.471529 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 09:25:58.507069 sh[589]: Success May 14 09:25:58.558036 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 09:25:58.558136 kernel: device-mapper: uevent: version 1.0.3 May 14 09:25:58.561628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 09:25:58.584809 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" May 14 09:25:58.684998 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 09:25:58.691928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 14 09:25:58.699634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 09:25:58.727959 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 09:25:58.728026 kernel: BTRFS: device fsid 522ba959-9153-4a92-926e-3277bc1060e7 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (601) May 14 09:25:58.734319 kernel: BTRFS info (device dm-0): first mount of filesystem 522ba959-9153-4a92-926e-3277bc1060e7 May 14 09:25:58.734378 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 09:25:58.736187 kernel: BTRFS info (device dm-0): using free-space-tree May 14 09:25:58.754488 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 09:25:58.756390 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 09:25:58.758022 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 09:25:58.759713 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 09:25:58.763715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 09:25:58.787794 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (625) May 14 09:25:58.793491 kernel: BTRFS info (device vda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 09:25:58.793586 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 09:25:58.793618 kernel: BTRFS info (device vda6): using free-space-tree May 14 09:25:58.813830 kernel: BTRFS info (device vda6): last unmount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 09:25:58.815622 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 09:25:58.821046 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 09:25:58.882675 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 09:25:58.885389 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 09:25:58.938594 systemd-networkd[773]: lo: Link UP May 14 09:25:58.938799 systemd-networkd[773]: lo: Gained carrier May 14 09:25:58.940899 systemd-networkd[773]: Enumeration completed May 14 09:25:58.941081 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 09:25:58.941499 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 09:25:58.941504 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 09:25:58.943959 systemd-networkd[773]: eth0: Link UP May 14 09:25:58.943963 systemd-networkd[773]: eth0: Gained carrier May 14 09:25:58.943971 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 09:25:58.950209 systemd[1]: Reached target network.target - Network. 
May 14 09:25:58.955862 systemd-networkd[773]: eth0: DHCPv4 address 172.24.4.30/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 14 09:25:59.044615 ignition[684]: Ignition 2.21.0 May 14 09:25:59.044628 ignition[684]: Stage: fetch-offline May 14 09:25:59.044660 ignition[684]: no configs at "/usr/lib/ignition/base.d" May 14 09:25:59.044668 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:25:59.047089 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 09:25:59.044752 ignition[684]: parsed url from cmdline: "" May 14 09:25:59.044756 ignition[684]: no config URL provided May 14 09:25:59.044781 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" May 14 09:25:59.045538 ignition[684]: no config at "/usr/lib/ignition/user.ign" May 14 09:25:59.049900 systemd-resolved[253]: Detected conflict on linux IN A 172.24.4.30 May 14 09:25:59.045544 ignition[684]: failed to fetch config: resource requires networking May 14 09:25:59.049918 systemd-resolved[253]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. May 14 09:25:59.045716 ignition[684]: Ignition finished successfully May 14 09:25:59.051027 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 14 09:25:59.078711 ignition[782]: Ignition 2.21.0 May 14 09:25:59.078860 ignition[782]: Stage: fetch May 14 09:25:59.079005 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 14 09:25:59.079015 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:25:59.079100 ignition[782]: parsed url from cmdline: "" May 14 09:25:59.079103 ignition[782]: no config URL provided May 14 09:25:59.079108 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" May 14 09:25:59.079116 ignition[782]: no config at "/usr/lib/ignition/user.ign" May 14 09:25:59.079212 ignition[782]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 14 09:25:59.079262 ignition[782]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 14 09:25:59.079309 ignition[782]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 14 09:25:59.440650 ignition[782]: GET result: OK May 14 09:25:59.440834 ignition[782]: parsing config with SHA512: 137696adbe81e1ddc15adcd436a599148abae34a6c36e84cfcd93b2cab6deeba3311f0c87781979a9fcba6dad47159bd2cf52822f5546bc1eacc6aebd9781561 May 14 09:25:59.451638 unknown[782]: fetched base config from "system" May 14 09:25:59.451655 unknown[782]: fetched base config from "system" May 14 09:25:59.452279 ignition[782]: fetch: fetch complete May 14 09:25:59.451664 unknown[782]: fetched user config from "openstack" May 14 09:25:59.452288 ignition[782]: fetch: fetch passed May 14 09:25:59.455990 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 09:25:59.452341 ignition[782]: Ignition finished successfully May 14 09:25:59.458130 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 09:25:59.493982 ignition[789]: Ignition 2.21.0 May 14 09:25:59.494010 ignition[789]: Stage: kargs May 14 09:25:59.494344 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 14 09:25:59.494367 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:25:59.497335 ignition[789]: kargs: kargs passed May 14 09:25:59.501230 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 14 09:25:59.497968 ignition[789]: Ignition finished successfully May 14 09:25:59.505320 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 09:25:59.557902 ignition[796]: Ignition 2.21.0 May 14 09:25:59.557936 ignition[796]: Stage: disks May 14 09:25:59.558238 ignition[796]: no configs at "/usr/lib/ignition/base.d" May 14 09:25:59.558261 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:25:59.563231 ignition[796]: disks: disks passed May 14 09:25:59.563342 ignition[796]: Ignition finished successfully May 14 09:25:59.566327 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 09:25:59.570177 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 09:25:59.571459 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 09:25:59.574329 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 09:25:59.577109 systemd[1]: Reached target sysinit.target - System Initialization. May 14 09:25:59.579485 systemd[1]: Reached target basic.target - Basic System. May 14 09:25:59.583959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 09:25:59.634978 systemd-fsck[805]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 14 09:25:59.647537 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 09:25:59.650925 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 09:25:59.848778 kernel: EXT4-fs (vda9): mounted filesystem 7fda6268-ffdc-406a-8662-dffb0e9a24fa r/w with ordered data mode. Quota mode: none. May 14 09:25:59.850933 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 09:25:59.857913 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 09:25:59.862606 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 09:25:59.876930 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 09:25:59.880131 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 09:25:59.883518 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 14 09:25:59.887362 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 09:25:59.887451 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 09:25:59.898188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 09:25:59.902988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 09:25:59.934666 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (813) May 14 09:25:59.934705 kernel: BTRFS info (device vda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 09:25:59.934728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 09:25:59.934750 kernel: BTRFS info (device vda6): using free-space-tree May 14 09:25:59.940152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 09:26:00.041844 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:00.043065 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory May 14 09:26:00.051133 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory May 14 09:26:00.057167 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory May 14 09:26:00.066069 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory May 14 09:26:00.127026 systemd-networkd[773]: eth0: Gained IPv6LL May 14 09:26:00.243448 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 09:26:00.247866 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 09:26:00.251984 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 09:26:00.277174 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 09:26:00.283948 kernel: BTRFS info (device vda6): last unmount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 09:26:00.300967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 09:26:00.326318 ignition[931]: INFO : Ignition 2.21.0 May 14 09:26:00.326318 ignition[931]: INFO : Stage: mount May 14 09:26:00.327448 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 09:26:00.327448 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:26:00.329934 ignition[931]: INFO : mount: mount passed May 14 09:26:00.329934 ignition[931]: INFO : Ignition finished successfully May 14 09:26:00.330044 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 09:26:01.077854 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:03.088848 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:07.100835 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:07.109837 coreos-metadata[815]: May 14 09:26:07.109 WARN failed to locate config-drive, using the metadata service API instead May 14 09:26:07.152548 coreos-metadata[815]: May 14 09:26:07.152 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 14 09:26:07.166405 coreos-metadata[815]: May 14 09:26:07.166 INFO Fetch successful May 14 09:26:07.170224 coreos-metadata[815]: May 14 09:26:07.168 INFO wrote hostname ci-4334-0-0-n-74e04034e7.novalocal to /sysroot/etc/hostname May 14 09:26:07.170335 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 14 09:26:07.170534 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 14 09:26:07.179941 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 09:26:07.214882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 09:26:07.252952 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (947) May 14 09:26:07.265842 kernel: BTRFS info (device vda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 09:26:07.265934 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 09:26:07.265965 kernel: BTRFS info (device vda6): using free-space-tree May 14 09:26:07.279117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 09:26:07.327034 ignition[965]: INFO : Ignition 2.21.0 May 14 09:26:07.327034 ignition[965]: INFO : Stage: files May 14 09:26:07.330443 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 09:26:07.330443 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:26:07.330443 ignition[965]: DEBUG : files: compiled without relabeling support, skipping May 14 09:26:07.349456 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 09:26:07.349456 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 09:26:07.353494 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 09:26:07.353494 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 09:26:07.353494 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 09:26:07.352876 unknown[965]: wrote ssh authorized keys file for user: core May 14 09:26:07.432817 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 09:26:07.435655 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 14 09:26:07.526151 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 09:26:07.876116 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 09:26:07.876116 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 09:26:07.881276 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 09:26:07.903629 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 09:26:07.903629 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 09:26:07.903629 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 09:26:08.570005 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 09:26:10.227125 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 09:26:10.227125 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 09:26:10.232058 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 09:26:10.235093 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 09:26:10.235093 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 09:26:10.235093 ignition[965]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 14 09:26:10.243024 ignition[965]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 14 09:26:10.243024 ignition[965]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 09:26:10.243024 ignition[965]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 09:26:10.243024 ignition[965]: INFO : files: files passed May 14 09:26:10.243024 ignition[965]: INFO : Ignition finished successfully May 14 09:26:10.237237 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 09:26:10.241905 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 09:26:10.245484 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 09:26:10.258097 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 09:26:10.258202 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 09:26:10.281726 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 09:26:10.281726 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 09:26:10.287559 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 09:26:10.286569 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 09:26:10.288602 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 09:26:10.291517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 09:26:10.360979 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 09:26:10.362312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 09:26:10.364279 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
May 14 09:26:10.365884 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 09:26:10.367023 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 09:26:10.368385 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 09:26:10.408230 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 09:26:10.412906 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 09:26:10.443138 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 09:26:10.444734 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 09:26:10.446858 systemd[1]: Stopped target timers.target - Timer Units. May 14 09:26:10.448740 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 09:26:10.449173 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 09:26:10.451857 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 09:26:10.453869 systemd[1]: Stopped target basic.target - Basic System. May 14 09:26:10.455716 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 09:26:10.457909 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 09:26:10.459517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 09:26:10.462647 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 09:26:10.465712 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 09:26:10.468891 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 09:26:10.471963 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 09:26:10.475287 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 09:26:10.478107 systemd[1]: Stopped target swap.target - Swaps. May 14 09:26:10.480402 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 09:26:10.480700 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 09:26:10.484015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 09:26:10.485992 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 09:26:10.488989 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 09:26:10.489837 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 09:26:10.492072 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 09:26:10.492449 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 09:26:10.496009 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 09:26:10.496434 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 09:26:10.500109 systemd[1]: ignition-files.service: Deactivated successfully. May 14 09:26:10.500485 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 09:26:10.505202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 09:26:10.508558 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 09:26:10.509025 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
May 14 09:26:10.526216 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 09:26:10.527959 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 09:26:10.528295 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 09:26:10.536084 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 09:26:10.536680 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 09:26:10.553707 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 09:26:10.554065 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 09:26:10.561786 ignition[1019]: INFO : Ignition 2.21.0 May 14 09:26:10.561786 ignition[1019]: INFO : Stage: umount May 14 09:26:10.564786 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 09:26:10.564786 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 14 09:26:10.566190 ignition[1019]: INFO : umount: umount passed May 14 09:26:10.566793 ignition[1019]: INFO : Ignition finished successfully May 14 09:26:10.568659 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 09:26:10.568849 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 09:26:10.569628 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 09:26:10.569684 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 09:26:10.570315 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 09:26:10.570362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 09:26:10.571412 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 09:26:10.571455 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 09:26:10.572588 systemd[1]: Stopped target network.target - Network. May 14 09:26:10.573575 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 09:26:10.573626 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 09:26:10.574730 systemd[1]: Stopped target paths.target - Path Units. May 14 09:26:10.576517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 09:26:10.582449 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 09:26:10.583072 systemd[1]: Stopped target slices.target - Slice Units. May 14 09:26:10.587394 systemd[1]: Stopped target sockets.target - Socket Units. May 14 09:26:10.589422 systemd[1]: iscsid.socket: Deactivated successfully. May 14 09:26:10.589508 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 09:26:10.591418 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 09:26:10.591495 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 09:26:10.593055 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 09:26:10.593193 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 09:26:10.595856 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 09:26:10.595959 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 09:26:10.605391 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 09:26:10.606537 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 09:26:10.611598 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 14 09:26:10.614698 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 09:26:10.615366 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 09:26:10.617948 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 09:26:10.618117 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 09:26:10.618218 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 09:26:10.620170 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 09:26:10.620694 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 09:26:10.622034 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 09:26:10.622072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 09:26:10.624922 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 09:26:10.625415 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 09:26:10.625488 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 09:26:10.626081 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 09:26:10.626124 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 09:26:10.629848 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 09:26:10.630525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 09:26:10.631707 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 09:26:10.631748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 09:26:10.633949 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 09:26:10.636500 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 09:26:10.636572 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 09:26:10.645470 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 09:26:10.647184 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 09:26:10.648096 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 09:26:10.648135 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 09:26:10.649379 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 09:26:10.649411 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 09:26:10.650506 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 09:26:10.650551 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 09:26:10.652161 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 09:26:10.652209 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 09:26:10.653432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 09:26:10.653476 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 09:26:10.655979 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 09:26:10.658050 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 09:26:10.658118 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
May 14 09:26:10.660075 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 09:26:10.660150 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 09:26:10.662115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 09:26:10.662163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:26:10.670099 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 14 09:26:10.670169 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 09:26:10.670214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 09:26:10.670578 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 09:26:10.670689 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 09:26:10.671724 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 09:26:10.671866 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 09:26:10.845314 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 09:26:10.845669 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 09:26:10.849512 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 09:26:10.852476 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 09:26:10.852641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 09:26:10.857385 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 09:26:10.924942 systemd[1]: Switching root. May 14 09:26:10.997025 systemd-journald[212]: Journal stopped May 14 09:26:12.660684 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). May 14 09:26:12.660737 kernel: SELinux: policy capability network_peer_controls=1 May 14 09:26:12.660757 kernel: SELinux: policy capability open_perms=1 May 14 09:26:12.660798 kernel: SELinux: policy capability extended_socket_class=1 May 14 09:26:12.660812 kernel: SELinux: policy capability always_check_network=0 May 14 09:26:12.660827 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 09:26:12.660839 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 09:26:12.660852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 09:26:12.660864 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 09:26:12.660875 kernel: SELinux: policy capability userspace_initial_context=0 May 14 09:26:12.660890 kernel: audit: type=1403 audit(1747214771.475:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 09:26:12.660907 systemd[1]: Successfully loaded SELinux policy in 58.459ms. May 14 09:26:12.660923 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 28.114ms. May 14 09:26:12.660937 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 09:26:12.660951 systemd[1]: Detected virtualization kvm. May 14 09:26:12.660963 systemd[1]: Detected architecture x86-64. May 14 09:26:12.660976 systemd[1]: Detected first boot. May 14 09:26:12.660989 systemd[1]: Hostname set to . 
May 14 09:26:12.661002 systemd[1]: Initializing machine ID from VM UUID. May 14 09:26:12.661015 zram_generator::config[1065]: No configuration found. May 14 09:26:12.661032 kernel: Guest personality initialized and is inactive May 14 09:26:12.661044 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 09:26:12.661056 kernel: Initialized host personality May 14 09:26:12.661068 kernel: NET: Registered PF_VSOCK protocol family May 14 09:26:12.661081 systemd[1]: Populated /etc with preset unit settings. May 14 09:26:12.661094 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 09:26:12.661108 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 09:26:12.661123 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 09:26:12.661138 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 09:26:12.661152 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 09:26:12.661165 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 09:26:12.661178 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 09:26:12.661191 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 09:26:12.661216 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 09:26:12.661230 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 09:26:12.661245 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 09:26:12.661259 systemd[1]: Created slice user.slice - User and Session Slice. May 14 09:26:12.661272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 09:26:12.661284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 09:26:12.661296 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 09:26:12.661309 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 09:26:12.661321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 09:26:12.661336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 09:26:12.661348 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 09:26:12.661362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 09:26:12.661374 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 09:26:12.661386 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 09:26:12.661399 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 09:26:12.661411 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 09:26:12.661423 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 09:26:12.661435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 09:26:12.661448 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 09:26:12.661461 systemd[1]: Reached target slices.target - Slice Units. May 14 09:26:12.661473 systemd[1]: Reached target swap.target - Swaps. 
May 14 09:26:12.661486 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 09:26:12.661498 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 09:26:12.661510 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 09:26:12.661523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 09:26:12.661535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 09:26:12.661547 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 09:26:12.661559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 09:26:12.661573 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 09:26:12.661586 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 09:26:12.661598 systemd[1]: Mounting media.mount - External Media Directory... May 14 09:26:12.661611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:12.661623 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 09:26:12.661635 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 09:26:12.661647 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 09:26:12.661660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 09:26:12.661674 systemd[1]: Reached target machines.target - Containers. May 14 09:26:12.661687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 09:26:12.661699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 09:26:12.661711 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 09:26:12.661723 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 09:26:12.661736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 09:26:12.661748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 09:26:12.661761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 09:26:12.663676 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 09:26:12.663696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 09:26:12.663711 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 09:26:12.663724 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 09:26:12.663738 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 09:26:12.663751 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 09:26:12.663779 systemd[1]: Stopped systemd-fsck-usr.service. May 14 09:26:12.663795 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 09:26:12.663808 systemd[1]: Starting systemd-journald.service - Journal Service... 
May 14 09:26:12.663824 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 09:26:12.663838 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 09:26:12.663851 kernel: fuse: init (API version 7.41) May 14 09:26:12.663865 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 09:26:12.663879 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 09:26:12.663894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 09:26:12.663907 systemd[1]: verity-setup.service: Deactivated successfully. May 14 09:26:12.663919 systemd[1]: Stopped verity-setup.service. May 14 09:26:12.663933 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:12.663947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 09:26:12.663960 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 09:26:12.663975 systemd[1]: Mounted media.mount - External Media Directory. May 14 09:26:12.663988 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 09:26:12.664000 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 09:26:12.664014 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 09:26:12.664026 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 09:26:12.664039 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 09:26:12.664052 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 09:26:12.664065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 09:26:12.664079 kernel: ACPI: bus type drm_connector registered May 14 09:26:12.664092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 09:26:12.664105 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 09:26:12.664117 kernel: loop: module loaded May 14 09:26:12.664130 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 09:26:12.664144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 09:26:12.664157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 09:26:12.664170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 09:26:12.664184 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 09:26:12.664198 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 09:26:12.664212 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 09:26:12.664224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 09:26:12.664238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 09:26:12.664252 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 09:26:12.664267 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 09:26:12.664280 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 09:26:12.664293 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 14 09:26:12.664328 systemd-journald[1152]: Collecting audit messages is disabled. May 14 09:26:12.664357 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 09:26:12.664371 systemd-journald[1152]: Journal started May 14 09:26:12.664398 systemd-journald[1152]: Runtime Journal (/run/log/journal/adf7ebdebda74ee2b3a4704d84d5e1a1) is 8M, max 78.5M, 70.5M free. May 14 09:26:12.242592 systemd[1]: Queued start job for default target multi-user.target. May 14 09:26:12.261969 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 09:26:12.262368 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 09:26:12.687388 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 09:26:12.687430 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 09:26:12.689961 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 09:26:12.695014 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 09:26:12.698784 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 09:26:12.698826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 09:26:12.710847 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 09:26:12.710898 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 09:26:12.727787 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 09:26:12.731804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 09:26:12.735824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 09:26:12.746872 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 09:26:12.753828 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 09:26:12.753890 systemd[1]: Started systemd-journald.service - Journal Service. May 14 09:26:12.757898 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 09:26:12.758523 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 09:26:12.759415 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 09:26:12.761184 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 09:26:12.776222 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 09:26:12.779380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 09:26:12.794857 kernel: loop0: detected capacity change from 0 to 146240 May 14 09:26:12.791465 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 09:26:12.811698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 09:26:12.812980 systemd-journald[1152]: Time spent on flushing to /var/log/journal/adf7ebdebda74ee2b3a4704d84d5e1a1 is 30.347ms for 981 entries. May 14 09:26:12.812980 systemd-journald[1152]: System Journal (/var/log/journal/adf7ebdebda74ee2b3a4704d84d5e1a1) is 8M, max 584.8M, 576.8M free. 
May 14 09:26:12.858867 systemd-journald[1152]: Received client request to flush runtime journal. May 14 09:26:12.862889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 09:26:12.878818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 09:26:12.883106 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 09:26:12.896798 kernel: loop1: detected capacity change from 0 to 113872 May 14 09:26:12.900685 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 09:26:12.902738 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 09:26:12.944569 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. May 14 09:26:12.944587 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. May 14 09:26:12.952376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 09:26:12.960805 kernel: loop2: detected capacity change from 0 to 218376 May 14 09:26:13.017811 kernel: loop3: detected capacity change from 0 to 8 May 14 09:26:13.032946 kernel: loop4: detected capacity change from 0 to 146240 May 14 09:26:13.113813 kernel: loop5: detected capacity change from 0 to 113872 May 14 09:26:13.144821 kernel: loop6: detected capacity change from 0 to 218376 May 14 09:26:13.210934 kernel: loop7: detected capacity change from 0 to 8 May 14 09:26:13.211707 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 14 09:26:13.212992 (sd-merge)[1227]: Merged extensions into '/usr'. May 14 09:26:13.220434 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... May 14 09:26:13.220455 systemd[1]: Reloading... May 14 09:26:13.310799 zram_generator::config[1249]: No configuration found. May 14 09:26:13.464150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 09:26:13.569798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 09:26:13.570148 systemd[1]: Reloading finished in 348 ms. May 14 09:26:13.595680 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 09:26:13.603015 systemd[1]: Starting ensure-sysext.service... May 14 09:26:13.606710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 09:26:13.638901 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)... May 14 09:26:13.638917 systemd[1]: Reloading... May 14 09:26:13.653883 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 09:26:13.654238 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 14 09:26:13.654638 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 09:26:13.655228 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 09:26:13.656237 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 09:26:13.656598 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. 
May 14 09:26:13.656718 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. May 14 09:26:13.665297 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. May 14 09:26:13.665425 systemd-tmpfiles[1309]: Skipping /boot May 14 09:26:13.680520 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. May 14 09:26:13.680631 systemd-tmpfiles[1309]: Skipping /boot May 14 09:26:13.705270 zram_generator::config[1335]: No configuration found. May 14 09:26:13.766005 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 09:26:13.847987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 09:26:13.952311 systemd[1]: Reloading finished in 313 ms. May 14 09:26:13.961056 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 09:26:13.962140 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 09:26:13.963053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 09:26:13.979337 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 09:26:13.983013 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 09:26:13.985626 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 09:26:13.990933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 09:26:13.993066 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 09:26:14.000935 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 09:26:14.011983 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:14.012185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 09:26:14.014801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 09:26:14.018787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 09:26:14.027431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 09:26:14.028922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 09:26:14.029047 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 09:26:14.029166 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:14.034795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:14.034991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 09:26:14.035165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 14 09:26:14.035270 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 09:26:14.042310 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 09:26:14.043458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:14.051821 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 09:26:14.059151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:14.059429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 09:26:14.068024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 09:26:14.069489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 09:26:14.069623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 09:26:14.069823 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 09:26:14.082471 systemd[1]: Finished ensure-sysext.service. May 14 09:26:14.085756 systemd-udevd[1400]: Using default interface naming scheme 'v255'. May 14 09:26:14.090627 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 09:26:14.092072 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 09:26:14.092574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 09:26:14.096819 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 09:26:14.097729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 09:26:14.097939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 09:26:14.100175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 09:26:14.104558 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 09:26:14.112480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 09:26:14.112690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 09:26:14.113520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 09:26:14.117264 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 09:26:14.121302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 09:26:14.144833 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 09:26:14.150829 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 09:26:14.153741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 14 09:26:14.161004 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 09:26:14.161618 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 09:26:14.177730 augenrules[1461]: No rules May 14 09:26:14.179794 systemd[1]: audit-rules.service: Deactivated successfully. May 14 09:26:14.180823 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 09:26:14.188669 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 09:26:14.281796 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 09:26:14.363171 systemd-networkd[1447]: lo: Link UP May 14 09:26:14.363181 systemd-networkd[1447]: lo: Gained carrier May 14 09:26:14.372947 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 09:26:14.373716 systemd[1]: Reached target time-set.target - System Time Set. May 14 09:26:14.384237 systemd-networkd[1447]: Enumeration completed May 14 09:26:14.384329 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 09:26:14.387985 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 09:26:14.390952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 09:26:14.417672 systemd-resolved[1399]: Positive Trust Anchors: May 14 09:26:14.418650 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 09:26:14.418757 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 09:26:14.425796 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 09:26:14.426983 systemd-resolved[1399]: Using system hostname 'ci-4334-0-0-n-74e04034e7.novalocal'. May 14 09:26:14.429382 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 09:26:14.430880 systemd[1]: Reached target network.target - Network. May 14 09:26:14.432167 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 09:26:14.433292 systemd[1]: Reached target sysinit.target - System Initialization. May 14 09:26:14.434104 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 09:26:14.435845 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 09:26:14.436380 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 14 09:26:14.439042 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 09:26:14.439740 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 09:26:14.440338 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 14 09:26:14.440983 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 09:26:14.441022 systemd[1]: Reached target paths.target - Path Units. May 14 09:26:14.441570 systemd[1]: Reached target timers.target - Timer Units. May 14 09:26:14.443130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 09:26:14.445138 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 09:26:14.449659 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 09:26:14.451344 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 09:26:14.451941 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 09:26:14.460404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 09:26:14.461398 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 09:26:14.464124 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 09:26:14.469685 systemd[1]: Reached target sockets.target - Socket Units. May 14 09:26:14.476025 systemd[1]: Reached target basic.target - Basic System. May 14 09:26:14.476542 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 09:26:14.476567 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 09:26:14.478209 systemd[1]: Starting containerd.service - containerd container runtime... May 14 09:26:14.481072 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 09:26:14.484266 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 09:26:14.485833 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 09:26:14.490323 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 09:26:14.499287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 09:26:14.499900 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 09:26:14.502835 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 14 09:26:14.504934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 09:26:14.511799 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:14.508913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 09:26:14.516057 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 09:26:14.517965 jq[1496]: false May 14 09:26:14.519953 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 09:26:14.529621 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 09:26:14.531539 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 09:26:14.533163 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 09:26:14.535900 systemd[1]: Starting update-engine.service - Update Engine... 
May 14 09:26:14.537579 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing passwd entry cache May 14 09:26:14.538832 oslogin_cache_refresh[1499]: Refreshing passwd entry cache May 14 09:26:14.542507 oslogin_cache_refresh[1499]: Failure getting users, quitting May 14 09:26:14.543889 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting users, quitting May 14 09:26:14.543889 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 09:26:14.543889 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing group entry cache May 14 09:26:14.542522 oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 09:26:14.542560 oslogin_cache_refresh[1499]: Refreshing group entry cache May 14 09:26:14.546307 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting groups, quitting May 14 09:26:14.546352 oslogin_cache_refresh[1499]: Failure getting groups, quitting May 14 09:26:14.546442 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 09:26:14.546368 oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 09:26:14.546882 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 09:26:14.559468 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 09:26:14.561081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 09:26:14.561278 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 09:26:14.561513 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 14 09:26:14.561684 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 14 09:26:14.592963 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 09:26:14.593908 extend-filesystems[1497]: Found loop4 May 14 09:26:14.593908 extend-filesystems[1497]: Found loop5 May 14 09:26:14.593908 extend-filesystems[1497]: Found loop6 May 14 09:26:14.593908 extend-filesystems[1497]: Found loop7 May 14 09:26:14.593908 extend-filesystems[1497]: Found vda May 14 09:26:14.593908 extend-filesystems[1497]: Found vda1 May 14 09:26:14.593908 extend-filesystems[1497]: Found vda2 May 14 09:26:14.593908 extend-filesystems[1497]: Found vda3 May 14 09:26:14.593908 extend-filesystems[1497]: Found usr May 14 09:26:14.593908 extend-filesystems[1497]: Found vda4 May 14 09:26:14.593908 extend-filesystems[1497]: Found vda6 May 14 09:26:14.593908 extend-filesystems[1497]: Found vda7 May 14 09:26:14.593908 extend-filesystems[1497]: Found vda9 May 14 09:26:14.593164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 09:26:14.632070 update_engine[1508]: I20250514 09:26:14.625594 1508 main.cc:92] Flatcar Update Engine starting May 14 09:26:14.632264 jq[1509]: true May 14 09:26:14.607155 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 09:26:14.608923 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 09:26:14.632445 systemd[1]: motdgen.service: Deactivated successfully. May 14 09:26:14.632627 jq[1523]: true May 14 09:26:14.632628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 14 09:26:14.648223 kernel: mousedev: PS/2 mouse device common for all mice May 14 09:26:14.648293 tar[1513]: linux-amd64/LICENSE May 14 09:26:14.648293 tar[1513]: linux-amd64/helm May 14 09:26:14.649123 dbus-daemon[1494]: [system] SELinux support is enabled May 14 09:26:14.650015 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 09:26:14.660416 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 09:26:14.660443 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 09:26:14.661750 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 09:26:14.661788 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 09:26:14.668451 systemd-logind[1506]: New seat seat0. May 14 09:26:14.670330 systemd[1]: Started systemd-logind.service - User Login Management. May 14 09:26:14.673128 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 09:26:14.676238 systemd[1]: Started update-engine.service - Update Engine. May 14 09:26:14.677027 update_engine[1508]: I20250514 09:26:14.676973 1508 update_check_scheduler.cc:74] Next update check in 7m10s May 14 09:26:14.692783 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 09:26:14.680995 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 09:26:14.710816 kernel: ACPI: button: Power Button [PWRF] May 14 09:26:14.713919 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 09:26:14.713929 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 09:26:14.718366 systemd-networkd[1447]: eth0: Link UP May 14 09:26:14.718567 systemd-networkd[1447]: eth0: Gained carrier May 14 09:26:14.718587 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 09:26:14.737521 systemd-networkd[1447]: eth0: DHCPv4 address 172.24.4.30/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 14 09:26:14.738503 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 09:26:14.738976 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 09:26:14.782574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 09:26:14.788197 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 09:26:14.821010 bash[1551]: Updated "/home/core/.ssh/authorized_keys" May 14 09:26:14.825847 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 09:26:14.860805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 09:26:14.864361 systemd[1]: Starting sshkeys.service... 
May 14 09:26:14.894803 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 14 09:26:14.966822 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 09:26:14.981317 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:14.906178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 09:26:14.927506 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 09:26:14.933060 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 09:26:15.006959 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 09:26:15.135138 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 09:26:15.165041 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 09:26:15.180220 systemd-logind[1506]: Watching system buttons on /dev/input/event2 (Power Button) May 14 09:26:15.248783 containerd[1530]: time="2025-05-14T09:26:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 09:26:15.261037 containerd[1530]: time="2025-05-14T09:26:15.260991655Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 09:26:15.312318 containerd[1530]: time="2025-05-14T09:26:15.312277422Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.778µs" May 14 09:26:15.313057 containerd[1530]: time="2025-05-14T09:26:15.313031977Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 09:26:15.313130 containerd[1530]: time="2025-05-14T09:26:15.313114972Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 09:26:15.313409 containerd[1530]: time="2025-05-14T09:26:15.313389748Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 09:26:15.313475 containerd[1530]: time="2025-05-14T09:26:15.313460601Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 09:26:15.313546 containerd[1530]: time="2025-05-14T09:26:15.313531544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 09:26:15.313659 containerd[1530]: time="2025-05-14T09:26:15.313640248Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 09:26:15.313721 containerd[1530]: time="2025-05-14T09:26:15.313707353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 09:26:15.313988 containerd[1530]: time="2025-05-14T09:26:15.313965858Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 09:26:15.314048 containerd[1530]: time="2025-05-14T09:26:15.314034587Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 09:26:15.314103 containerd[1530]: time="2025-05-14T09:26:15.314089330Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 09:26:15.314153 containerd[1530]: time="2025-05-14T09:26:15.314141618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 09:26:15.314288 containerd[1530]: time="2025-05-14T09:26:15.314271161Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 09:26:15.314538 containerd[1530]: time="2025-05-14T09:26:15.314519727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 09:26:15.314618 containerd[1530]: time="2025-05-14T09:26:15.314601601Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 09:26:15.314671 containerd[1530]: time="2025-05-14T09:26:15.314658818Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 09:26:15.314750 containerd[1530]: time="2025-05-14T09:26:15.314735592Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 09:26:15.315179 containerd[1530]: time="2025-05-14T09:26:15.315160960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 09:26:15.315285 containerd[1530]: time="2025-05-14T09:26:15.315269553Z" level=info msg="metadata content store policy set" policy=shared May 14 09:26:15.333784 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 14 09:26:15.333827 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 14 09:26:15.374790 kernel: Console: switching to colour dummy device 80x25 May 14 09:26:15.374848 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 14 09:26:15.374866 kernel: [drm] features: -context_init May 14 09:26:15.378747 systemd-vconsole-setup[1585]: KD_FONT_OP_SET failed, fonts will not be copied to tty2: Function not implemented May 14 09:26:15.378990 systemd-vconsole-setup[1585]: KD_FONT_OP_SET failed, fonts will not be copied to tty3: Function not implemented May 14 09:26:15.379021 systemd-vconsole-setup[1585]: KD_FONT_OP_SET failed, fonts will not be copied to tty4: Function not implemented May 14 09:26:15.379049 systemd-vconsole-setup[1585]: KD_FONT_OP_SET failed, fonts will not be copied to tty5: Function not implemented May 14 09:26:15.379070 systemd-vconsole-setup[1585]: KD_FONT_OP_SET failed, fonts will not be copied to tty6: Function not implemented May 14 09:26:15.380585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:26:15.381804 kernel: [drm] number of scanouts: 1 May 14 09:26:15.384926 kernel: [drm] number of cap sets: 0 May 14 09:26:15.389877 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 14 09:26:15.393840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 09:26:15.394113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:26:15.394971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 14 09:26:15.398733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 09:26:15.402491 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 09:26:15.481091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 09:26:15.566164 containerd[1530]: time="2025-05-14T09:26:15.566124001Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 09:26:15.566335 containerd[1530]: time="2025-05-14T09:26:15.566317103Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 09:26:15.566475 containerd[1530]: time="2025-05-14T09:26:15.566458538Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567791248Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567813710Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567826454Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567841783Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567901875Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567917675Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567932242Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567955356Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.567969402Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.568097011Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.568117440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.568132698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.568143969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 09:26:15.568787 containerd[1530]: time="2025-05-14T09:26:15.568158607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568169497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 09:26:15.569117 
containerd[1530]: time="2025-05-14T09:26:15.568182361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568193081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568205224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568215894Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568226354Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568287178Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568300904Z" level=info msg="Start snapshots syncer" May 14 09:26:15.569117 containerd[1530]: time="2025-05-14T09:26:15.568320510Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 09:26:15.569316 containerd[1530]: time="2025-05-14T09:26:15.568549710Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 09:26:15.569316 containerd[1530]: time="2025-05-14T09:26:15.568603591Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 09:26:15.569442 containerd[1530]: time="2025-05-14T09:26:15.568670968Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim 
type=io.containerd.sandbox.controller.v1 May 14 09:26:15.569649 containerd[1530]: time="2025-05-14T09:26:15.568759664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 09:26:15.569649 containerd[1530]: time="2025-05-14T09:26:15.569586024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 09:26:15.569649 containerd[1530]: time="2025-05-14T09:26:15.569603096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 09:26:15.569649 containerd[1530]: time="2025-05-14T09:26:15.569614297Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 09:26:15.569649 containerd[1530]: time="2025-05-14T09:26:15.569627432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 09:26:15.569832 containerd[1530]: time="2025-05-14T09:26:15.569813971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 09:26:15.571776 containerd[1530]: time="2025-05-14T09:26:15.569990392Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 09:26:15.571776 containerd[1530]: time="2025-05-14T09:26:15.570025558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 09:26:15.571776 containerd[1530]: time="2025-05-14T09:26:15.570038763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.571878133Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.571924329Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.571983671Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.571996004Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572006924Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572015761Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572025138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572054674Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572075232Z" level=info msg="runtime interface created" May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572081634Z" level=info msg="created NRI interface" May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572092435Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572104417Z" level=info msg="Connect containerd service" May 14 09:26:15.572791 containerd[1530]: time="2025-05-14T09:26:15.572146837Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 09:26:15.573512 containerd[1530]: time="2025-05-14T09:26:15.573491348Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 09:26:15.634372 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 09:26:15.659001 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 09:26:15.662058 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 09:26:15.665571 systemd[1]: Started sshd@0-172.24.4.30:22-172.24.4.1:56540.service - OpenSSH per-connection server daemon (172.24.4.1:56540). May 14 09:26:15.672197 tar[1513]: linux-amd64/README.md May 14 09:26:15.688669 systemd[1]: issuegen.service: Deactivated successfully. May 14 09:26:15.688949 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 09:26:15.692695 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 09:26:15.693260 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 09:26:15.720606 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 09:26:15.724160 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 09:26:15.728300 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 09:26:15.728937 systemd[1]: Reached target getty.target - Login Prompts. May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.138941520Z" level=info msg="Start subscribing containerd event" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139031238Z" level=info msg="Start recovering state" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139292568Z" level=info msg="Start event monitor" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139330860Z" level=info msg="Start cni network conf syncer for default" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139354324Z" level=info msg="Start streaming server" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139383890Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139401483Z" level=info msg="runtime interface starting up..." May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139415669Z" level=info msg="starting plugins..." May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.139443361Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.140247028Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.140344211Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 09:26:16.143624 containerd[1530]: time="2025-05-14T09:26:16.141760577Z" level=info msg="containerd successfully booted in 0.893325s" May 14 09:26:16.140604 systemd[1]: Started containerd.service - containerd container runtime. 
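The containerd startup above ends with the CRI plugin warning that no network config was found in /etc/cni/net.d, so pod networking cannot come up yet. A small illustrative check (assuming it runs on the node, and using the confDir printed in the cri config dump above) that looks for the same kinds of files the CRI plugin would load:

```python
from pathlib import Path

# /etc/cni/net.d is the confDir shown in the cri plugin config dump above.
conf_dir = Path("/etc/cni/net.d")

# The CRI plugin picks up .conf, .conflist and .json files from this directory;
# an empty listing matches the "cni plugin not initialized" warning in the log.
configs = sorted(p for p in conf_dir.glob("*")
                 if p.suffix in {".conf", ".conflist", ".json"})
if not configs:
    print(f"no CNI network config under {conf_dir}; pod sandboxes will fail to start")
else:
    for p in configs:
        print("found CNI config:", p.name)
```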
May 14 09:26:16.416828 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:16.416987 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:16.575205 systemd-networkd[1447]: eth0: Gained IPv6LL May 14 09:26:16.576332 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 09:26:16.580596 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 09:26:16.582613 systemd[1]: Reached target network-online.target - Network is Online. May 14 09:26:16.587231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:26:16.589979 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 09:26:16.660868 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 09:26:17.081688 sshd[1611]: Accepted publickey for core from 172.24.4.1 port 56540 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:17.086968 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:17.130957 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 09:26:17.130988 systemd-logind[1506]: New session 1 of user core. May 14 09:26:17.137924 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 09:26:17.160206 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 09:26:17.164035 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 09:26:17.176602 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 09:26:17.182624 systemd-logind[1506]: New session c1 of user core. May 14 09:26:17.379460 systemd[1648]: Queued start job for default target default.target. May 14 09:26:17.384711 systemd[1648]: Created slice app.slice - User Application Slice. May 14 09:26:17.384895 systemd[1648]: Reached target paths.target - Paths. May 14 09:26:17.384941 systemd[1648]: Reached target timers.target - Timers. May 14 09:26:17.386365 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 09:26:17.426147 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 09:26:17.426546 systemd[1648]: Reached target sockets.target - Sockets. May 14 09:26:17.426740 systemd[1648]: Reached target basic.target - Basic System. May 14 09:26:17.426996 systemd[1648]: Reached target default.target - Main User Target. May 14 09:26:17.427007 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 09:26:17.427245 systemd[1648]: Startup finished in 229ms. May 14 09:26:17.441465 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 09:26:17.801861 systemd[1]: Started sshd@1-172.24.4.30:22-172.24.4.1:41356.service - OpenSSH per-connection server daemon (172.24.4.1:41356). May 14 09:26:18.440887 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:18.441007 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:18.969819 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 41356 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:18.972310 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:18.984441 systemd-logind[1506]: New session 2 of user core. May 14 09:26:19.000240 systemd[1]: Started session-2.scope - Session 2 of User core. 
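sshd accepts a publickey login for core and a per-user systemd manager is started. As an illustrative aside for log analysis, a sketch that parses the "Accepted publickey" line format seen above into user, source address, port, and key fingerprint (the regex is written only against this journal's format):

```python
import re

# Format matches the sshd "Accepted publickey" entry in the journal above.
line = ("sshd[1611]: Accepted publickey for core from 172.24.4.1 port 56540 ssh2: "
        "RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc")

m = re.search(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) port (?P<port>\d+) ssh2: "
    r"(?P<keytype>\S+) (?P<fingerprint>\S+)",
    line,
)
if m:
    print(m.groupdict())
```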
May 14 09:26:19.157955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:26:19.175660 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 09:26:19.611201 sshd[1663]: Connection closed by 172.24.4.1 port 41356 May 14 09:26:19.612121 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 14 09:26:19.632585 systemd[1]: sshd@1-172.24.4.30:22-172.24.4.1:41356.service: Deactivated successfully. May 14 09:26:19.636564 systemd[1]: session-2.scope: Deactivated successfully. May 14 09:26:19.639357 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit. May 14 09:26:19.645656 systemd[1]: Started sshd@2-172.24.4.30:22-172.24.4.1:41366.service - OpenSSH per-connection server daemon (172.24.4.1:41366). May 14 09:26:19.651458 systemd-logind[1506]: Removed session 2. May 14 09:26:20.811270 login[1621]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 14 09:26:20.813488 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 14 09:26:20.824214 systemd-logind[1506]: New session 3 of user core. May 14 09:26:20.838355 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 09:26:20.845371 systemd-logind[1506]: New session 4 of user core. May 14 09:26:20.852176 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 09:26:20.934275 kubelet[1669]: E0514 09:26:20.934233 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 09:26:20.938212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 09:26:20.938515 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 09:26:20.939722 systemd[1]: kubelet.service: Consumed 2.163s CPU time, 252.1M memory peak. May 14 09:26:21.329097 sshd[1679]: Accepted publickey for core from 172.24.4.1 port 41366 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:21.331822 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:21.343812 systemd-logind[1506]: New session 5 of user core. May 14 09:26:21.355343 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 09:26:21.970100 sshd[1709]: Connection closed by 172.24.4.1 port 41366 May 14 09:26:21.971096 sshd-session[1679]: pam_unix(sshd:session): session closed for user core May 14 09:26:21.978569 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit. May 14 09:26:21.979656 systemd[1]: sshd@2-172.24.4.30:22-172.24.4.1:41366.service: Deactivated successfully. May 14 09:26:21.983357 systemd[1]: session-5.scope: Deactivated successfully. May 14 09:26:21.987519 systemd-logind[1506]: Removed session 5. 
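The kubelet above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join, so the failure repeats until the node is bootstrapped. A minimal sketch of the same pre-flight check, using only the path from the logged error:

```python
from pathlib import Path

# Path taken from the kubelet error in the journal above.
config = Path("/var/lib/kubelet/config.yaml")

if not config.is_file():
    # Mirrors the "no such file or directory" failure logged by run.go:72.
    print(f"{config} missing: kubelet will keep exiting until kubeadm writes it")
else:
    print(f"{config} present ({config.stat().st_size} bytes)")
```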
May 14 09:26:22.463134 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:22.463267 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 14 09:26:22.477570 coreos-metadata[1563]: May 14 09:26:22.477 WARN failed to locate config-drive, using the metadata service API instead May 14 09:26:22.480659 coreos-metadata[1493]: May 14 09:26:22.480 WARN failed to locate config-drive, using the metadata service API instead May 14 09:26:22.524280 coreos-metadata[1563]: May 14 09:26:22.523 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 14 09:26:22.525622 coreos-metadata[1493]: May 14 09:26:22.525 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 14 09:26:22.690301 coreos-metadata[1563]: May 14 09:26:22.690 INFO Fetch successful May 14 09:26:22.690716 coreos-metadata[1563]: May 14 09:26:22.690 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 14 09:26:22.704269 coreos-metadata[1563]: May 14 09:26:22.704 INFO Fetch successful May 14 09:26:22.712556 coreos-metadata[1493]: May 14 09:26:22.712 INFO Fetch successful May 14 09:26:22.712709 coreos-metadata[1493]: May 14 09:26:22.712 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 14 09:26:22.712947 unknown[1563]: wrote ssh authorized keys file for user: core May 14 09:26:22.731365 coreos-metadata[1493]: May 14 09:26:22.731 INFO Fetch successful May 14 09:26:22.732108 coreos-metadata[1493]: May 14 09:26:22.731 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 14 09:26:22.746190 coreos-metadata[1493]: May 14 09:26:22.746 INFO Fetch successful May 14 09:26:22.746472 coreos-metadata[1493]: May 14 09:26:22.746 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 14 09:26:22.761942 coreos-metadata[1493]: May 14 09:26:22.761 INFO Fetch successful May 14 09:26:22.762526 coreos-metadata[1493]: May 14 09:26:22.762 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 14 09:26:22.763950 update-ssh-keys[1719]: Updated "/home/core/.ssh/authorized_keys" May 14 09:26:22.765084 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 09:26:22.771079 systemd[1]: Finished sshkeys.service. May 14 09:26:22.777687 coreos-metadata[1493]: May 14 09:26:22.777 INFO Fetch successful May 14 09:26:22.778243 coreos-metadata[1493]: May 14 09:26:22.778 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 14 09:26:22.789726 coreos-metadata[1493]: May 14 09:26:22.789 INFO Fetch successful May 14 09:26:22.839548 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 09:26:22.842466 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 09:26:22.843631 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 09:26:22.844434 systemd[1]: Startup finished in 3.735s (kernel) + 15.792s (initrd) + 11.423s (userspace) = 30.951s. May 14 09:26:31.189951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 09:26:31.193144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:26:31.652211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
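coreos-metadata fails to locate a config-drive and falls back to the metadata service at 169.254.169.254, fetching the hostname, instance identifiers, addresses, and the core user's SSH key. A rough sketch of that fallback using only the endpoints that appear in the log (it assumes it runs on the instance itself, where the link-local address is reachable):

```python
from urllib.request import urlopen

BASE = "http://169.254.169.254"
# Endpoints taken from the coreos-metadata fetches in the journal above.
paths = [
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/instance-type",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
    "/latest/meta-data/public-keys/0/openssh-key",
]

for path in paths:
    try:
        with urlopen(BASE + path, timeout=2) as resp:
            print(path, "->", resp.read().decode().strip())
    except OSError as exc:  # covers URLError and socket timeouts
        print(path, "failed:", exc)
```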
May 14 09:26:31.676362 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 09:26:31.782489 kubelet[1735]: E0514 09:26:31.782404 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 09:26:31.790628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 09:26:31.791024 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 09:26:31.792065 systemd[1]: kubelet.service: Consumed 347ms CPU time, 102.1M memory peak. May 14 09:26:31.991060 systemd[1]: Started sshd@3-172.24.4.30:22-172.24.4.1:58958.service - OpenSSH per-connection server daemon (172.24.4.1:58958). May 14 09:26:33.417827 sshd[1743]: Accepted publickey for core from 172.24.4.1 port 58958 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:33.419869 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:33.428804 systemd-logind[1506]: New session 6 of user core. May 14 09:26:33.438039 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 09:26:34.058843 sshd[1745]: Connection closed by 172.24.4.1 port 58958 May 14 09:26:34.059273 sshd-session[1743]: pam_unix(sshd:session): session closed for user core May 14 09:26:34.075712 systemd[1]: sshd@3-172.24.4.30:22-172.24.4.1:58958.service: Deactivated successfully. May 14 09:26:34.079275 systemd[1]: session-6.scope: Deactivated successfully. May 14 09:26:34.080816 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit. May 14 09:26:34.086376 systemd[1]: Started sshd@4-172.24.4.30:22-172.24.4.1:34960.service - OpenSSH per-connection server daemon (172.24.4.1:34960). May 14 09:26:34.089241 systemd-logind[1506]: Removed session 6. May 14 09:26:35.432554 sshd[1751]: Accepted publickey for core from 172.24.4.1 port 34960 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:35.435270 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:35.447873 systemd-logind[1506]: New session 7 of user core. May 14 09:26:35.455224 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 09:26:36.075365 sshd[1753]: Connection closed by 172.24.4.1 port 34960 May 14 09:26:36.075209 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 14 09:26:36.092105 systemd[1]: sshd@4-172.24.4.30:22-172.24.4.1:34960.service: Deactivated successfully. May 14 09:26:36.096305 systemd[1]: session-7.scope: Deactivated successfully. May 14 09:26:36.099569 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. May 14 09:26:36.104961 systemd[1]: Started sshd@5-172.24.4.30:22-172.24.4.1:34964.service - OpenSSH per-connection server daemon (172.24.4.1:34964). May 14 09:26:36.108675 systemd-logind[1506]: Removed session 7. May 14 09:26:37.558231 sshd[1759]: Accepted publickey for core from 172.24.4.1 port 34964 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:37.560591 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:37.572861 systemd-logind[1506]: New session 8 of user core. 
May 14 09:26:37.581095 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 09:26:38.420532 sshd[1761]: Connection closed by 172.24.4.1 port 34964 May 14 09:26:38.422339 sshd-session[1759]: pam_unix(sshd:session): session closed for user core May 14 09:26:38.436932 systemd[1]: sshd@5-172.24.4.30:22-172.24.4.1:34964.service: Deactivated successfully. May 14 09:26:38.440515 systemd[1]: session-8.scope: Deactivated successfully. May 14 09:26:38.443958 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. May 14 09:26:38.447210 systemd-logind[1506]: Removed session 8. May 14 09:26:38.449564 systemd[1]: Started sshd@6-172.24.4.30:22-172.24.4.1:34968.service - OpenSSH per-connection server daemon (172.24.4.1:34968). May 14 09:26:39.916924 sshd[1767]: Accepted publickey for core from 172.24.4.1 port 34968 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:39.919609 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:39.930720 systemd-logind[1506]: New session 9 of user core. May 14 09:26:39.945084 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 09:26:40.381826 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 09:26:40.382442 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 09:26:40.403760 sudo[1770]: pam_unix(sudo:session): session closed for user root May 14 09:26:40.664011 sshd[1769]: Connection closed by 172.24.4.1 port 34968 May 14 09:26:40.665684 sshd-session[1767]: pam_unix(sshd:session): session closed for user core May 14 09:26:40.682526 systemd[1]: sshd@6-172.24.4.30:22-172.24.4.1:34968.service: Deactivated successfully. May 14 09:26:40.686501 systemd[1]: session-9.scope: Deactivated successfully. May 14 09:26:40.690039 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. May 14 09:26:40.693908 systemd-logind[1506]: Removed session 9. May 14 09:26:40.697319 systemd[1]: Started sshd@7-172.24.4.30:22-172.24.4.1:34974.service - OpenSSH per-connection server daemon (172.24.4.1:34974). May 14 09:26:41.826085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 09:26:41.829717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:26:42.020735 sshd[1776]: Accepted publickey for core from 172.24.4.1 port 34974 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:42.023334 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:42.034025 systemd-logind[1506]: New session 10 of user core. May 14 09:26:42.043049 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 09:26:42.223478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 09:26:42.230215 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 09:26:42.356231 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 09:26:42.357184 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 09:26:42.368491 kubelet[1787]: E0514 09:26:42.368336 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 09:26:42.372685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 09:26:42.373024 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 09:26:42.373925 systemd[1]: kubelet.service: Consumed 275ms CPU time, 103.9M memory peak. May 14 09:26:42.378542 sudo[1794]: pam_unix(sudo:session): session closed for user root May 14 09:26:42.388627 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 09:26:42.389252 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 09:26:42.403528 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 09:26:42.458414 augenrules[1817]: No rules May 14 09:26:42.460000 systemd[1]: audit-rules.service: Deactivated successfully. May 14 09:26:42.460518 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 09:26:42.462344 sudo[1793]: pam_unix(sudo:session): session closed for user root May 14 09:26:42.606142 sshd[1781]: Connection closed by 172.24.4.1 port 34974 May 14 09:26:42.608075 sshd-session[1776]: pam_unix(sshd:session): session closed for user core May 14 09:26:42.622411 systemd[1]: sshd@7-172.24.4.30:22-172.24.4.1:34974.service: Deactivated successfully. May 14 09:26:42.625478 systemd[1]: session-10.scope: Deactivated successfully. May 14 09:26:42.627611 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. May 14 09:26:42.633428 systemd[1]: Started sshd@8-172.24.4.30:22-172.24.4.1:34978.service - OpenSSH per-connection server daemon (172.24.4.1:34978). May 14 09:26:42.635486 systemd-logind[1506]: Removed session 10. May 14 09:26:43.915889 sshd[1826]: Accepted publickey for core from 172.24.4.1 port 34978 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:26:43.918419 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:26:43.928393 systemd-logind[1506]: New session 11 of user core. May 14 09:26:43.938038 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 09:26:44.406519 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 09:26:44.407158 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 09:26:45.310182 systemd[1]: Starting docker.service - Docker Application Container Engine... 
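The sudo lines above remove two audit rule fragments and restart audit-rules, after which augenrules reports "No rules". augenrules builds the loaded ruleset by concatenating the *.rules fragments under /etc/audit/rules.d in lexical order; a hedged sketch of that assembly step (illustrative only, not the real tool):

```python
from pathlib import Path

rules_dir = Path("/etc/audit/rules.d")

# With 80-selinux.rules and 99-default.rules removed (see the sudo lines above),
# nothing is left to concatenate and the effective ruleset is empty ("No rules").
fragments = sorted(rules_dir.glob("*.rules"))
combined = "".join(p.read_text() for p in fragments)

print(f"{len(fragments)} fragment(s) found")
print(combined if combined.strip() else "No rules")
```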
May 14 09:26:45.325373 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 09:26:45.928330 dockerd[1847]: time="2025-05-14T09:26:45.928247115Z" level=info msg="Starting up" May 14 09:26:45.929899 dockerd[1847]: time="2025-05-14T09:26:45.929868476Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 09:26:46.015188 systemd[1]: var-lib-docker-metacopy\x2dcheck4248704641-merged.mount: Deactivated successfully. May 14 09:26:46.044733 dockerd[1847]: time="2025-05-14T09:26:46.044642444Z" level=info msg="Loading containers: start." May 14 09:26:46.069829 kernel: Initializing XFRM netlink socket May 14 09:26:46.420980 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 14 09:26:46.512063 systemd-networkd[1447]: docker0: Link UP May 14 09:26:46.522647 dockerd[1847]: time="2025-05-14T09:26:46.522523520Z" level=info msg="Loading containers: done." May 14 09:26:47.110987 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck544070953-merged.mount: Deactivated successfully. May 14 09:26:47.111292 systemd-resolved[1399]: Clock change detected. Flushing caches. May 14 09:26:47.112824 systemd-timesyncd[1415]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org). May 14 09:26:47.112919 systemd-timesyncd[1415]: Initial clock synchronization to Wed 2025-05-14 09:26:47.110358 UTC. May 14 09:26:47.115341 dockerd[1847]: time="2025-05-14T09:26:47.114036968Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 09:26:47.115341 dockerd[1847]: time="2025-05-14T09:26:47.114183122Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 09:26:47.115341 dockerd[1847]: time="2025-05-14T09:26:47.114413073Z" level=info msg="Initializing buildkit" May 14 09:26:47.277827 dockerd[1847]: time="2025-05-14T09:26:47.277569123Z" level=info msg="Completed buildkit initialization" May 14 09:26:47.304727 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 09:26:47.305084 dockerd[1847]: time="2025-05-14T09:26:47.303576940Z" level=info msg="Daemon has completed initialization" May 14 09:26:47.305084 dockerd[1847]: time="2025-05-14T09:26:47.304870606Z" level=info msg="API listen on /run/docker.sock" May 14 09:26:49.962583 containerd[1530]: time="2025-05-14T09:26:49.962463712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 09:26:50.752741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29679352.mount: Deactivated successfully. 
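dockerd finishes initialization and reports "API listen on /run/docker.sock". A small probe of that socket with a raw HTTP/1.0 request (assuming root access to the socket; /version is a standard Docker Engine API endpoint):

```python
import json
import socket

# Socket path taken from the "API listen on /run/docker.sock" line above.
SOCK = "/run/docker.sock"

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk

headers, _, body = raw.partition(b"\r\n\r\n")
print(json.loads(body)["Version"])  # e.g. "28.0.1", the version dockerd logged above
```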
May 14 09:26:52.520439 containerd[1530]: time="2025-05-14T09:26:52.520387717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:52.521426 containerd[1530]: time="2025-05-14T09:26:52.521396740Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" May 14 09:26:52.523340 containerd[1530]: time="2025-05-14T09:26:52.523118619Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:52.530646 containerd[1530]: time="2025-05-14T09:26:52.530587036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:52.532337 containerd[1530]: time="2025-05-14T09:26:52.532207846Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.569647874s" May 14 09:26:52.532337 containerd[1530]: time="2025-05-14T09:26:52.532296001Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 14 09:26:52.533274 containerd[1530]: time="2025-05-14T09:26:52.533169620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 09:26:53.157522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 09:26:53.164626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:26:54.008070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:26:54.022834 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 09:26:54.107010 kubelet[2111]: E0514 09:26:54.106923 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 09:26:54.108855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 09:26:54.109127 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 09:26:54.109966 systemd[1]: kubelet.service: Consumed 268ms CPU time, 103.1M memory peak. 
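The pull above reports 28,679,679 bytes for registry.k8s.io/kube-apiserver:v1.32.4 fetched in 2.569647874s. A quick worked check of the effective throughput implied by those two logged numbers:

```python
# Numbers copied from the containerd "Pulled image" line above.
size_bytes = 28_679_679
elapsed_s = 2.569_647_874

mib_per_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")   # roughly 10.6 MiB/s for this pull
```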
May 14 09:26:54.975866 containerd[1530]: time="2025-05-14T09:26:54.975825801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:54.977679 containerd[1530]: time="2025-05-14T09:26:54.977653799Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" May 14 09:26:54.979023 containerd[1530]: time="2025-05-14T09:26:54.978980297Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:54.982261 containerd[1530]: time="2025-05-14T09:26:54.982141867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:54.983241 containerd[1530]: time="2025-05-14T09:26:54.983193930Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.449962113s" May 14 09:26:54.983288 containerd[1530]: time="2025-05-14T09:26:54.983244044Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 14 09:26:54.984012 containerd[1530]: time="2025-05-14T09:26:54.983808412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 09:26:56.726197 containerd[1530]: time="2025-05-14T09:26:56.726107033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:56.728284 containerd[1530]: time="2025-05-14T09:26:56.728241987Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" May 14 09:26:56.730242 containerd[1530]: time="2025-05-14T09:26:56.730148142Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:56.734318 containerd[1530]: time="2025-05-14T09:26:56.733091041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:56.734318 containerd[1530]: time="2025-05-14T09:26:56.734147653Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.750310196s" May 14 09:26:56.734318 containerd[1530]: time="2025-05-14T09:26:56.734174974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 14 09:26:56.734790 
containerd[1530]: time="2025-05-14T09:26:56.734763819Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 09:26:58.111846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229134142.mount: Deactivated successfully. May 14 09:26:58.928407 containerd[1530]: time="2025-05-14T09:26:58.928337686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:58.929609 containerd[1530]: time="2025-05-14T09:26:58.929578623Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" May 14 09:26:58.931122 containerd[1530]: time="2025-05-14T09:26:58.931091942Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:58.934094 containerd[1530]: time="2025-05-14T09:26:58.934036043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:26:58.934549 containerd[1530]: time="2025-05-14T09:26:58.934515923Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.199658509s" May 14 09:26:58.934604 containerd[1530]: time="2025-05-14T09:26:58.934548354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 14 09:26:58.935391 containerd[1530]: time="2025-05-14T09:26:58.935338666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 09:26:59.496126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977788268.mount: Deactivated successfully. May 14 09:27:00.452395 update_engine[1508]: I20250514 09:27:00.452345 1508 update_attempter.cc:509] Updating boot flags... 
May 14 09:27:01.006376 containerd[1530]: time="2025-05-14T09:27:01.006223580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:01.009159 containerd[1530]: time="2025-05-14T09:27:01.009002913Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 14 09:27:01.010705 containerd[1530]: time="2025-05-14T09:27:01.010598195Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:01.021288 containerd[1530]: time="2025-05-14T09:27:01.019172186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:01.022464 containerd[1530]: time="2025-05-14T09:27:01.022390562Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.086890182s" May 14 09:27:01.022670 containerd[1530]: time="2025-05-14T09:27:01.022630612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 14 09:27:01.024189 containerd[1530]: time="2025-05-14T09:27:01.024100388Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 09:27:01.657156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295104236.mount: Deactivated successfully. 
May 14 09:27:01.674651 containerd[1530]: time="2025-05-14T09:27:01.674518788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 09:27:01.676413 containerd[1530]: time="2025-05-14T09:27:01.676324505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 14 09:27:01.678052 containerd[1530]: time="2025-05-14T09:27:01.677947819Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 09:27:01.683021 containerd[1530]: time="2025-05-14T09:27:01.682862917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 09:27:01.686029 containerd[1530]: time="2025-05-14T09:27:01.684877596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 660.704391ms" May 14 09:27:01.686029 containerd[1530]: time="2025-05-14T09:27:01.684957285Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 09:27:01.686446 containerd[1530]: time="2025-05-14T09:27:01.686217739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 09:27:03.042614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795650110.mount: Deactivated successfully. May 14 09:27:04.156645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 09:27:04.160312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:27:04.758930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:27:04.766487 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 09:27:04.816578 kubelet[2261]: E0514 09:27:04.816455 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 09:27:04.821506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 09:27:04.822014 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 09:27:04.823158 systemd[1]: kubelet.service: Consumed 271ms CPU time, 103.7M memory peak. 
May 14 09:27:06.808572 containerd[1530]: time="2025-05-14T09:27:06.808505819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:06.810359 containerd[1530]: time="2025-05-14T09:27:06.810332275Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 14 09:27:06.810730 containerd[1530]: time="2025-05-14T09:27:06.810677292Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:06.813909 containerd[1530]: time="2025-05-14T09:27:06.813864650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:06.815903 containerd[1530]: time="2025-05-14T09:27:06.815267270Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.12896858s" May 14 09:27:06.815903 containerd[1530]: time="2025-05-14T09:27:06.815322534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 09:27:11.589592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:27:11.590664 systemd[1]: kubelet.service: Consumed 271ms CPU time, 103.7M memory peak. May 14 09:27:11.602608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:27:11.650751 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-11.scope)... May 14 09:27:11.650795 systemd[1]: Reloading... May 14 09:27:11.808287 zram_generator::config[2349]: No configuration found. May 14 09:27:12.210724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 09:27:12.388701 systemd[1]: Reloading finished in 737 ms. May 14 09:27:12.605702 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 09:27:12.605951 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 09:27:12.609683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:27:12.622894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:27:13.355468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:27:13.372079 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 09:27:13.475335 kubelet[2411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 09:27:13.475335 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 14 09:27:13.475335 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 09:27:13.475335 kubelet[2411]: I0514 09:27:13.474355 2411 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 09:27:13.870324 kubelet[2411]: I0514 09:27:13.870114 2411 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 09:27:13.871353 kubelet[2411]: I0514 09:27:13.870817 2411 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 09:27:13.872684 kubelet[2411]: I0514 09:27:13.872614 2411 server.go:954] "Client rotation is on, will bootstrap in background" May 14 09:27:13.974903 kubelet[2411]: E0514 09:27:13.974820 2411 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.30:6443: connect: connection refused" logger="UnhandledError" May 14 09:27:13.976647 kubelet[2411]: I0514 09:27:13.976506 2411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 09:27:14.020507 kubelet[2411]: I0514 09:27:14.020398 2411 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 09:27:14.029924 kubelet[2411]: I0514 09:27:14.029856 2411 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 09:27:14.031404 kubelet[2411]: I0514 09:27:14.031203 2411 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 09:27:14.032175 kubelet[2411]: I0514 09:27:14.031378 2411 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334-0-0-n-74e04034e7.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 09:27:14.033026 kubelet[2411]: I0514 09:27:14.032285 2411 topology_manager.go:138] "Creating topology manager with none policy" May 14 09:27:14.033026 kubelet[2411]: I0514 09:27:14.032329 2411 container_manager_linux.go:304] "Creating device plugin manager" May 14 09:27:14.035208 kubelet[2411]: I0514 09:27:14.035109 2411 state_mem.go:36] "Initialized new in-memory state store" May 14 09:27:14.049457 kubelet[2411]: I0514 09:27:14.049213 2411 kubelet.go:446] "Attempting to sync node with API server" May 14 09:27:14.050305 kubelet[2411]: I0514 09:27:14.049832 2411 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 09:27:14.050305 kubelet[2411]: I0514 09:27:14.050040 2411 kubelet.go:352] "Adding apiserver pod source" May 14 09:27:14.054829 kubelet[2411]: I0514 09:27:14.054782 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 09:27:14.063204 kubelet[2411]: W0514 09:27:14.062780 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334-0-0-n-74e04034e7.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.30:6443: connect: connection refused May 14 09:27:14.063204 kubelet[2411]: E0514 09:27:14.062960 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334-0-0-n-74e04034e7.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.30:6443: connect: connection refused" logger="UnhandledError" May 
14 09:27:14.065282 kubelet[2411]: I0514 09:27:14.064384 2411 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 09:27:14.065656 kubelet[2411]: I0514 09:27:14.065596 2411 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 09:27:14.065870 kubelet[2411]: W0514 09:27:14.065825 2411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 09:27:14.072100 kubelet[2411]: I0514 09:27:14.071397 2411 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 09:27:14.072100 kubelet[2411]: I0514 09:27:14.071519 2411 server.go:1287] "Started kubelet" May 14 09:27:14.072100 kubelet[2411]: W0514 09:27:14.071633 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.30:6443: connect: connection refused May 14 09:27:14.072100 kubelet[2411]: E0514 09:27:14.071771 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.30:6443: connect: connection refused" logger="UnhandledError" May 14 09:27:14.093132 kubelet[2411]: I0514 09:27:14.092477 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 09:27:14.100370 kubelet[2411]: E0514 09:27:14.096094 2411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.30:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334-0-0-n-74e04034e7.novalocal.183f5aa26b1f3b12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334-0-0-n-74e04034e7.novalocal,UID:ci-4334-0-0-n-74e04034e7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334-0-0-n-74e04034e7.novalocal,},FirstTimestamp:2025-05-14 09:27:14.071452434 +0000 UTC m=+0.672881741,LastTimestamp:2025-05-14 09:27:14.071452434 +0000 UTC m=+0.672881741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334-0-0-n-74e04034e7.novalocal,}" May 14 09:27:14.105301 kubelet[2411]: I0514 09:27:14.104958 2411 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 09:27:14.107861 kubelet[2411]: I0514 09:27:14.107811 2411 server.go:490] "Adding debug handlers to kubelet server" May 14 09:27:14.108313 kubelet[2411]: I0514 09:27:14.108157 2411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 09:27:14.109001 kubelet[2411]: I0514 09:27:14.108949 2411 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 09:27:14.109333 kubelet[2411]: I0514 09:27:14.109294 2411 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 09:27:14.109657 kubelet[2411]: E0514 09:27:14.109612 2411 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" May 14 
09:27:14.111006 kubelet[2411]: I0514 09:27:14.110944 2411 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 09:27:14.111181 kubelet[2411]: I0514 09:27:14.111037 2411 reconciler.go:26] "Reconciler: start to sync state" May 14 09:27:14.114283 kubelet[2411]: I0514 09:27:14.113632 2411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 09:27:14.117344 kubelet[2411]: W0514 09:27:14.117170 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.30:6443: connect: connection refused May 14 09:27:14.117990 kubelet[2411]: E0514 09:27:14.117961 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.30:6443: connect: connection refused" logger="UnhandledError" May 14 09:27:14.118559 kubelet[2411]: E0514 09:27:14.117831 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334-0-0-n-74e04034e7.novalocal?timeout=10s\": dial tcp 172.24.4.30:6443: connect: connection refused" interval="200ms" May 14 09:27:14.118776 kubelet[2411]: I0514 09:27:14.118407 2411 factory.go:221] Registration of the systemd container factory successfully May 14 09:27:14.119139 kubelet[2411]: I0514 09:27:14.119098 2411 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 09:27:14.119676 kubelet[2411]: E0514 09:27:14.119643 2411 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 09:27:14.123761 kubelet[2411]: I0514 09:27:14.123660 2411 factory.go:221] Registration of the containerd container factory successfully May 14 09:27:14.147282 kubelet[2411]: I0514 09:27:14.147204 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 09:27:14.150416 kubelet[2411]: I0514 09:27:14.150330 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 09:27:14.150416 kubelet[2411]: I0514 09:27:14.150417 2411 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 09:27:14.151349 kubelet[2411]: I0514 09:27:14.151304 2411 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 09:27:14.151349 kubelet[2411]: I0514 09:27:14.151321 2411 kubelet.go:2388] "Starting kubelet main sync loop" May 14 09:27:14.151433 kubelet[2411]: E0514 09:27:14.151398 2411 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 09:27:14.151571 kubelet[2411]: I0514 09:27:14.151551 2411 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 09:27:14.151645 kubelet[2411]: I0514 09:27:14.151634 2411 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 09:27:14.151721 kubelet[2411]: I0514 09:27:14.151708 2411 state_mem.go:36] "Initialized new in-memory state store" May 14 09:27:14.152937 kubelet[2411]: W0514 09:27:14.152819 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.30:6443: connect: connection refused May 14 09:27:14.152937 kubelet[2411]: E0514 09:27:14.152912 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.30:6443: connect: connection refused" logger="UnhandledError" May 14 09:27:14.158321 kubelet[2411]: I0514 09:27:14.157972 2411 policy_none.go:49] "None policy: Start" May 14 09:27:14.158321 kubelet[2411]: I0514 09:27:14.158069 2411 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 09:27:14.158321 kubelet[2411]: I0514 09:27:14.158133 2411 state_mem.go:35] "Initializing new in-memory state store" May 14 09:27:14.171981 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 09:27:14.189258 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 09:27:14.194472 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 09:27:14.205957 kubelet[2411]: I0514 09:27:14.205907 2411 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 09:27:14.206428 kubelet[2411]: I0514 09:27:14.206398 2411 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 09:27:14.206614 kubelet[2411]: I0514 09:27:14.206467 2411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 09:27:14.208147 kubelet[2411]: I0514 09:27:14.207859 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 09:27:14.211442 kubelet[2411]: E0514 09:27:14.211406 2411 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 09:27:14.211573 kubelet[2411]: E0514 09:27:14.211507 2411 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" May 14 09:27:14.274770 systemd[1]: Created slice kubepods-burstable-pod761ad46df4743d61afb136f544459e46.slice - libcontainer container kubepods-burstable-pod761ad46df4743d61afb136f544459e46.slice. 
May 14 09:27:14.310852 kubelet[2411]: E0514 09:27:14.310376 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.315730 kubelet[2411]: I0514 09:27:14.315510 2411 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.317570 kubelet[2411]: E0514 09:27:14.317404 2411 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.30:6443/api/v1/nodes\": dial tcp 172.24.4.30:6443: connect: connection refused" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.320990 kubelet[2411]: E0514 09:27:14.320895 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334-0-0-n-74e04034e7.novalocal?timeout=10s\": dial tcp 172.24.4.30:6443: connect: connection refused" interval="400ms" May 14 09:27:14.327598 systemd[1]: Created slice kubepods-burstable-podfad45aefc5241562da84ea2019ce24b7.slice - libcontainer container kubepods-burstable-podfad45aefc5241562da84ea2019ce24b7.slice. May 14 09:27:14.333292 kubelet[2411]: E0514 09:27:14.333030 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.337668 systemd[1]: Created slice kubepods-burstable-podb595899f18e6eebe1cc0576551399760.slice - libcontainer container kubepods-burstable-podb595899f18e6eebe1cc0576551399760.slice. May 14 09:27:14.341367 kubelet[2411]: E0514 09:27:14.341282 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.413288 kubelet[2411]: I0514 09:27:14.412963 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-ca-certs\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.413288 kubelet[2411]: I0514 09:27:14.413048 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-kubeconfig\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.413288 kubelet[2411]: I0514 09:27:14.413102 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/761ad46df4743d61afb136f544459e46-ca-certs\") pod \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"761ad46df4743d61afb136f544459e46\") " pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.413288 kubelet[2411]: I0514 09:27:14.413142 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/761ad46df4743d61afb136f544459e46-k8s-certs\") pod \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"761ad46df4743d61afb136f544459e46\") " pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.413288 kubelet[2411]: I0514 09:27:14.413193 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/761ad46df4743d61afb136f544459e46-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"761ad46df4743d61afb136f544459e46\") " pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.414157 kubelet[2411]: I0514 09:27:14.414025 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-flexvolume-dir\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.414635 kubelet[2411]: I0514 09:27:14.414460 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-k8s-certs\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.415133 kubelet[2411]: I0514 09:27:14.415013 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.415613 kubelet[2411]: I0514 09:27:14.415456 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b595899f18e6eebe1cc0576551399760-kubeconfig\") pod \"kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"b595899f18e6eebe1cc0576551399760\") " pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.522547 kubelet[2411]: I0514 09:27:14.522265 2411 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.524187 kubelet[2411]: E0514 09:27:14.524132 2411 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.30:6443/api/v1/nodes\": dial tcp 172.24.4.30:6443: connect: connection refused" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.618006 containerd[1530]: time="2025-05-14T09:27:14.617661018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal,Uid:761ad46df4743d61afb136f544459e46,Namespace:kube-system,Attempt:0,}" May 14 09:27:14.634989 containerd[1530]: time="2025-05-14T09:27:14.634892015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal,Uid:fad45aefc5241562da84ea2019ce24b7,Namespace:kube-system,Attempt:0,}" May 14 09:27:14.645350 containerd[1530]: 
time="2025-05-14T09:27:14.645158039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal,Uid:b595899f18e6eebe1cc0576551399760,Namespace:kube-system,Attempt:0,}" May 14 09:27:14.713363 containerd[1530]: time="2025-05-14T09:27:14.712995377Z" level=info msg="connecting to shim 7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee" address="unix:///run/containerd/s/262db6371d1c45553f7c71173c79f4d517a6ecd49e3e031c7ce29d286bf7eed6" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:14.724262 kubelet[2411]: E0514 09:27:14.723913 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334-0-0-n-74e04034e7.novalocal?timeout=10s\": dial tcp 172.24.4.30:6443: connect: connection refused" interval="800ms" May 14 09:27:14.778507 systemd[1]: Started cri-containerd-7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee.scope - libcontainer container 7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee. May 14 09:27:14.803138 containerd[1530]: time="2025-05-14T09:27:14.803079440Z" level=info msg="connecting to shim c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805" address="unix:///run/containerd/s/7d9ce882f291b7b791b948f6df5309fbaca9084b2e4a877d5a5aa92d5e7c35c8" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:14.853317 containerd[1530]: time="2025-05-14T09:27:14.852844655Z" level=info msg="connecting to shim d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1" address="unix:///run/containerd/s/9dbc4e60534b5661a3c7a404027db8a8a7cb7b748af6ad1be44be319402f588a" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:14.879526 systemd[1]: Started cri-containerd-c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805.scope - libcontainer container c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805. May 14 09:27:14.920613 containerd[1530]: time="2025-05-14T09:27:14.920506745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal,Uid:761ad46df4743d61afb136f544459e46,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee\"" May 14 09:27:14.934915 kubelet[2411]: I0514 09:27:14.934864 2411 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.935554 containerd[1530]: time="2025-05-14T09:27:14.935513610Z" level=info msg="CreateContainer within sandbox \"7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 09:27:14.936418 kubelet[2411]: E0514 09:27:14.936255 2411 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.30:6443/api/v1/nodes\": dial tcp 172.24.4.30:6443: connect: connection refused" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:14.938468 systemd[1]: Started cri-containerd-d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1.scope - libcontainer container d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1. 
May 14 09:27:14.958127 containerd[1530]: time="2025-05-14T09:27:14.958027335Z" level=info msg="Container 4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:14.985872 containerd[1530]: time="2025-05-14T09:27:14.984841505Z" level=info msg="CreateContainer within sandbox \"7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe\"" May 14 09:27:14.986371 containerd[1530]: time="2025-05-14T09:27:14.986334605Z" level=info msg="StartContainer for \"4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe\"" May 14 09:27:14.991483 containerd[1530]: time="2025-05-14T09:27:14.991272616Z" level=info msg="connecting to shim 4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe" address="unix:///run/containerd/s/262db6371d1c45553f7c71173c79f4d517a6ecd49e3e031c7ce29d286bf7eed6" protocol=ttrpc version=3 May 14 09:27:15.026430 containerd[1530]: time="2025-05-14T09:27:15.026278841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal,Uid:b595899f18e6eebe1cc0576551399760,Namespace:kube-system,Attempt:0,} returns sandbox id \"c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805\"" May 14 09:27:15.041248 containerd[1530]: time="2025-05-14T09:27:15.041174758Z" level=info msg="CreateContainer within sandbox \"c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 09:27:15.043616 systemd[1]: Started cri-containerd-4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe.scope - libcontainer container 4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe. 
May 14 09:27:15.062290 containerd[1530]: time="2025-05-14T09:27:15.061627818Z" level=info msg="Container 25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:15.068180 containerd[1530]: time="2025-05-14T09:27:15.068145702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal,Uid:fad45aefc5241562da84ea2019ce24b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1\"" May 14 09:27:15.072568 containerd[1530]: time="2025-05-14T09:27:15.072503956Z" level=info msg="CreateContainer within sandbox \"d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 09:27:15.085101 containerd[1530]: time="2025-05-14T09:27:15.085029798Z" level=info msg="CreateContainer within sandbox \"c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a\"" May 14 09:27:15.086845 containerd[1530]: time="2025-05-14T09:27:15.086784028Z" level=info msg="StartContainer for \"25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a\"" May 14 09:27:15.090924 containerd[1530]: time="2025-05-14T09:27:15.090853931Z" level=info msg="connecting to shim 25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a" address="unix:///run/containerd/s/7d9ce882f291b7b791b948f6df5309fbaca9084b2e4a877d5a5aa92d5e7c35c8" protocol=ttrpc version=3 May 14 09:27:15.094730 containerd[1530]: time="2025-05-14T09:27:15.094625996Z" level=info msg="Container baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:15.114294 containerd[1530]: time="2025-05-14T09:27:15.114150624Z" level=info msg="CreateContainer within sandbox \"d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3\"" May 14 09:27:15.121287 containerd[1530]: time="2025-05-14T09:27:15.120810284Z" level=info msg="StartContainer for \"baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3\"" May 14 09:27:15.127400 containerd[1530]: time="2025-05-14T09:27:15.127320735Z" level=info msg="connecting to shim baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3" address="unix:///run/containerd/s/9dbc4e60534b5661a3c7a404027db8a8a7cb7b748af6ad1be44be319402f588a" protocol=ttrpc version=3 May 14 09:27:15.128670 systemd[1]: Started cri-containerd-25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a.scope - libcontainer container 25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a. 
May 14 09:27:15.145406 containerd[1530]: time="2025-05-14T09:27:15.144859378Z" level=info msg="StartContainer for \"4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe\" returns successfully" May 14 09:27:15.171960 kubelet[2411]: W0514 09:27:15.171875 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.30:6443: connect: connection refused May 14 09:27:15.173057 kubelet[2411]: E0514 09:27:15.172188 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.30:6443: connect: connection refused" logger="UnhandledError" May 14 09:27:15.175502 systemd[1]: Started cri-containerd-baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3.scope - libcontainer container baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3. May 14 09:27:15.196617 kubelet[2411]: E0514 09:27:15.196505 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:15.238205 kubelet[2411]: E0514 09:27:15.237669 2411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.30:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334-0-0-n-74e04034e7.novalocal.183f5aa26b1f3b12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334-0-0-n-74e04034e7.novalocal,UID:ci-4334-0-0-n-74e04034e7.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334-0-0-n-74e04034e7.novalocal,},FirstTimestamp:2025-05-14 09:27:14.071452434 +0000 UTC m=+0.672881741,LastTimestamp:2025-05-14 09:27:14.071452434 +0000 UTC m=+0.672881741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334-0-0-n-74e04034e7.novalocal,}" May 14 09:27:15.271083 containerd[1530]: time="2025-05-14T09:27:15.270965260Z" level=info msg="StartContainer for \"25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a\" returns successfully" May 14 09:27:15.332001 containerd[1530]: time="2025-05-14T09:27:15.331954245Z" level=info msg="StartContainer for \"baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3\" returns successfully" May 14 09:27:15.740887 kubelet[2411]: I0514 09:27:15.740853 2411 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:16.200543 kubelet[2411]: E0514 09:27:16.200417 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:16.207165 kubelet[2411]: E0514 09:27:16.204158 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:16.208723 kubelet[2411]: E0514 09:27:16.208645 
2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.209753 kubelet[2411]: E0514 09:27:17.209666 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.210984 kubelet[2411]: E0514 09:27:17.210511 2411 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.616832 kubelet[2411]: E0514 09:27:17.616507 2411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.754657 kubelet[2411]: I0514 09:27:17.754606 2411 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.754657 kubelet[2411]: E0514 09:27:17.754648 2411 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4334-0-0-n-74e04034e7.novalocal\": node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" May 14 09:27:17.811109 kubelet[2411]: I0514 09:27:17.811048 2411 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.827255 kubelet[2411]: E0514 09:27:17.827157 2411 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.827255 kubelet[2411]: I0514 09:27:17.827220 2411 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.829017 kubelet[2411]: E0514 09:27:17.828989 2411 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.829017 kubelet[2411]: I0514 09:27:17.829012 2411 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:17.834064 kubelet[2411]: E0514 09:27:17.833992 2411 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:18.066706 kubelet[2411]: I0514 09:27:18.066581 2411 apiserver.go:52] "Watching apiserver" May 14 09:27:18.112193 kubelet[2411]: I0514 09:27:18.112105 2411 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 09:27:18.220036 kubelet[2411]: I0514 09:27:18.219830 2411 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:18.223915 kubelet[2411]: I0514 09:27:18.222390 2411 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:18.230692 kubelet[2411]: E0514 09:27:18.230449 2411 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:18.232695 kubelet[2411]: E0514 09:27:18.232545 2411 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:20.368573 kubelet[2411]: I0514 09:27:20.367840 2411 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:20.491318 kubelet[2411]: W0514 09:27:20.490699 2411 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 09:27:20.538945 systemd[1]: Reload requested from client PID 2686 ('systemctl') (unit session-11.scope)... May 14 09:27:20.539070 systemd[1]: Reloading... May 14 09:27:20.665278 zram_generator::config[2731]: No configuration found. May 14 09:27:20.827224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 09:27:20.996271 systemd[1]: Reloading finished in 455 ms. May 14 09:27:21.026734 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:27:21.050447 systemd[1]: kubelet.service: Deactivated successfully. May 14 09:27:21.050816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:27:21.050943 systemd[1]: kubelet.service: Consumed 1.622s CPU time, 124.6M memory peak. May 14 09:27:21.055021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 09:27:21.638529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 09:27:21.651644 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 09:27:21.749018 kubelet[2795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 09:27:21.750258 kubelet[2795]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 09:27:21.750258 kubelet[2795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 09:27:21.750258 kubelet[2795]: I0514 09:27:21.749584 2795 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 09:27:21.760501 kubelet[2795]: I0514 09:27:21.760475 2795 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 09:27:21.760633 kubelet[2795]: I0514 09:27:21.760621 2795 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 09:27:21.761143 kubelet[2795]: I0514 09:27:21.761126 2795 server.go:954] "Client rotation is on, will bootstrap in background" May 14 09:27:21.766673 kubelet[2795]: I0514 09:27:21.766629 2795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 09:27:21.769841 kubelet[2795]: I0514 09:27:21.769624 2795 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 09:27:21.776443 kubelet[2795]: I0514 09:27:21.776426 2795 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 09:27:21.781156 kubelet[2795]: I0514 09:27:21.781126 2795 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 09:27:21.781861 kubelet[2795]: I0514 09:27:21.781796 2795 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 09:27:21.782332 kubelet[2795]: I0514 09:27:21.781968 2795 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334-0-0-n-74e04034e7.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 09:27:21.783003 kubelet[2795]: I0514 09:27:21.782776 2795 topology_manager.go:138] "Creating topology manager with none policy" May 14 09:27:21.783003 kubelet[2795]: I0514 09:27:21.782803 2795 container_manager_linux.go:304] "Creating device plugin manager" May 14 09:27:21.783003 kubelet[2795]: I0514 09:27:21.782937 2795 state_mem.go:36] "Initialized new in-memory state store" May 14 09:27:21.783657 
kubelet[2795]: I0514 09:27:21.783474 2795 kubelet.go:446] "Attempting to sync node with API server" May 14 09:27:21.783657 kubelet[2795]: I0514 09:27:21.783511 2795 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 09:27:21.783657 kubelet[2795]: I0514 09:27:21.783558 2795 kubelet.go:352] "Adding apiserver pod source" May 14 09:27:21.783657 kubelet[2795]: I0514 09:27:21.783596 2795 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 09:27:21.786285 kubelet[2795]: I0514 09:27:21.786253 2795 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 09:27:21.786776 kubelet[2795]: I0514 09:27:21.786750 2795 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 09:27:21.788715 kubelet[2795]: I0514 09:27:21.788686 2795 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 09:27:21.788791 kubelet[2795]: I0514 09:27:21.788758 2795 server.go:1287] "Started kubelet" May 14 09:27:21.795273 kubelet[2795]: I0514 09:27:21.795137 2795 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 09:27:21.796265 kubelet[2795]: I0514 09:27:21.796221 2795 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 09:27:21.799634 kubelet[2795]: I0514 09:27:21.799605 2795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 09:27:21.813812 kubelet[2795]: I0514 09:27:21.813754 2795 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 09:27:21.816265 kubelet[2795]: I0514 09:27:21.814955 2795 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 09:27:21.818371 kubelet[2795]: I0514 09:27:21.818333 2795 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 09:27:21.819087 kubelet[2795]: E0514 09:27:21.819039 2795 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4334-0-0-n-74e04034e7.novalocal\" not found" May 14 09:27:21.824432 kubelet[2795]: I0514 09:27:21.824391 2795 server.go:490] "Adding debug handlers to kubelet server" May 14 09:27:21.826128 kubelet[2795]: I0514 09:27:21.826107 2795 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 09:27:21.826438 kubelet[2795]: I0514 09:27:21.826422 2795 reconciler.go:26] "Reconciler: start to sync state" May 14 09:27:21.830754 kubelet[2795]: I0514 09:27:21.830691 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 09:27:21.834271 kubelet[2795]: I0514 09:27:21.834223 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 09:27:21.834456 kubelet[2795]: I0514 09:27:21.834442 2795 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 09:27:21.834805 kubelet[2795]: I0514 09:27:21.834784 2795 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 09:27:21.834931 kubelet[2795]: I0514 09:27:21.834916 2795 kubelet.go:2388] "Starting kubelet main sync loop" May 14 09:27:21.835064 kubelet[2795]: E0514 09:27:21.835041 2795 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 09:27:21.840266 kubelet[2795]: I0514 09:27:21.839036 2795 factory.go:221] Registration of the systemd container factory successfully May 14 09:27:21.840266 kubelet[2795]: I0514 09:27:21.839148 2795 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 09:27:21.853463 kubelet[2795]: E0514 09:27:21.853432 2795 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 09:27:21.854219 kubelet[2795]: I0514 09:27:21.854192 2795 factory.go:221] Registration of the containerd container factory successfully May 14 09:27:21.935046 kubelet[2795]: I0514 09:27:21.934997 2795 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 09:27:21.935046 kubelet[2795]: I0514 09:27:21.935021 2795 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 09:27:21.935046 kubelet[2795]: I0514 09:27:21.935054 2795 state_mem.go:36] "Initialized new in-memory state store" May 14 09:27:21.935308 kubelet[2795]: I0514 09:27:21.935297 2795 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 09:27:21.935355 kubelet[2795]: I0514 09:27:21.935318 2795 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 09:27:21.935415 kubelet[2795]: I0514 09:27:21.935361 2795 policy_none.go:49] "None policy: Start" May 14 09:27:21.935415 kubelet[2795]: I0514 09:27:21.935401 2795 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 09:27:21.935495 kubelet[2795]: I0514 09:27:21.935431 2795 state_mem.go:35] "Initializing new in-memory state store" May 14 09:27:21.935588 kubelet[2795]: I0514 09:27:21.935563 2795 state_mem.go:75] "Updated machine memory state" May 14 09:27:21.936485 kubelet[2795]: E0514 09:27:21.936446 2795 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 09:27:21.944528 kubelet[2795]: I0514 09:27:21.944282 2795 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 09:27:21.945556 kubelet[2795]: I0514 09:27:21.945529 2795 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 09:27:21.946378 kubelet[2795]: I0514 09:27:21.946317 2795 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 09:27:21.946826 kubelet[2795]: I0514 09:27:21.946667 2795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 09:27:21.950956 kubelet[2795]: E0514 09:27:21.950095 2795 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 09:27:22.061286 kubelet[2795]: I0514 09:27:22.060007 2795 kubelet_node_status.go:76] "Attempting to register node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.079316 kubelet[2795]: I0514 09:27:22.079227 2795 kubelet_node_status.go:125] "Node was previously registered" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.079450 kubelet[2795]: I0514 09:27:22.079342 2795 kubelet_node_status.go:79] "Successfully registered node" node="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.138211 kubelet[2795]: I0514 09:27:22.138105 2795 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.140354 kubelet[2795]: I0514 09:27:22.138177 2795 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.140354 kubelet[2795]: I0514 09:27:22.138618 2795 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.148541 kubelet[2795]: W0514 09:27:22.148432 2795 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 09:27:22.152395 kubelet[2795]: W0514 09:27:22.152267 2795 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 09:27:22.152880 kubelet[2795]: W0514 09:27:22.152831 2795 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 09:27:22.152965 kubelet[2795]: E0514 09:27:22.152902 2795 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.244187 kubelet[2795]: I0514 09:27:22.243848 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-k8s-certs\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.247277 kubelet[2795]: I0514 09:27:22.244778 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-kubeconfig\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.247277 kubelet[2795]: I0514 09:27:22.247018 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.248261 kubelet[2795]: I0514 09:27:22.247749 2795 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/761ad46df4743d61afb136f544459e46-ca-certs\") pod \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"761ad46df4743d61afb136f544459e46\") " pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.248655 kubelet[2795]: I0514 09:27:22.248189 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/761ad46df4743d61afb136f544459e46-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"761ad46df4743d61afb136f544459e46\") " pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.249278 kubelet[2795]: I0514 09:27:22.248964 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-ca-certs\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.251807 kubelet[2795]: I0514 09:27:22.251653 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fad45aefc5241562da84ea2019ce24b7-flexvolume-dir\") pod \"kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"fad45aefc5241562da84ea2019ce24b7\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.252192 kubelet[2795]: I0514 09:27:22.252098 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/761ad46df4743d61afb136f544459e46-k8s-certs\") pod \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"761ad46df4743d61afb136f544459e46\") " pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.254066 kubelet[2795]: I0514 09:27:22.252452 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b595899f18e6eebe1cc0576551399760-kubeconfig\") pod \"kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal\" (UID: \"b595899f18e6eebe1cc0576551399760\") " pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.792480 kubelet[2795]: I0514 09:27:22.792375 2795 apiserver.go:52] "Watching apiserver" May 14 09:27:22.826978 kubelet[2795]: I0514 09:27:22.826854 2795 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 09:27:22.903597 kubelet[2795]: I0514 09:27:22.901682 2795 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.905744 kubelet[2795]: I0514 09:27:22.905668 2795 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.926849 kubelet[2795]: I0514 09:27:22.926769 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" podStartSLOduration=0.926730304 podStartE2EDuration="926.730304ms" podCreationTimestamp="2025-05-14 09:27:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 09:27:22.926471864 +0000 UTC m=+1.254371222" watchObservedRunningTime="2025-05-14 09:27:22.926730304 +0000 UTC m=+1.254629652" May 14 09:27:22.927908 kubelet[2795]: W0514 09:27:22.927882 2795 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 09:27:22.928191 kubelet[2795]: E0514 09:27:22.927949 2795 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.937317 kubelet[2795]: W0514 09:27:22.936104 2795 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 09:27:22.937317 kubelet[2795]: E0514 09:27:22.936173 2795 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:27:22.967926 kubelet[2795]: I0514 09:27:22.967793 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334-0-0-n-74e04034e7.novalocal" podStartSLOduration=2.967768768 podStartE2EDuration="2.967768768s" podCreationTimestamp="2025-05-14 09:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 09:27:22.94915541 +0000 UTC m=+1.277054768" watchObservedRunningTime="2025-05-14 09:27:22.967768768 +0000 UTC m=+1.295668126" May 14 09:27:22.984926 kubelet[2795]: I0514 09:27:22.984489 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334-0-0-n-74e04034e7.novalocal" podStartSLOduration=0.984467023 podStartE2EDuration="984.467023ms" podCreationTimestamp="2025-05-14 09:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 09:27:22.969116293 +0000 UTC m=+1.297015641" watchObservedRunningTime="2025-05-14 09:27:22.984467023 +0000 UTC m=+1.312366382" May 14 09:27:24.848422 kubelet[2795]: I0514 09:27:24.848345 2795 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 09:27:24.849310 containerd[1530]: time="2025-05-14T09:27:24.849139574Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 09:27:24.850527 kubelet[2795]: I0514 09:27:24.850503 2795 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 09:27:25.409386 systemd[1]: Created slice kubepods-besteffort-podb4ae888c_2e92_4a09_b182_f18bd074121e.slice - libcontainer container kubepods-besteffort-podb4ae888c_2e92_4a09_b182_f18bd074121e.slice. 
May 14 09:27:25.482066 kubelet[2795]: I0514 09:27:25.482025 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4ae888c-2e92-4a09-b182-f18bd074121e-kube-proxy\") pod \"kube-proxy-zgv5s\" (UID: \"b4ae888c-2e92-4a09-b182-f18bd074121e\") " pod="kube-system/kube-proxy-zgv5s" May 14 09:27:25.482384 kubelet[2795]: I0514 09:27:25.482366 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4ae888c-2e92-4a09-b182-f18bd074121e-lib-modules\") pod \"kube-proxy-zgv5s\" (UID: \"b4ae888c-2e92-4a09-b182-f18bd074121e\") " pod="kube-system/kube-proxy-zgv5s" May 14 09:27:25.482621 kubelet[2795]: I0514 09:27:25.482546 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dv88\" (UniqueName: \"kubernetes.io/projected/b4ae888c-2e92-4a09-b182-f18bd074121e-kube-api-access-6dv88\") pod \"kube-proxy-zgv5s\" (UID: \"b4ae888c-2e92-4a09-b182-f18bd074121e\") " pod="kube-system/kube-proxy-zgv5s" May 14 09:27:25.482621 kubelet[2795]: I0514 09:27:25.482577 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4ae888c-2e92-4a09-b182-f18bd074121e-xtables-lock\") pod \"kube-proxy-zgv5s\" (UID: \"b4ae888c-2e92-4a09-b182-f18bd074121e\") " pod="kube-system/kube-proxy-zgv5s" May 14 09:27:25.721014 containerd[1530]: time="2025-05-14T09:27:25.720644847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zgv5s,Uid:b4ae888c-2e92-4a09-b182-f18bd074121e,Namespace:kube-system,Attempt:0,}" May 14 09:27:26.092570 containerd[1530]: time="2025-05-14T09:27:26.092367024Z" level=info msg="connecting to shim 411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab" address="unix:///run/containerd/s/ce8429e08591fd9f708cc5a39c4bf2ecf4839978ed24cf04f8c9d30ca78065d1" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:26.160511 systemd[1]: Started cri-containerd-411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab.scope - libcontainer container 411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab. May 14 09:27:26.174867 systemd[1]: Created slice kubepods-besteffort-poda37c9896_874d_4f95_aca0_71ce3a688347.slice - libcontainer container kubepods-besteffort-poda37c9896_874d_4f95_aca0_71ce3a688347.slice. May 14 09:27:26.240763 containerd[1530]: time="2025-05-14T09:27:26.240680907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zgv5s,Uid:b4ae888c-2e92-4a09-b182-f18bd074121e,Namespace:kube-system,Attempt:0,} returns sandbox id \"411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab\"" May 14 09:27:26.257275 containerd[1530]: time="2025-05-14T09:27:26.254639592Z" level=info msg="CreateContainer within sandbox \"411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 09:27:26.284517 containerd[1530]: time="2025-05-14T09:27:26.284449210Z" level=info msg="Container e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:26.291932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717464128.mount: Deactivated successfully. 
May 14 09:27:26.293283 kubelet[2795]: I0514 09:27:26.292357 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46b4c\" (UniqueName: \"kubernetes.io/projected/a37c9896-874d-4f95-aca0-71ce3a688347-kube-api-access-46b4c\") pod \"tigera-operator-789496d6f5-799xj\" (UID: \"a37c9896-874d-4f95-aca0-71ce3a688347\") " pod="tigera-operator/tigera-operator-789496d6f5-799xj" May 14 09:27:26.293283 kubelet[2795]: I0514 09:27:26.292409 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a37c9896-874d-4f95-aca0-71ce3a688347-var-lib-calico\") pod \"tigera-operator-789496d6f5-799xj\" (UID: \"a37c9896-874d-4f95-aca0-71ce3a688347\") " pod="tigera-operator/tigera-operator-789496d6f5-799xj" May 14 09:27:26.327253 containerd[1530]: time="2025-05-14T09:27:26.326277694Z" level=info msg="CreateContainer within sandbox \"411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2\"" May 14 09:27:26.328554 containerd[1530]: time="2025-05-14T09:27:26.328261767Z" level=info msg="StartContainer for \"e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2\"" May 14 09:27:26.332700 containerd[1530]: time="2025-05-14T09:27:26.331944453Z" level=info msg="connecting to shim e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2" address="unix:///run/containerd/s/ce8429e08591fd9f708cc5a39c4bf2ecf4839978ed24cf04f8c9d30ca78065d1" protocol=ttrpc version=3 May 14 09:27:26.362435 systemd[1]: Started cri-containerd-e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2.scope - libcontainer container e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2. May 14 09:27:26.429504 containerd[1530]: time="2025-05-14T09:27:26.429444856Z" level=info msg="StartContainer for \"e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2\" returns successfully" May 14 09:27:26.479986 containerd[1530]: time="2025-05-14T09:27:26.479923302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-799xj,Uid:a37c9896-874d-4f95-aca0-71ce3a688347,Namespace:tigera-operator,Attempt:0,}" May 14 09:27:26.508700 containerd[1530]: time="2025-05-14T09:27:26.508612491Z" level=info msg="connecting to shim 58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1" address="unix:///run/containerd/s/8ac34dc2669c9b051d1be24e8a43d2bf7940af09ef859a21e3d2e1422022b7c3" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:26.544449 systemd[1]: Started cri-containerd-58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1.scope - libcontainer container 58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1. 
May 14 09:27:26.621282 containerd[1530]: time="2025-05-14T09:27:26.620621289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-799xj,Uid:a37c9896-874d-4f95-aca0-71ce3a688347,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1\"" May 14 09:27:26.625397 containerd[1530]: time="2025-05-14T09:27:26.623780725Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 09:27:27.413762 kubelet[2795]: I0514 09:27:27.413312 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zgv5s" podStartSLOduration=2.413139183 podStartE2EDuration="2.413139183s" podCreationTimestamp="2025-05-14 09:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 09:27:26.945528709 +0000 UTC m=+5.273428057" watchObservedRunningTime="2025-05-14 09:27:27.413139183 +0000 UTC m=+5.741038581" May 14 09:27:28.634561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964283319.mount: Deactivated successfully. May 14 09:27:28.716996 sudo[1829]: pam_unix(sudo:session): session closed for user root May 14 09:27:28.983152 sshd[1828]: Connection closed by 172.24.4.1 port 34978 May 14 09:27:28.984418 sshd-session[1826]: pam_unix(sshd:session): session closed for user core May 14 09:27:28.992133 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. May 14 09:27:28.992553 systemd[1]: sshd@8-172.24.4.30:22-172.24.4.1:34978.service: Deactivated successfully. May 14 09:27:28.995716 systemd[1]: session-11.scope: Deactivated successfully. May 14 09:27:28.996645 systemd[1]: session-11.scope: Consumed 8.113s CPU time, 231.3M memory peak. May 14 09:27:29.002720 systemd-logind[1506]: Removed session 11. 
May 14 09:27:29.386071 containerd[1530]: time="2025-05-14T09:27:29.384361852Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:29.387566 containerd[1530]: time="2025-05-14T09:27:29.387202828Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 09:27:29.388539 containerd[1530]: time="2025-05-14T09:27:29.388509666Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:29.392780 containerd[1530]: time="2025-05-14T09:27:29.392738663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:29.393516 containerd[1530]: time="2025-05-14T09:27:29.393460807Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.769617943s" May 14 09:27:29.393771 containerd[1530]: time="2025-05-14T09:27:29.393621230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 09:27:29.397695 containerd[1530]: time="2025-05-14T09:27:29.397654218Z" level=info msg="CreateContainer within sandbox \"58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 09:27:29.410807 containerd[1530]: time="2025-05-14T09:27:29.410759040Z" level=info msg="Container f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:29.425719 containerd[1530]: time="2025-05-14T09:27:29.425643994Z" level=info msg="CreateContainer within sandbox \"58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d\"" May 14 09:27:29.427610 containerd[1530]: time="2025-05-14T09:27:29.427564773Z" level=info msg="StartContainer for \"f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d\"" May 14 09:27:29.428955 containerd[1530]: time="2025-05-14T09:27:29.428925643Z" level=info msg="connecting to shim f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d" address="unix:///run/containerd/s/8ac34dc2669c9b051d1be24e8a43d2bf7940af09ef859a21e3d2e1422022b7c3" protocol=ttrpc version=3 May 14 09:27:29.467541 systemd[1]: Started cri-containerd-f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d.scope - libcontainer container f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d. 
May 14 09:27:29.512260 containerd[1530]: time="2025-05-14T09:27:29.511556925Z" level=info msg="StartContainer for \"f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d\" returns successfully" May 14 09:27:29.975497 kubelet[2795]: I0514 09:27:29.973534 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-799xj" podStartSLOduration=1.200462778 podStartE2EDuration="3.972629799s" podCreationTimestamp="2025-05-14 09:27:26 +0000 UTC" firstStartedPulling="2025-05-14 09:27:26.623050053 +0000 UTC m=+4.950949411" lastFinishedPulling="2025-05-14 09:27:29.395217084 +0000 UTC m=+7.723116432" observedRunningTime="2025-05-14 09:27:29.967621861 +0000 UTC m=+8.295521299" watchObservedRunningTime="2025-05-14 09:27:29.972629799 +0000 UTC m=+8.300529197" May 14 09:27:33.106699 systemd[1]: Created slice kubepods-besteffort-pod53f26251_a1e3_4443_bba7_96dd288e7cd8.slice - libcontainer container kubepods-besteffort-pod53f26251_a1e3_4443_bba7_96dd288e7cd8.slice. May 14 09:27:33.143081 kubelet[2795]: I0514 09:27:33.142340 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53f26251-a1e3-4443-bba7-96dd288e7cd8-tigera-ca-bundle\") pod \"calico-typha-7f6b49f6f6-gdmwd\" (UID: \"53f26251-a1e3-4443-bba7-96dd288e7cd8\") " pod="calico-system/calico-typha-7f6b49f6f6-gdmwd" May 14 09:27:33.143081 kubelet[2795]: I0514 09:27:33.142421 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/53f26251-a1e3-4443-bba7-96dd288e7cd8-typha-certs\") pod \"calico-typha-7f6b49f6f6-gdmwd\" (UID: \"53f26251-a1e3-4443-bba7-96dd288e7cd8\") " pod="calico-system/calico-typha-7f6b49f6f6-gdmwd" May 14 09:27:33.143081 kubelet[2795]: I0514 09:27:33.142453 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqhn8\" (UniqueName: \"kubernetes.io/projected/53f26251-a1e3-4443-bba7-96dd288e7cd8-kube-api-access-vqhn8\") pod \"calico-typha-7f6b49f6f6-gdmwd\" (UID: \"53f26251-a1e3-4443-bba7-96dd288e7cd8\") " pod="calico-system/calico-typha-7f6b49f6f6-gdmwd" May 14 09:27:33.179604 systemd[1]: Created slice kubepods-besteffort-pod0e2e590c_07df_45f8_bed9_40a84022361e.slice - libcontainer container kubepods-besteffort-pod0e2e590c_07df_45f8_bed9_40a84022361e.slice. 
May 14 09:27:33.243257 kubelet[2795]: I0514 09:27:33.242726 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnfw\" (UniqueName: \"kubernetes.io/projected/0e2e590c-07df-45f8-bed9-40a84022361e-kube-api-access-nbnfw\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243257 kubelet[2795]: I0514 09:27:33.242834 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-var-run-calico\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243257 kubelet[2795]: I0514 09:27:33.242863 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-lib-modules\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243257 kubelet[2795]: I0514 09:27:33.242883 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-cni-net-dir\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243257 kubelet[2795]: I0514 09:27:33.242903 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-flexvol-driver-host\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243603 kubelet[2795]: I0514 09:27:33.242926 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2e590c-07df-45f8-bed9-40a84022361e-tigera-ca-bundle\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243603 kubelet[2795]: I0514 09:27:33.242945 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-cni-log-dir\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243603 kubelet[2795]: I0514 09:27:33.242989 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-policysync\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243603 kubelet[2795]: I0514 09:27:33.243009 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-cni-bin-dir\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243603 kubelet[2795]: I0514 09:27:33.243027 2795 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-var-lib-calico\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243760 kubelet[2795]: I0514 09:27:33.243046 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e2e590c-07df-45f8-bed9-40a84022361e-xtables-lock\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.243760 kubelet[2795]: I0514 09:27:33.243083 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e2e590c-07df-45f8-bed9-40a84022361e-node-certs\") pod \"calico-node-jzjd9\" (UID: \"0e2e590c-07df-45f8-bed9-40a84022361e\") " pod="calico-system/calico-node-jzjd9" May 14 09:27:33.348357 kubelet[2795]: E0514 09:27:33.348285 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.348357 kubelet[2795]: W0514 09:27:33.348339 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.348584 kubelet[2795]: E0514 09:27:33.348411 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.351910 kubelet[2795]: E0514 09:27:33.351799 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.351910 kubelet[2795]: W0514 09:27:33.351845 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.351910 kubelet[2795]: E0514 09:27:33.351868 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.364530 kubelet[2795]: E0514 09:27:33.364376 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:33.381439 kubelet[2795]: E0514 09:27:33.379118 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.381439 kubelet[2795]: W0514 09:27:33.381354 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.381439 kubelet[2795]: E0514 09:27:33.381388 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.403888 kubelet[2795]: E0514 09:27:33.403831 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.403888 kubelet[2795]: W0514 09:27:33.403856 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.404346 kubelet[2795]: E0514 09:27:33.404148 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.404867 kubelet[2795]: E0514 09:27:33.404826 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.405133 kubelet[2795]: W0514 09:27:33.404967 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.405133 kubelet[2795]: E0514 09:27:33.405068 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.405848 kubelet[2795]: E0514 09:27:33.405832 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.406188 kubelet[2795]: W0514 09:27:33.406086 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.406188 kubelet[2795]: E0514 09:27:33.406110 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.407249 kubelet[2795]: E0514 09:27:33.407189 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.407249 kubelet[2795]: W0514 09:27:33.407204 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.407503 kubelet[2795]: E0514 09:27:33.407391 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.407936 kubelet[2795]: E0514 09:27:33.407910 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.408217 kubelet[2795]: W0514 09:27:33.408135 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.408217 kubelet[2795]: E0514 09:27:33.408161 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.408689 kubelet[2795]: E0514 09:27:33.408634 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.408689 kubelet[2795]: W0514 09:27:33.408649 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.408689 kubelet[2795]: E0514 09:27:33.408661 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.409291 kubelet[2795]: E0514 09:27:33.409270 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.409291 kubelet[2795]: W0514 09:27:33.409325 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.409291 kubelet[2795]: E0514 09:27:33.409340 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.409877 kubelet[2795]: E0514 09:27:33.409861 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.410161 kubelet[2795]: W0514 09:27:33.409946 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.410161 kubelet[2795]: E0514 09:27:33.409962 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.410595 kubelet[2795]: E0514 09:27:33.410549 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.410873 kubelet[2795]: W0514 09:27:33.410693 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.410873 kubelet[2795]: E0514 09:27:33.410796 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.411384 kubelet[2795]: E0514 09:27:33.411277 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.411558 kubelet[2795]: W0514 09:27:33.411483 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.411558 kubelet[2795]: E0514 09:27:33.411503 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.412142 kubelet[2795]: E0514 09:27:33.412052 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.412142 kubelet[2795]: W0514 09:27:33.412109 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.412449 kubelet[2795]: E0514 09:27:33.412268 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.413282 kubelet[2795]: E0514 09:27:33.413266 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.413386 kubelet[2795]: W0514 09:27:33.413371 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.413502 kubelet[2795]: E0514 09:27:33.413462 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.414726 containerd[1530]: time="2025-05-14T09:27:33.414601161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f6b49f6f6-gdmwd,Uid:53f26251-a1e3-4443-bba7-96dd288e7cd8,Namespace:calico-system,Attempt:0,}" May 14 09:27:33.415660 kubelet[2795]: E0514 09:27:33.415117 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.415660 kubelet[2795]: W0514 09:27:33.415130 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.415660 kubelet[2795]: E0514 09:27:33.415378 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.417164 kubelet[2795]: E0514 09:27:33.416090 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.417164 kubelet[2795]: W0514 09:27:33.416100 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.417164 kubelet[2795]: E0514 09:27:33.416112 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.417629 kubelet[2795]: E0514 09:27:33.417524 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.417629 kubelet[2795]: W0514 09:27:33.417539 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.417629 kubelet[2795]: E0514 09:27:33.417552 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.418097 kubelet[2795]: E0514 09:27:33.418018 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.418097 kubelet[2795]: W0514 09:27:33.418033 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.418097 kubelet[2795]: E0514 09:27:33.418044 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.418801 kubelet[2795]: E0514 09:27:33.418661 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.418801 kubelet[2795]: W0514 09:27:33.418675 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.418801 kubelet[2795]: E0514 09:27:33.418686 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.419263 kubelet[2795]: E0514 09:27:33.419179 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.419263 kubelet[2795]: W0514 09:27:33.419194 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.419263 kubelet[2795]: E0514 09:27:33.419206 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.419899 kubelet[2795]: E0514 09:27:33.419801 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.419899 kubelet[2795]: W0514 09:27:33.419826 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.419899 kubelet[2795]: E0514 09:27:33.419838 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.420876 kubelet[2795]: E0514 09:27:33.420664 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.420876 kubelet[2795]: W0514 09:27:33.420678 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.420876 kubelet[2795]: E0514 09:27:33.420690 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.445971 kubelet[2795]: E0514 09:27:33.445282 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.445971 kubelet[2795]: W0514 09:27:33.445309 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.445971 kubelet[2795]: E0514 09:27:33.445333 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.446485 kubelet[2795]: I0514 09:27:33.446276 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25m4d\" (UniqueName: \"kubernetes.io/projected/77407355-a1f4-47d6-8622-4499f5d39dce-kube-api-access-25m4d\") pod \"csi-node-driver-7kmcj\" (UID: \"77407355-a1f4-47d6-8622-4499f5d39dce\") " pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:33.446726 kubelet[2795]: E0514 09:27:33.446606 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.446726 kubelet[2795]: W0514 09:27:33.446665 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.446726 kubelet[2795]: E0514 09:27:33.446681 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.447187 kubelet[2795]: E0514 09:27:33.447009 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.447187 kubelet[2795]: W0514 09:27:33.447043 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.447187 kubelet[2795]: E0514 09:27:33.447070 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.447863 kubelet[2795]: E0514 09:27:33.447745 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.447863 kubelet[2795]: W0514 09:27:33.447760 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.447863 kubelet[2795]: E0514 09:27:33.447773 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.447863 kubelet[2795]: I0514 09:27:33.447802 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77407355-a1f4-47d6-8622-4499f5d39dce-varrun\") pod \"csi-node-driver-7kmcj\" (UID: \"77407355-a1f4-47d6-8622-4499f5d39dce\") " pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:33.448771 kubelet[2795]: E0514 09:27:33.448479 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.448771 kubelet[2795]: W0514 09:27:33.448659 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.448771 kubelet[2795]: E0514 09:27:33.448685 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.448771 kubelet[2795]: I0514 09:27:33.448723 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77407355-a1f4-47d6-8622-4499f5d39dce-socket-dir\") pod \"csi-node-driver-7kmcj\" (UID: \"77407355-a1f4-47d6-8622-4499f5d39dce\") " pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:33.449465 kubelet[2795]: E0514 09:27:33.449402 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.449465 kubelet[2795]: W0514 09:27:33.449430 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.449465 kubelet[2795]: E0514 09:27:33.449461 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.450745 kubelet[2795]: E0514 09:27:33.450713 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.450745 kubelet[2795]: W0514 09:27:33.450734 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.450992 kubelet[2795]: E0514 09:27:33.450806 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.450992 kubelet[2795]: E0514 09:27:33.450988 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.451225 kubelet[2795]: W0514 09:27:33.451000 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.451225 kubelet[2795]: E0514 09:27:33.451035 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.451225 kubelet[2795]: I0514 09:27:33.451090 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77407355-a1f4-47d6-8622-4499f5d39dce-registration-dir\") pod \"csi-node-driver-7kmcj\" (UID: \"77407355-a1f4-47d6-8622-4499f5d39dce\") " pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:33.452792 kubelet[2795]: E0514 09:27:33.452767 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.452792 kubelet[2795]: W0514 09:27:33.452784 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.453097 kubelet[2795]: E0514 09:27:33.452810 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.453502 kubelet[2795]: E0514 09:27:33.453312 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.453502 kubelet[2795]: W0514 09:27:33.453329 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.453502 kubelet[2795]: E0514 09:27:33.453341 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.454039 kubelet[2795]: E0514 09:27:33.453990 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.454214 kubelet[2795]: W0514 09:27:33.454154 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.454449 kubelet[2795]: E0514 09:27:33.454361 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.455327 kubelet[2795]: I0514 09:27:33.455223 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77407355-a1f4-47d6-8622-4499f5d39dce-kubelet-dir\") pod \"csi-node-driver-7kmcj\" (UID: \"77407355-a1f4-47d6-8622-4499f5d39dce\") " pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:33.455468 kubelet[2795]: E0514 09:27:33.455454 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.455610 kubelet[2795]: W0514 09:27:33.455534 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.455610 kubelet[2795]: E0514 09:27:33.455563 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.456131 kubelet[2795]: E0514 09:27:33.455975 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.456131 kubelet[2795]: W0514 09:27:33.455990 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.456131 kubelet[2795]: E0514 09:27:33.456009 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.459692 kubelet[2795]: E0514 09:27:33.456617 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.459692 kubelet[2795]: W0514 09:27:33.457836 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.459692 kubelet[2795]: E0514 09:27:33.457859 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.459692 kubelet[2795]: E0514 09:27:33.458046 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.459692 kubelet[2795]: W0514 09:27:33.458056 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.459692 kubelet[2795]: E0514 09:27:33.458066 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.489961 containerd[1530]: time="2025-05-14T09:27:33.488412943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jzjd9,Uid:0e2e590c-07df-45f8-bed9-40a84022361e,Namespace:calico-system,Attempt:0,}" May 14 09:27:33.503861 containerd[1530]: time="2025-05-14T09:27:33.503800490Z" level=info msg="connecting to shim 58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab" address="unix:///run/containerd/s/b64e86c3e2d5e08f7f8f7d2a56432d21349e1bd029563d8a854d98c7a333805d" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:33.536420 containerd[1530]: time="2025-05-14T09:27:33.536371350Z" level=info msg="connecting to shim 4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19" address="unix:///run/containerd/s/73f41b3743ddd2662a5f845a1c17060268b4e176ae4f8ed188672468d655aafc" namespace=k8s.io protocol=ttrpc version=3 May 14 09:27:33.555569 systemd[1]: Started cri-containerd-58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab.scope - libcontainer container 58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab. May 14 09:27:33.557506 kubelet[2795]: E0514 09:27:33.557419 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.557506 kubelet[2795]: W0514 09:27:33.557441 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.557506 kubelet[2795]: E0514 09:27:33.557463 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.557788 kubelet[2795]: E0514 09:27:33.557731 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.557788 kubelet[2795]: W0514 09:27:33.557749 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.557788 kubelet[2795]: E0514 09:27:33.557760 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.559100 kubelet[2795]: E0514 09:27:33.558417 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.559100 kubelet[2795]: W0514 09:27:33.558462 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.559100 kubelet[2795]: E0514 09:27:33.558494 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.559283 kubelet[2795]: E0514 09:27:33.559104 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.559283 kubelet[2795]: W0514 09:27:33.559116 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.559283 kubelet[2795]: E0514 09:27:33.559128 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.559403 kubelet[2795]: E0514 09:27:33.559378 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.559455 kubelet[2795]: W0514 09:27:33.559402 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.559455 kubelet[2795]: E0514 09:27:33.559414 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.560757 kubelet[2795]: E0514 09:27:33.560050 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.560757 kubelet[2795]: W0514 09:27:33.560068 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.560757 kubelet[2795]: E0514 09:27:33.560152 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.560757 kubelet[2795]: E0514 09:27:33.560761 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.562351 kubelet[2795]: W0514 09:27:33.560772 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.562351 kubelet[2795]: E0514 09:27:33.560785 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.562351 kubelet[2795]: E0514 09:27:33.560916 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.562351 kubelet[2795]: W0514 09:27:33.560925 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.562351 kubelet[2795]: E0514 09:27:33.560934 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.562351 kubelet[2795]: E0514 09:27:33.561313 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.562351 kubelet[2795]: W0514 09:27:33.561324 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.562351 kubelet[2795]: E0514 09:27:33.561335 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.563348 kubelet[2795]: E0514 09:27:33.562822 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563348 kubelet[2795]: W0514 09:27:33.562842 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563348 kubelet[2795]: E0514 09:27:33.562992 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563348 kubelet[2795]: W0514 09:27:33.563001 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563348 kubelet[2795]: E0514 09:27:33.563114 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563348 kubelet[2795]: W0514 09:27:33.563123 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563348 kubelet[2795]: E0514 09:27:33.563201 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.563348 kubelet[2795]: E0514 09:27:33.563264 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563348 kubelet[2795]: W0514 09:27:33.563274 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563734 kubelet[2795]: E0514 09:27:33.563406 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563734 kubelet[2795]: W0514 09:27:33.563417 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563734 kubelet[2795]: E0514 09:27:33.563560 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563734 kubelet[2795]: W0514 09:27:33.563568 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563734 kubelet[2795]: E0514 09:27:33.563580 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.563899 kubelet[2795]: E0514 09:27:33.563762 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.563899 kubelet[2795]: W0514 09:27:33.563774 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.563899 kubelet[2795]: E0514 09:27:33.563783 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.563899 kubelet[2795]: E0514 09:27:33.563810 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.564064 kubelet[2795]: E0514 09:27:33.563941 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.564064 kubelet[2795]: W0514 09:27:33.563951 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.564064 kubelet[2795]: E0514 09:27:33.563962 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.564164 kubelet[2795]: E0514 09:27:33.564078 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.564164 kubelet[2795]: W0514 09:27:33.564087 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.564164 kubelet[2795]: E0514 09:27:33.564096 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.565526 kubelet[2795]: E0514 09:27:33.564299 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.565526 kubelet[2795]: W0514 09:27:33.564377 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.565526 kubelet[2795]: E0514 09:27:33.564401 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.565526 kubelet[2795]: E0514 09:27:33.564350 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.567157 kubelet[2795]: E0514 09:27:33.565751 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.567157 kubelet[2795]: E0514 09:27:33.564363 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.567157 kubelet[2795]: E0514 09:27:33.566180 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.567157 kubelet[2795]: W0514 09:27:33.567054 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.568588 kubelet[2795]: E0514 09:27:33.567104 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.568588 kubelet[2795]: E0514 09:27:33.568367 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.568588 kubelet[2795]: W0514 09:27:33.568380 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.568588 kubelet[2795]: E0514 09:27:33.568401 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.570099 kubelet[2795]: E0514 09:27:33.570084 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.570288 kubelet[2795]: W0514 09:27:33.570170 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.570288 kubelet[2795]: E0514 09:27:33.570196 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.571801 kubelet[2795]: E0514 09:27:33.571751 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.571801 kubelet[2795]: W0514 09:27:33.571768 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.571957 kubelet[2795]: E0514 09:27:33.571929 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.572320 kubelet[2795]: E0514 09:27:33.572289 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.572320 kubelet[2795]: W0514 09:27:33.572319 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.572437 kubelet[2795]: E0514 09:27:33.572346 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.574360 kubelet[2795]: E0514 09:27:33.574334 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.574360 kubelet[2795]: W0514 09:27:33.574357 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.574473 kubelet[2795]: E0514 09:27:33.574379 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 09:27:33.587494 kubelet[2795]: E0514 09:27:33.587393 2795 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 09:27:33.587494 kubelet[2795]: W0514 09:27:33.587427 2795 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 09:27:33.587494 kubelet[2795]: E0514 09:27:33.587450 2795 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 09:27:33.606427 systemd[1]: Started cri-containerd-4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19.scope - libcontainer container 4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19. May 14 09:27:33.773465 containerd[1530]: time="2025-05-14T09:27:33.773372867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jzjd9,Uid:0e2e590c-07df-45f8-bed9-40a84022361e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\"" May 14 09:27:33.778573 containerd[1530]: time="2025-05-14T09:27:33.777809464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 09:27:33.785620 containerd[1530]: time="2025-05-14T09:27:33.785542396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f6b49f6f6-gdmwd,Uid:53f26251-a1e3-4443-bba7-96dd288e7cd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab\"" May 14 09:27:34.836988 kubelet[2795]: E0514 09:27:34.836659 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:36.831968 containerd[1530]: time="2025-05-14T09:27:36.831904985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:36.833852 containerd[1530]: time="2025-05-14T09:27:36.833811517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 09:27:36.837004 containerd[1530]: time="2025-05-14T09:27:36.835442530Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:36.837152 kubelet[2795]: E0514 09:27:36.836535 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:36.841942 containerd[1530]: time="2025-05-14T09:27:36.841104086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 3.063228096s" May 14 09:27:36.842285 containerd[1530]: time="2025-05-14T09:27:36.842044287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 09:27:36.842285 containerd[1530]: time="2025-05-14T09:27:36.841117711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 14 09:27:36.846721 containerd[1530]: time="2025-05-14T09:27:36.846545287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 09:27:36.849545 containerd[1530]: time="2025-05-14T09:27:36.849508489Z" level=info msg="CreateContainer within sandbox \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 09:27:36.870406 containerd[1530]: time="2025-05-14T09:27:36.869642952Z" level=info msg="Container b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:36.876611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266361304.mount: Deactivated successfully. May 14 09:27:36.892395 containerd[1530]: time="2025-05-14T09:27:36.892328450Z" level=info msg="CreateContainer within sandbox \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\"" May 14 09:27:36.894280 containerd[1530]: time="2025-05-14T09:27:36.894209204Z" level=info msg="StartContainer for \"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\"" May 14 09:27:36.898343 containerd[1530]: time="2025-05-14T09:27:36.898262109Z" level=info msg="connecting to shim b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4" address="unix:///run/containerd/s/73f41b3743ddd2662a5f845a1c17060268b4e176ae4f8ed188672468d655aafc" protocol=ttrpc version=3 May 14 09:27:36.943587 systemd[1]: Started cri-containerd-b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4.scope - libcontainer container b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4. May 14 09:27:37.033118 containerd[1530]: time="2025-05-14T09:27:37.033016633Z" level=info msg="StartContainer for \"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\" returns successfully" May 14 09:27:37.056459 systemd[1]: cri-containerd-b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4.scope: Deactivated successfully. May 14 09:27:37.064821 containerd[1530]: time="2025-05-14T09:27:37.064611179Z" level=info msg="received exit event container_id:\"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\" id:\"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\" pid:3359 exited_at:{seconds:1747214857 nanos:62689229}" May 14 09:27:37.065249 containerd[1530]: time="2025-05-14T09:27:37.064672886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\" id:\"b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4\" pid:3359 exited_at:{seconds:1747214857 nanos:62689229}" May 14 09:27:37.102865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4-rootfs.mount: Deactivated successfully. 
May 14 09:27:38.837262 kubelet[2795]: E0514 09:27:38.836118 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:40.602014 containerd[1530]: time="2025-05-14T09:27:40.601883860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:40.604025 containerd[1530]: time="2025-05-14T09:27:40.604002688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 09:27:40.605194 containerd[1530]: time="2025-05-14T09:27:40.605148564Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:40.609168 containerd[1530]: time="2025-05-14T09:27:40.608539035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:40.609414 containerd[1530]: time="2025-05-14T09:27:40.609154383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.76257357s" May 14 09:27:40.609505 containerd[1530]: time="2025-05-14T09:27:40.609489635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 09:27:40.610635 containerd[1530]: time="2025-05-14T09:27:40.610602659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 09:27:40.626374 containerd[1530]: time="2025-05-14T09:27:40.626199724Z" level=info msg="CreateContainer within sandbox \"58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 09:27:40.645279 containerd[1530]: time="2025-05-14T09:27:40.643305237Z" level=info msg="Container 99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:40.660610 containerd[1530]: time="2025-05-14T09:27:40.660458209Z" level=info msg="CreateContainer within sandbox \"58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a\"" May 14 09:27:40.662924 containerd[1530]: time="2025-05-14T09:27:40.661644522Z" level=info msg="StartContainer for \"99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a\"" May 14 09:27:40.664721 containerd[1530]: time="2025-05-14T09:27:40.664661290Z" level=info msg="connecting to shim 99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a" address="unix:///run/containerd/s/b64e86c3e2d5e08f7f8f7d2a56432d21349e1bd029563d8a854d98c7a333805d" protocol=ttrpc version=3 May 14 09:27:40.697678 systemd[1]: Started 
cri-containerd-99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a.scope - libcontainer container 99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a. May 14 09:27:40.779789 containerd[1530]: time="2025-05-14T09:27:40.779690658Z" level=info msg="StartContainer for \"99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a\" returns successfully" May 14 09:27:40.836077 kubelet[2795]: E0514 09:27:40.835745 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:42.005017 kubelet[2795]: I0514 09:27:42.004892 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 09:27:42.837510 kubelet[2795]: E0514 09:27:42.837351 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:44.837942 kubelet[2795]: E0514 09:27:44.837205 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:46.837128 kubelet[2795]: E0514 09:27:46.835751 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:47.066028 containerd[1530]: time="2025-05-14T09:27:47.065875526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:47.068186 containerd[1530]: time="2025-05-14T09:27:47.067960916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 09:27:47.069854 containerd[1530]: time="2025-05-14T09:27:47.069802638Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:47.072654 containerd[1530]: time="2025-05-14T09:27:47.072618220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:47.074036 containerd[1530]: time="2025-05-14T09:27:47.073622979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.46297317s" May 14 09:27:47.074036 containerd[1530]: time="2025-05-14T09:27:47.073680467Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 09:27:47.079365 containerd[1530]: time="2025-05-14T09:27:47.078635540Z" level=info msg="CreateContainer within sandbox \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 09:27:47.095817 containerd[1530]: time="2025-05-14T09:27:47.095705125Z" level=info msg="Container d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:47.116520 containerd[1530]: time="2025-05-14T09:27:47.116469295Z" level=info msg="CreateContainer within sandbox \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\"" May 14 09:27:47.118637 containerd[1530]: time="2025-05-14T09:27:47.118611782Z" level=info msg="StartContainer for \"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\"" May 14 09:27:47.123585 containerd[1530]: time="2025-05-14T09:27:47.123540125Z" level=info msg="connecting to shim d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5" address="unix:///run/containerd/s/73f41b3743ddd2662a5f845a1c17060268b4e176ae4f8ed188672468d655aafc" protocol=ttrpc version=3 May 14 09:27:47.134964 kubelet[2795]: I0514 09:27:47.134933 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 09:27:47.158367 kubelet[2795]: I0514 09:27:47.158201 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f6b49f6f6-gdmwd" podStartSLOduration=7.335829873 podStartE2EDuration="14.1581183s" podCreationTimestamp="2025-05-14 09:27:33 +0000 UTC" firstStartedPulling="2025-05-14 09:27:33.788064182 +0000 UTC m=+12.115963530" lastFinishedPulling="2025-05-14 09:27:40.610352609 +0000 UTC m=+18.938251957" observedRunningTime="2025-05-14 09:27:41.016622935 +0000 UTC m=+19.344522283" watchObservedRunningTime="2025-05-14 09:27:47.1581183 +0000 UTC m=+25.486017648" May 14 09:27:47.171524 systemd[1]: Started cri-containerd-d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5.scope - libcontainer container d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5. May 14 09:27:47.246944 containerd[1530]: time="2025-05-14T09:27:47.246812455Z" level=info msg="StartContainer for \"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\" returns successfully" May 14 09:27:48.776687 containerd[1530]: time="2025-05-14T09:27:48.776585630Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 09:27:48.780842 systemd[1]: cri-containerd-d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5.scope: Deactivated successfully. May 14 09:27:48.781666 systemd[1]: cri-containerd-d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5.scope: Consumed 769ms CPU time, 172.8M memory peak, 154M written to disk. 
May 14 09:27:48.786923 containerd[1530]: time="2025-05-14T09:27:48.786850712Z" level=info msg="received exit event container_id:\"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\" id:\"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\" pid:3456 exited_at:{seconds:1747214868 nanos:785799487}" May 14 09:27:48.787637 containerd[1530]: time="2025-05-14T09:27:48.787367874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\" id:\"d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5\" pid:3456 exited_at:{seconds:1747214868 nanos:785799487}" May 14 09:27:48.794743 kubelet[2795]: I0514 09:27:48.794676 2795 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 09:27:48.880277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5-rootfs.mount: Deactivated successfully. May 14 09:27:48.885593 systemd[1]: Created slice kubepods-besteffort-pod77407355_a1f4_47d6_8622_4499f5d39dce.slice - libcontainer container kubepods-besteffort-pod77407355_a1f4_47d6_8622_4499f5d39dce.slice. May 14 09:27:48.900033 systemd[1]: Created slice kubepods-burstable-podc34df685_7180_4b26_bf13_5e0ba3f99796.slice - libcontainer container kubepods-burstable-podc34df685_7180_4b26_bf13_5e0ba3f99796.slice. May 14 09:27:49.054689 kubelet[2795]: I0514 09:27:48.987875 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmv5\" (UniqueName: \"kubernetes.io/projected/6ee78799-365e-4c37-a5fa-2411c2be967f-kube-api-access-fkmv5\") pod \"calico-apiserver-9b8dc9786-x97sx\" (UID: \"6ee78799-365e-4c37-a5fa-2411c2be967f\") " pod="calico-apiserver/calico-apiserver-9b8dc9786-x97sx" May 14 09:27:49.054689 kubelet[2795]: I0514 09:27:48.987928 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1ac2050-73bc-4a2f-b326-70d1f45ee87b-config-volume\") pod \"coredns-668d6bf9bc-8lhkd\" (UID: \"e1ac2050-73bc-4a2f-b326-70d1f45ee87b\") " pod="kube-system/coredns-668d6bf9bc-8lhkd" May 14 09:27:49.054689 kubelet[2795]: I0514 09:27:48.987973 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34df685-7180-4b26-bf13-5e0ba3f99796-config-volume\") pod \"coredns-668d6bf9bc-k29hf\" (UID: \"c34df685-7180-4b26-bf13-5e0ba3f99796\") " pod="kube-system/coredns-668d6bf9bc-k29hf" May 14 09:27:49.054689 kubelet[2795]: I0514 09:27:48.988010 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52e2d3c8-7cff-4d49-93a3-0467c68d9c00-tigera-ca-bundle\") pod \"calico-kube-controllers-574cbb6c8b-m89vn\" (UID: \"52e2d3c8-7cff-4d49-93a3-0467c68d9c00\") " pod="calico-system/calico-kube-controllers-574cbb6c8b-m89vn" May 14 09:27:49.054689 kubelet[2795]: I0514 09:27:48.988032 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsldj\" (UniqueName: \"kubernetes.io/projected/52e2d3c8-7cff-4d49-93a3-0467c68d9c00-kube-api-access-rsldj\") pod \"calico-kube-controllers-574cbb6c8b-m89vn\" (UID: \"52e2d3c8-7cff-4d49-93a3-0467c68d9c00\") " pod="calico-system/calico-kube-controllers-574cbb6c8b-m89vn" May 14 09:27:48.910331 
systemd[1]: Created slice kubepods-besteffort-pod52e2d3c8_7cff_4d49_93a3_0467c68d9c00.slice - libcontainer container kubepods-besteffort-pod52e2d3c8_7cff_4d49_93a3_0467c68d9c00.slice. May 14 09:27:49.055364 kubelet[2795]: I0514 09:27:48.988184 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6ee78799-365e-4c37-a5fa-2411c2be967f-calico-apiserver-certs\") pod \"calico-apiserver-9b8dc9786-x97sx\" (UID: \"6ee78799-365e-4c37-a5fa-2411c2be967f\") " pod="calico-apiserver/calico-apiserver-9b8dc9786-x97sx" May 14 09:27:49.055364 kubelet[2795]: I0514 09:27:48.988370 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmpzq\" (UniqueName: \"kubernetes.io/projected/e1ac2050-73bc-4a2f-b326-70d1f45ee87b-kube-api-access-gmpzq\") pod \"coredns-668d6bf9bc-8lhkd\" (UID: \"e1ac2050-73bc-4a2f-b326-70d1f45ee87b\") " pod="kube-system/coredns-668d6bf9bc-8lhkd" May 14 09:27:49.055364 kubelet[2795]: I0514 09:27:48.988416 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5rg\" (UniqueName: \"kubernetes.io/projected/c34df685-7180-4b26-bf13-5e0ba3f99796-kube-api-access-jx5rg\") pod \"coredns-668d6bf9bc-k29hf\" (UID: \"c34df685-7180-4b26-bf13-5e0ba3f99796\") " pod="kube-system/coredns-668d6bf9bc-k29hf" May 14 09:27:49.055364 kubelet[2795]: I0514 09:27:48.988466 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f051f547-b210-4647-989a-a45e046ab10d-calico-apiserver-certs\") pod \"calico-apiserver-9b8dc9786-v42zd\" (UID: \"f051f547-b210-4647-989a-a45e046ab10d\") " pod="calico-apiserver/calico-apiserver-9b8dc9786-v42zd" May 14 09:27:49.055364 kubelet[2795]: I0514 09:27:48.988496 2795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsgz\" (UniqueName: \"kubernetes.io/projected/f051f547-b210-4647-989a-a45e046ab10d-kube-api-access-hxsgz\") pod \"calico-apiserver-9b8dc9786-v42zd\" (UID: \"f051f547-b210-4647-989a-a45e046ab10d\") " pod="calico-apiserver/calico-apiserver-9b8dc9786-v42zd" May 14 09:27:48.917469 systemd[1]: Created slice kubepods-besteffort-pod6ee78799_365e_4c37_a5fa_2411c2be967f.slice - libcontainer container kubepods-besteffort-pod6ee78799_365e_4c37_a5fa_2411c2be967f.slice. May 14 09:27:48.925890 systemd[1]: Created slice kubepods-burstable-pode1ac2050_73bc_4a2f_b326_70d1f45ee87b.slice - libcontainer container kubepods-burstable-pode1ac2050_73bc_4a2f_b326_70d1f45ee87b.slice. May 14 09:27:48.932374 systemd[1]: Created slice kubepods-besteffort-podf051f547_b210_4647_989a_a45e046ab10d.slice - libcontainer container kubepods-besteffort-podf051f547_b210_4647_989a_a45e046ab10d.slice. 
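
The "Created slice" entries show the kubelet's systemd cgroup driver at work: each newly scheduled pod gets its own slice under a QoS-class parent (kubepods-besteffort-..., kubepods-burstable-...), with the pod UID embedded after its dashes are swapped for underscores. That is how the UIDs in the VerifyControllerAttachedVolume lines map onto the slice names. A small sketch of that naming pattern follows; the helper name is hypothetical, the pattern itself is taken directly from the log.

```go
// Hypothetical helper reproducing the slice naming visible above: the kubelet's
// systemd cgroup driver places each pod in kubepods-<qos>-pod<uid>.slice with
// the dashes in the UID replaced by underscores. Not kubelet code, just the
// pattern observed in this log.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-besteffort-pod52e2d3c8_7cff_4d49_93a3_0467c68d9c00.slice" above.
	fmt.Println(podSliceName("besteffort", "52e2d3c8-7cff-4d49-93a3-0467c68d9c00"))
}
```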
May 14 09:27:49.060737 containerd[1530]: time="2025-05-14T09:27:49.060654034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kmcj,Uid:77407355-a1f4-47d6-8622-4499f5d39dce,Namespace:calico-system,Attempt:0,}" May 14 09:27:49.365307 containerd[1530]: time="2025-05-14T09:27:49.364805962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574cbb6c8b-m89vn,Uid:52e2d3c8-7cff-4d49-93a3-0467c68d9c00,Namespace:calico-system,Attempt:0,}" May 14 09:27:49.367420 containerd[1530]: time="2025-05-14T09:27:49.366737371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-x97sx,Uid:6ee78799-365e-4c37-a5fa-2411c2be967f,Namespace:calico-apiserver,Attempt:0,}" May 14 09:27:49.367420 containerd[1530]: time="2025-05-14T09:27:49.366838862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29hf,Uid:c34df685-7180-4b26-bf13-5e0ba3f99796,Namespace:kube-system,Attempt:0,}" May 14 09:27:49.367420 containerd[1530]: time="2025-05-14T09:27:49.367016967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-v42zd,Uid:f051f547-b210-4647-989a-a45e046ab10d,Namespace:calico-apiserver,Attempt:0,}" May 14 09:27:49.367420 containerd[1530]: time="2025-05-14T09:27:49.367403803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lhkd,Uid:e1ac2050-73bc-4a2f-b326-70d1f45ee87b,Namespace:kube-system,Attempt:0,}" May 14 09:27:49.915504 containerd[1530]: time="2025-05-14T09:27:49.915417215Z" level=error msg="Failed to destroy network for sandbox \"34bc0dfcd87e8743a53e6890e200a52a34994c7150603d7ac731d93f8b986b82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.919421 containerd[1530]: time="2025-05-14T09:27:49.919354343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-x97sx,Uid:6ee78799-365e-4c37-a5fa-2411c2be967f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bc0dfcd87e8743a53e6890e200a52a34994c7150603d7ac731d93f8b986b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.919569 systemd[1]: run-netns-cni\x2d35a8292e\x2da462\x2d0d38\x2de32b\x2d024bb86447ac.mount: Deactivated successfully. 
May 14 09:27:49.921822 kubelet[2795]: E0514 09:27:49.919644 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bc0dfcd87e8743a53e6890e200a52a34994c7150603d7ac731d93f8b986b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.921822 kubelet[2795]: E0514 09:27:49.919772 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bc0dfcd87e8743a53e6890e200a52a34994c7150603d7ac731d93f8b986b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b8dc9786-x97sx" May 14 09:27:49.921822 kubelet[2795]: E0514 09:27:49.919826 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34bc0dfcd87e8743a53e6890e200a52a34994c7150603d7ac731d93f8b986b82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b8dc9786-x97sx" May 14 09:27:49.922664 kubelet[2795]: E0514 09:27:49.919884 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b8dc9786-x97sx_calico-apiserver(6ee78799-365e-4c37-a5fa-2411c2be967f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b8dc9786-x97sx_calico-apiserver(6ee78799-365e-4c37-a5fa-2411c2be967f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34bc0dfcd87e8743a53e6890e200a52a34994c7150603d7ac731d93f8b986b82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b8dc9786-x97sx" podUID="6ee78799-365e-4c37-a5fa-2411c2be967f" May 14 09:27:49.938259 containerd[1530]: time="2025-05-14T09:27:49.936399543Z" level=error msg="Failed to destroy network for sandbox \"7d3eeaecdfbebc24518f3184d535e678d13837c3e7379a7ea2296d4adc689ad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.940677 containerd[1530]: time="2025-05-14T09:27:49.940253115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kmcj,Uid:77407355-a1f4-47d6-8622-4499f5d39dce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d3eeaecdfbebc24518f3184d535e678d13837c3e7379a7ea2296d4adc689ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.940487 systemd[1]: run-netns-cni\x2d079cb0b7\x2dea86\x2d2e1e\x2d59fa\x2d639d8c08d301.mount: Deactivated successfully. 
May 14 09:27:49.941353 kubelet[2795]: E0514 09:27:49.941038 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d3eeaecdfbebc24518f3184d535e678d13837c3e7379a7ea2296d4adc689ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.941353 kubelet[2795]: E0514 09:27:49.941103 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d3eeaecdfbebc24518f3184d535e678d13837c3e7379a7ea2296d4adc689ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:49.941353 kubelet[2795]: E0514 09:27:49.941128 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d3eeaecdfbebc24518f3184d535e678d13837c3e7379a7ea2296d4adc689ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kmcj" May 14 09:27:49.941513 kubelet[2795]: E0514 09:27:49.941170 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7kmcj_calico-system(77407355-a1f4-47d6-8622-4499f5d39dce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7kmcj_calico-system(77407355-a1f4-47d6-8622-4499f5d39dce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d3eeaecdfbebc24518f3184d535e678d13837c3e7379a7ea2296d4adc689ad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kmcj" podUID="77407355-a1f4-47d6-8622-4499f5d39dce" May 14 09:27:49.972261 containerd[1530]: time="2025-05-14T09:27:49.972136879Z" level=error msg="Failed to destroy network for sandbox \"55ae8fec0961fccb7667a54cef4bbd18b563bd01c816053b6458f9b6c5262b71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.974488 systemd[1]: run-netns-cni\x2de18f4426\x2d4bd8\x2d51d2\x2d3b4c\x2d8520f69acee2.mount: Deactivated successfully. May 14 09:27:49.976669 containerd[1530]: time="2025-05-14T09:27:49.976625835Z" level=error msg="Failed to destroy network for sandbox \"7be7de9d328b39755485e91937777b6cd0d01e3a129a932a3e6080e5a462faa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.979998 systemd[1]: run-netns-cni\x2dddb938d6\x2d7301\x2d72bd\x2d0ba2\x2d03d70bec7ec5.mount: Deactivated successfully. 
May 14 09:27:49.980748 containerd[1530]: time="2025-05-14T09:27:49.980441625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-v42zd,Uid:f051f547-b210-4647-989a-a45e046ab10d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae8fec0961fccb7667a54cef4bbd18b563bd01c816053b6458f9b6c5262b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.983025 kubelet[2795]: E0514 09:27:49.982106 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae8fec0961fccb7667a54cef4bbd18b563bd01c816053b6458f9b6c5262b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.983025 kubelet[2795]: E0514 09:27:49.982168 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae8fec0961fccb7667a54cef4bbd18b563bd01c816053b6458f9b6c5262b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b8dc9786-v42zd" May 14 09:27:49.983025 kubelet[2795]: E0514 09:27:49.982199 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae8fec0961fccb7667a54cef4bbd18b563bd01c816053b6458f9b6c5262b71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b8dc9786-v42zd" May 14 09:27:49.983559 kubelet[2795]: E0514 09:27:49.983292 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b8dc9786-v42zd_calico-apiserver(f051f547-b210-4647-989a-a45e046ab10d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b8dc9786-v42zd_calico-apiserver(f051f547-b210-4647-989a-a45e046ab10d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55ae8fec0961fccb7667a54cef4bbd18b563bd01c816053b6458f9b6c5262b71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b8dc9786-v42zd" podUID="f051f547-b210-4647-989a-a45e046ab10d" May 14 09:27:49.986347 containerd[1530]: time="2025-05-14T09:27:49.986293522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lhkd,Uid:e1ac2050-73bc-4a2f-b326-70d1f45ee87b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be7de9d328b39755485e91937777b6cd0d01e3a129a932a3e6080e5a462faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.986942 kubelet[2795]: E0514 09:27:49.986708 2795 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be7de9d328b39755485e91937777b6cd0d01e3a129a932a3e6080e5a462faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.986942 kubelet[2795]: E0514 09:27:49.986769 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be7de9d328b39755485e91937777b6cd0d01e3a129a932a3e6080e5a462faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8lhkd" May 14 09:27:49.986942 kubelet[2795]: E0514 09:27:49.986799 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be7de9d328b39755485e91937777b6cd0d01e3a129a932a3e6080e5a462faa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8lhkd" May 14 09:27:49.987156 kubelet[2795]: E0514 09:27:49.986854 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8lhkd_kube-system(e1ac2050-73bc-4a2f-b326-70d1f45ee87b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8lhkd_kube-system(e1ac2050-73bc-4a2f-b326-70d1f45ee87b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7be7de9d328b39755485e91937777b6cd0d01e3a129a932a3e6080e5a462faa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8lhkd" podUID="e1ac2050-73bc-4a2f-b326-70d1f45ee87b" May 14 09:27:49.993628 containerd[1530]: time="2025-05-14T09:27:49.993576558Z" level=error msg="Failed to destroy network for sandbox \"64461010d7fe1515d9ee7533882fe7abe70383d260dfc450e6bc4cf75607151c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.995459 containerd[1530]: time="2025-05-14T09:27:49.995419520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29hf,Uid:c34df685-7180-4b26-bf13-5e0ba3f99796,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64461010d7fe1515d9ee7533882fe7abe70383d260dfc450e6bc4cf75607151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:49.996173 kubelet[2795]: E0514 09:27:49.995621 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64461010d7fe1515d9ee7533882fe7abe70383d260dfc450e6bc4cf75607151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 
09:27:49.996173 kubelet[2795]: E0514 09:27:49.995680 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64461010d7fe1515d9ee7533882fe7abe70383d260dfc450e6bc4cf75607151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k29hf" May 14 09:27:49.996173 kubelet[2795]: E0514 09:27:49.995704 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64461010d7fe1515d9ee7533882fe7abe70383d260dfc450e6bc4cf75607151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k29hf" May 14 09:27:49.996661 kubelet[2795]: E0514 09:27:49.995746 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k29hf_kube-system(c34df685-7180-4b26-bf13-5e0ba3f99796)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k29hf_kube-system(c34df685-7180-4b26-bf13-5e0ba3f99796)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64461010d7fe1515d9ee7533882fe7abe70383d260dfc450e6bc4cf75607151c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k29hf" podUID="c34df685-7180-4b26-bf13-5e0ba3f99796" May 14 09:27:49.997670 containerd[1530]: time="2025-05-14T09:27:49.997618112Z" level=error msg="Failed to destroy network for sandbox \"b08c87ed26e5b06c32b32f48d1aa13991f9e59c4af0adcee872cd6b42e2c978c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:50.000187 containerd[1530]: time="2025-05-14T09:27:49.999337463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574cbb6c8b-m89vn,Uid:52e2d3c8-7cff-4d49-93a3-0467c68d9c00,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08c87ed26e5b06c32b32f48d1aa13991f9e59c4af0adcee872cd6b42e2c978c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:50.000383 kubelet[2795]: E0514 09:27:49.999697 2795 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08c87ed26e5b06c32b32f48d1aa13991f9e59c4af0adcee872cd6b42e2c978c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 09:27:50.000383 kubelet[2795]: E0514 09:27:49.999744 2795 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08c87ed26e5b06c32b32f48d1aa13991f9e59c4af0adcee872cd6b42e2c978c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-574cbb6c8b-m89vn" May 14 09:27:50.000383 kubelet[2795]: E0514 09:27:49.999764 2795 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08c87ed26e5b06c32b32f48d1aa13991f9e59c4af0adcee872cd6b42e2c978c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-574cbb6c8b-m89vn" May 14 09:27:50.000634 kubelet[2795]: E0514 09:27:49.999803 2795 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-574cbb6c8b-m89vn_calico-system(52e2d3c8-7cff-4d49-93a3-0467c68d9c00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-574cbb6c8b-m89vn_calico-system(52e2d3c8-7cff-4d49-93a3-0467c68d9c00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b08c87ed26e5b06c32b32f48d1aa13991f9e59c4af0adcee872cd6b42e2c978c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-574cbb6c8b-m89vn" podUID="52e2d3c8-7cff-4d49-93a3-0467c68d9c00" May 14 09:27:50.051989 containerd[1530]: time="2025-05-14T09:27:50.051899132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 09:27:50.880990 systemd[1]: run-netns-cni\x2d995ea301\x2ddce0\x2deb39\x2d1b41\x2d0445b80d601b.mount: Deactivated successfully. May 14 09:27:50.881275 systemd[1]: run-netns-cni\x2d1868c25b\x2d4aca\x2d21f5\x2d85a0\x2d9081654e18c5.mount: Deactivated successfully. May 14 09:27:59.466793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452500730.mount: Deactivated successfully. 
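
Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico-node writes into its hostPath mount of /var/lib/calico/ once it is running. Until the calico/node image finishes pulling and the container starts (which happens just below), each pod that needs a network stays in the CreatePodSandboxError retry loop. The sketch below only captures the shape of that gating check, not Calico's source.

```go
// Shape of the readiness gate implied by the sandbox errors above: the CNI
// plugin needs /var/lib/calico/nodename, which calico-node writes once it is
// running. Illustrative only; the real check is inside the Calico CNI plugin.
package main

import (
	"fmt"
	"os"
)

func calicoNodeReady() error {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Println("CNI ADD would fail:", err)
		return
	}
	fmt.Println("calico/node has written its nodename; pod networking can proceed")
}
```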
May 14 09:27:59.521470 containerd[1530]: time="2025-05-14T09:27:59.521321303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:59.523302 containerd[1530]: time="2025-05-14T09:27:59.523268700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 09:27:59.525044 containerd[1530]: time="2025-05-14T09:27:59.525000741Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:59.528327 containerd[1530]: time="2025-05-14T09:27:59.528295818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:27:59.529417 containerd[1530]: time="2025-05-14T09:27:59.529371347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.476903997s" May 14 09:27:59.529538 containerd[1530]: time="2025-05-14T09:27:59.529519335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 09:27:59.548497 containerd[1530]: time="2025-05-14T09:27:59.548441569Z" level=info msg="CreateContainer within sandbox \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 09:27:59.565268 containerd[1530]: time="2025-05-14T09:27:59.563374114Z" level=info msg="Container b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da: CDI devices from CRI Config.CDIDevices: []" May 14 09:27:59.584306 containerd[1530]: time="2025-05-14T09:27:59.584223347Z" level=info msg="CreateContainer within sandbox \"4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\"" May 14 09:27:59.586499 containerd[1530]: time="2025-05-14T09:27:59.586461208Z" level=info msg="StartContainer for \"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\"" May 14 09:27:59.589319 containerd[1530]: time="2025-05-14T09:27:59.589221591Z" level=info msg="connecting to shim b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da" address="unix:///run/containerd/s/73f41b3743ddd2662a5f845a1c17060268b4e176ae4f8ed188672468d655aafc" protocol=ttrpc version=3 May 14 09:27:59.642811 systemd[1]: Started cri-containerd-b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da.scope - libcontainer container b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da. May 14 09:27:59.732943 containerd[1530]: time="2025-05-14T09:27:59.732352046Z" level=info msg="StartContainer for \"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" returns successfully" May 14 09:27:59.843966 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 09:27:59.845089 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. May 14 09:28:00.143256 kubelet[2795]: I0514 09:28:00.142988 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jzjd9" podStartSLOduration=1.3878246779999999 podStartE2EDuration="27.14219773s" podCreationTimestamp="2025-05-14 09:27:33 +0000 UTC" firstStartedPulling="2025-05-14 09:27:33.776365411 +0000 UTC m=+12.104264769" lastFinishedPulling="2025-05-14 09:27:59.530738473 +0000 UTC m=+37.858637821" observedRunningTime="2025-05-14 09:28:00.141222288 +0000 UTC m=+38.469121666" watchObservedRunningTime="2025-05-14 09:28:00.14219773 +0000 UTC m=+38.470097078" May 14 09:28:01.838028 containerd[1530]: time="2025-05-14T09:28:01.837967577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574cbb6c8b-m89vn,Uid:52e2d3c8-7cff-4d49-93a3-0467c68d9c00,Namespace:calico-system,Attempt:0,}" May 14 09:28:02.327535 systemd-networkd[1447]: cali0829d1cf9d7: Link UP May 14 09:28:02.328545 systemd-networkd[1447]: cali0829d1cf9d7: Gained carrier May 14 09:28:02.431982 systemd-networkd[1447]: vxlan.calico: Link UP May 14 09:28:02.431994 systemd-networkd[1447]: vxlan.calico: Gained carrier May 14 09:28:02.623272 containerd[1530]: 2025-05-14 09:28:01.927 [INFO][3864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 09:28:02.623272 containerd[1530]: 2025-05-14 09:28:01.992 [INFO][3864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0 calico-kube-controllers-574cbb6c8b- calico-system 52e2d3c8-7cff-4d49-93a3-0467c68d9c00 683 0 2025-05-14 09:27:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:574cbb6c8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334-0-0-n-74e04034e7.novalocal calico-kube-controllers-574cbb6c8b-m89vn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0829d1cf9d7 [] []}} ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-" May 14 09:28:02.623272 containerd[1530]: 2025-05-14 09:28:01.992 [INFO][3864] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.623272 containerd[1530]: 2025-05-14 09:28:02.071 [INFO][3888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" HandleID="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.083 [INFO][3888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" 
HandleID="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334-0-0-n-74e04034e7.novalocal", "pod":"calico-kube-controllers-574cbb6c8b-m89vn", "timestamp":"2025-05-14 09:28:02.071157049 +0000 UTC"}, Hostname:"ci-4334-0-0-n-74e04034e7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.083 [INFO][3888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.083 [INFO][3888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.084 [INFO][3888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal' May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.086 [INFO][3888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.092 [INFO][3888] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.098 [INFO][3888] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.102 [INFO][3888] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.623917 containerd[1530]: 2025-05-14 09:28:02.105 [INFO][3888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.105 [INFO][3888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.107 [INFO][3888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294 May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.171 [INFO][3888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.190 [INFO][3888] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.129/26] block=192.168.30.128/26 handle="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.190 [INFO][3888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.129/26] handle="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" 
host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.190 [INFO][3888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 09:28:02.625543 containerd[1530]: 2025-05-14 09:28:02.190 [INFO][3888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.129/26] IPv6=[] ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" HandleID="k8s-pod-network.00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.625887 containerd[1530]: 2025-05-14 09:28:02.194 [INFO][3864] cni-plugin/k8s.go 386: Populated endpoint ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0", GenerateName:"calico-kube-controllers-574cbb6c8b-", Namespace:"calico-system", SelfLink:"", UID:"52e2d3c8-7cff-4d49-93a3-0467c68d9c00", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"574cbb6c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"", Pod:"calico-kube-controllers-574cbb6c8b-m89vn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0829d1cf9d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:02.626014 containerd[1530]: 2025-05-14 09:28:02.194 [INFO][3864] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.129/32] ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.626014 containerd[1530]: 2025-05-14 09:28:02.194 [INFO][3864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0829d1cf9d7 ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.626014 containerd[1530]: 2025-05-14 09:28:02.527 [INFO][3864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.626186 containerd[1530]: 2025-05-14 09:28:02.527 [INFO][3864] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0", GenerateName:"calico-kube-controllers-574cbb6c8b-", Namespace:"calico-system", SelfLink:"", UID:"52e2d3c8-7cff-4d49-93a3-0467c68d9c00", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"574cbb6c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294", Pod:"calico-kube-controllers-574cbb6c8b-m89vn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0829d1cf9d7", MAC:"52:c7:a8:47:b1:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:02.626351 containerd[1530]: 2025-05-14 09:28:02.615 [INFO][3864] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" Namespace="calico-system" Pod="calico-kube-controllers-574cbb6c8b-m89vn" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--kube--controllers--574cbb6c8b--m89vn-eth0" May 14 09:28:02.709980 containerd[1530]: time="2025-05-14T09:28:02.709919817Z" level=info msg="connecting to shim 00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294" address="unix:///run/containerd/s/faab16926645840842af5a7ded9313204f1f256395b48c2b9f333bc3b08db09e" namespace=k8s.io protocol=ttrpc version=3 May 14 09:28:02.811636 systemd[1]: Started cri-containerd-00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294.scope - libcontainer container 00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294. 
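The "Observed pod startup duration" entry for calico-node-jzjd9 (09:28:00.142, earlier in this span) reports figures that line up with the timestamps it quotes: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go sketch reproducing the arithmetic from the wall-clock timestamps in the log; the field names are kubelet's, the code is only an illustration:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-14 09:27:33 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-05-14 09:27:33.776365411 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-05-14 09:27:59.530738473 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-05-14 09:28:00.14219773 +0000 UTC")    // watchObservedRunningTime

	e2e := running.Sub(created)          // 27.14219773s, the podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ≈ 1.3878s, the podStartSLOduration (tiny drift vs. the log is monotonic-clock rounding)
	fmt.Println("E2E:", e2e, "SLO (E2E minus image pull):", slo)
}
```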
May 14 09:28:02.839552 containerd[1530]: time="2025-05-14T09:28:02.838486417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29hf,Uid:c34df685-7180-4b26-bf13-5e0ba3f99796,Namespace:kube-system,Attempt:0,}" May 14 09:28:02.840859 containerd[1530]: time="2025-05-14T09:28:02.840647563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-x97sx,Uid:6ee78799-365e-4c37-a5fa-2411c2be967f,Namespace:calico-apiserver,Attempt:0,}" May 14 09:28:02.996334 containerd[1530]: time="2025-05-14T09:28:02.995807631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-574cbb6c8b-m89vn,Uid:52e2d3c8-7cff-4d49-93a3-0467c68d9c00,Namespace:calico-system,Attempt:0,} returns sandbox id \"00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294\"" May 14 09:28:03.002428 containerd[1530]: time="2025-05-14T09:28:03.002380839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 09:28:03.128976 systemd-networkd[1447]: calidb9a0dfebc0: Link UP May 14 09:28:03.130169 systemd-networkd[1447]: calidb9a0dfebc0: Gained carrier May 14 09:28:03.157094 containerd[1530]: 2025-05-14 09:28:02.981 [INFO][4011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0 calico-apiserver-9b8dc9786- calico-apiserver 6ee78799-365e-4c37-a5fa-2411c2be967f 684 0 2025-05-14 09:27:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b8dc9786 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334-0-0-n-74e04034e7.novalocal calico-apiserver-9b8dc9786-x97sx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb9a0dfebc0 [] []}} ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-" May 14 09:28:03.157094 containerd[1530]: 2025-05-14 09:28:02.982 [INFO][4011] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.157094 containerd[1530]: 2025-05-14 09:28:03.045 [INFO][4045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" HandleID="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.074 [INFO][4045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" HandleID="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031af20), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4334-0-0-n-74e04034e7.novalocal", "pod":"calico-apiserver-9b8dc9786-x97sx", "timestamp":"2025-05-14 09:28:03.045819382 +0000 UTC"}, Hostname:"ci-4334-0-0-n-74e04034e7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.075 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.075 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.075 [INFO][4045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal' May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.079 [INFO][4045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.089 [INFO][4045] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.095 [INFO][4045] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.098 [INFO][4045] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157695 containerd[1530]: 2025-05-14 09:28:03.101 [INFO][4045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.101 [INFO][4045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.103 [INFO][4045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37 May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.109 [INFO][4045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.119 [INFO][4045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.130/26] block=192.168.30.128/26 handle="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.119 [INFO][4045] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.130/26] handle="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.119 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 09:28:03.157979 containerd[1530]: 2025-05-14 09:28:03.119 [INFO][4045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.130/26] IPv6=[] ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" HandleID="k8s-pod-network.b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.158175 containerd[1530]: 2025-05-14 09:28:03.123 [INFO][4011] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0", GenerateName:"calico-apiserver-9b8dc9786-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ee78799-365e-4c37-a5fa-2411c2be967f", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b8dc9786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"", Pod:"calico-apiserver-9b8dc9786-x97sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb9a0dfebc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:03.158262 containerd[1530]: 2025-05-14 09:28:03.123 [INFO][4011] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.130/32] ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.158262 containerd[1530]: 2025-05-14 09:28:03.123 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb9a0dfebc0 ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.158262 containerd[1530]: 2025-05-14 09:28:03.128 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.158438 containerd[1530]: 
2025-05-14 09:28:03.129 [INFO][4011] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0", GenerateName:"calico-apiserver-9b8dc9786-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ee78799-365e-4c37-a5fa-2411c2be967f", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b8dc9786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37", Pod:"calico-apiserver-9b8dc9786-x97sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb9a0dfebc0", MAC:"d6:23:c6:10:97:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:03.158522 containerd[1530]: 2025-05-14 09:28:03.150 [INFO][4011] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-x97sx" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--x97sx-eth0" May 14 09:28:03.236011 containerd[1530]: time="2025-05-14T09:28:03.235759647Z" level=info msg="connecting to shim b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37" address="unix:///run/containerd/s/482863213b0bb903e059f99b5cfb7927e4d85ee2c44e72cf2ba623834a251ed0" namespace=k8s.io protocol=ttrpc version=3 May 14 09:28:03.258742 systemd-networkd[1447]: calic48fde5b1d3: Link UP May 14 09:28:03.259984 systemd-networkd[1447]: calic48fde5b1d3: Gained carrier May 14 09:28:03.284168 containerd[1530]: 2025-05-14 09:28:02.990 [INFO][4004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0 coredns-668d6bf9bc- kube-system c34df685-7180-4b26-bf13-5e0ba3f99796 680 0 2025-05-14 09:27:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334-0-0-n-74e04034e7.novalocal coredns-668d6bf9bc-k29hf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic48fde5b1d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics 
TCP 9153 0 }] []}} ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-" May 14 09:28:03.284168 containerd[1530]: 2025-05-14 09:28:02.990 [INFO][4004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.284168 containerd[1530]: 2025-05-14 09:28:03.065 [INFO][4043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" HandleID="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.086 [INFO][4043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" HandleID="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003038c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334-0-0-n-74e04034e7.novalocal", "pod":"coredns-668d6bf9bc-k29hf", "timestamp":"2025-05-14 09:28:03.065399854 +0000 UTC"}, Hostname:"ci-4334-0-0-n-74e04034e7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.086 [INFO][4043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.119 [INFO][4043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.119 [INFO][4043] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal' May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.184 [INFO][4043] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.195 [INFO][4043] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.202 [INFO][4043] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.205 [INFO][4043] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284530 containerd[1530]: 2025-05-14 09:28:03.209 [INFO][4043] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.210 [INFO][4043] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.213 [INFO][4043] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4 May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.227 [INFO][4043] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.238 [INFO][4043] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.131/26] block=192.168.30.128/26 handle="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.239 [INFO][4043] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.131/26] handle="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.239 [INFO][4043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
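The ipam/ipam.go entries above all draw from the same host-affine block, 192.168.30.128/26, handing each workload a single /32 (.129 for calico-kube-controllers, .130 for the apiserver pod, .131 for coredns so far; .132 and .133 follow). As a quick sanity check of what such a block holds, a short Go sketch (block size and first candidate addresses only; the affinity and host-wide locking shown in the log are Calico's own logic):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.30.128/26")
	total := 1 << (32 - block.Bits()) // a /26 holds 64 addresses
	fmt.Printf("block %s holds %d addresses\n", block, total)

	addr := block.Addr() // 192.168.30.128, the block's base address
	for i := 0; i < 5; i++ {
		addr = addr.Next()
		fmt.Println("next candidate:", addr) // .129, .130, .131, .132, .133
	}
}
```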
May 14 09:28:03.284869 containerd[1530]: 2025-05-14 09:28:03.239 [INFO][4043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.131/26] IPv6=[] ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" HandleID="k8s-pod-network.a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.285087 containerd[1530]: 2025-05-14 09:28:03.243 [INFO][4004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c34df685-7180-4b26-bf13-5e0ba3f99796", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-k29hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48fde5b1d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:03.285087 containerd[1530]: 2025-05-14 09:28:03.243 [INFO][4004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.131/32] ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.285087 containerd[1530]: 2025-05-14 09:28:03.243 [INFO][4004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic48fde5b1d3 ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.285087 containerd[1530]: 2025-05-14 09:28:03.261 [INFO][4004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.285087 containerd[1530]: 2025-05-14 09:28:03.262 [INFO][4004] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c34df685-7180-4b26-bf13-5e0ba3f99796", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4", Pod:"coredns-668d6bf9bc-k29hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48fde5b1d3", MAC:"02:a2:95:db:83:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:03.285087 containerd[1530]: 2025-05-14 09:28:03.278 [INFO][4004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29hf" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--k29hf-eth0" May 14 09:28:03.303907 systemd[1]: Started cri-containerd-b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37.scope - libcontainer container b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37. May 14 09:28:03.330061 containerd[1530]: time="2025-05-14T09:28:03.330012878Z" level=info msg="connecting to shim a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4" address="unix:///run/containerd/s/d5162cad86338da60f7bc26b1fe0f9661601fe0cf05c22939c2d888d88684869" namespace=k8s.io protocol=ttrpc version=3 May 14 09:28:03.391575 systemd[1]: Started cri-containerd-a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4.scope - libcontainer container a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4. 
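In the coredns WorkloadEndpoint dumps above, the container ports are printed in hex (Port:0x35, Port:0x23c1). Decoded, they are the usual CoreDNS ports, 53 for dns/dns-tcp and 9153 for metrics; a two-line check:

```go
package main

import "fmt"

func main() {
	for _, p := range []uint16{0x35, 0x23c1} {
		fmt.Printf("0x%x = %d\n", p, p) // 53 (dns, dns-tcp) and 9153 (metrics)
	}
}
```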
May 14 09:28:03.455165 containerd[1530]: time="2025-05-14T09:28:03.455077652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-x97sx,Uid:6ee78799-365e-4c37-a5fa-2411c2be967f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37\"" May 14 09:28:03.524729 containerd[1530]: time="2025-05-14T09:28:03.524498323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29hf,Uid:c34df685-7180-4b26-bf13-5e0ba3f99796,Namespace:kube-system,Attempt:0,} returns sandbox id \"a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4\"" May 14 09:28:03.537753 containerd[1530]: time="2025-05-14T09:28:03.537644668Z" level=info msg="CreateContainer within sandbox \"a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 09:28:03.566047 containerd[1530]: time="2025-05-14T09:28:03.565621160Z" level=info msg="Container 809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:03.579578 containerd[1530]: time="2025-05-14T09:28:03.579501844Z" level=info msg="CreateContainer within sandbox \"a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13\"" May 14 09:28:03.581493 containerd[1530]: time="2025-05-14T09:28:03.581461922Z" level=info msg="StartContainer for \"809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13\"" May 14 09:28:03.584060 containerd[1530]: time="2025-05-14T09:28:03.583763142Z" level=info msg="connecting to shim 809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13" address="unix:///run/containerd/s/d5162cad86338da60f7bc26b1fe0f9661601fe0cf05c22939c2d888d88684869" protocol=ttrpc version=3 May 14 09:28:03.623416 systemd[1]: Started cri-containerd-809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13.scope - libcontainer container 809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13. 
May 14 09:28:03.662586 containerd[1530]: time="2025-05-14T09:28:03.662514657Z" level=info msg="StartContainer for \"809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13\" returns successfully" May 14 09:28:03.696717 systemd-networkd[1447]: cali0829d1cf9d7: Gained IPv6LL May 14 09:28:03.822524 systemd-networkd[1447]: vxlan.calico: Gained IPv6LL May 14 09:28:03.839253 containerd[1530]: time="2025-05-14T09:28:03.839129008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kmcj,Uid:77407355-a1f4-47d6-8622-4499f5d39dce,Namespace:calico-system,Attempt:0,}" May 14 09:28:04.082875 systemd-networkd[1447]: cali53b261f7b10: Link UP May 14 09:28:04.084180 systemd-networkd[1447]: cali53b261f7b10: Gained carrier May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:03.940 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0 csi-node-driver- calico-system 77407355-a1f4-47d6-8622-4499f5d39dce 582 0 2025-05-14 09:27:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334-0-0-n-74e04034e7.novalocal csi-node-driver-7kmcj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali53b261f7b10 [] []}} ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:03.941 [INFO][4207] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.010 [INFO][4220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" HandleID="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.024 [INFO][4220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" HandleID="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334-0-0-n-74e04034e7.novalocal", "pod":"csi-node-driver-7kmcj", "timestamp":"2025-05-14 09:28:04.009907997 +0000 UTC"}, Hostname:"ci-4334-0-0-n-74e04034e7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.025 [INFO][4220] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.025 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.025 [INFO][4220] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal' May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.029 [INFO][4220] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.035 [INFO][4220] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.043 [INFO][4220] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.046 [INFO][4220] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.051 [INFO][4220] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.052 [INFO][4220] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.054 [INFO][4220] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62 May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.063 [INFO][4220] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.073 [INFO][4220] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.132/26] block=192.168.30.128/26 handle="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.073 [INFO][4220] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.132/26] handle="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.073 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 09:28:04.108272 containerd[1530]: 2025-05-14 09:28:04.073 [INFO][4220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.132/26] IPv6=[] ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" HandleID="k8s-pod-network.d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.111635 containerd[1530]: 2025-05-14 09:28:04.076 [INFO][4207] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77407355-a1f4-47d6-8622-4499f5d39dce", ResourceVersion:"582", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"", Pod:"csi-node-driver-7kmcj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53b261f7b10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:04.111635 containerd[1530]: 2025-05-14 09:28:04.076 [INFO][4207] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.132/32] ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.111635 containerd[1530]: 2025-05-14 09:28:04.076 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53b261f7b10 ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.111635 containerd[1530]: 2025-05-14 09:28:04.082 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.111635 containerd[1530]: 2025-05-14 09:28:04.084 [INFO][4207] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77407355-a1f4-47d6-8622-4499f5d39dce", ResourceVersion:"582", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62", Pod:"csi-node-driver-7kmcj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53b261f7b10", MAC:"be:52:77:38:bd:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:04.111635 containerd[1530]: 2025-05-14 09:28:04.103 [INFO][4207] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" Namespace="calico-system" Pod="csi-node-driver-7kmcj" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-csi--node--driver--7kmcj-eth0" May 14 09:28:04.161522 kubelet[2795]: I0514 09:28:04.161366 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k29hf" podStartSLOduration=39.160011393 podStartE2EDuration="39.160011393s" podCreationTimestamp="2025-05-14 09:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 09:28:04.157105689 +0000 UTC m=+42.485005057" watchObservedRunningTime="2025-05-14 09:28:04.160011393 +0000 UTC m=+42.487910741" May 14 09:28:04.206543 systemd-networkd[1447]: calidb9a0dfebc0: Gained IPv6LL May 14 09:28:04.210307 containerd[1530]: time="2025-05-14T09:28:04.210007400Z" level=info msg="connecting to shim d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62" address="unix:///run/containerd/s/5b8fb5bf6e62a6dfb956d7e5cc4fb8049c242e7973c39ed2ecb117c0f48e8ecc" namespace=k8s.io protocol=ttrpc version=3 May 14 09:28:04.269454 systemd[1]: Started cri-containerd-d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62.scope - libcontainer container d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62. 
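By this point systemd-networkd has reported cali0829d1cf9d7, calidb9a0dfebc0, calic48fde5b1d3, cali53b261f7b10 and vxlan.calico as up. If one wanted to confirm those links exist on the node, a small Go sketch using only the standard library (run on the host itself; the interface names are taken from the log):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	// Print the Calico-managed links: per-pod cali* veths and the vxlan device.
	for _, ifc := range ifaces {
		if strings.HasPrefix(ifc.Name, "cali") || ifc.Name == "vxlan.calico" {
			fmt.Printf("%-16s index=%d flags=%v\n", ifc.Name, ifc.Index, ifc.Flags)
		}
	}
}
```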
May 14 09:28:04.326655 containerd[1530]: time="2025-05-14T09:28:04.326581857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kmcj,Uid:77407355-a1f4-47d6-8622-4499f5d39dce,Namespace:calico-system,Attempt:0,} returns sandbox id \"d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62\"" May 14 09:28:04.837741 containerd[1530]: time="2025-05-14T09:28:04.837567800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-v42zd,Uid:f051f547-b210-4647-989a-a45e046ab10d,Namespace:calico-apiserver,Attempt:0,}" May 14 09:28:04.839285 containerd[1530]: time="2025-05-14T09:28:04.839114744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lhkd,Uid:e1ac2050-73bc-4a2f-b326-70d1f45ee87b,Namespace:kube-system,Attempt:0,}" May 14 09:28:05.164433 systemd-networkd[1447]: cali61869c95ab4: Link UP May 14 09:28:05.164700 systemd-networkd[1447]: cali61869c95ab4: Gained carrier May 14 09:28:05.168048 systemd-networkd[1447]: calic48fde5b1d3: Gained IPv6LL May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:04.980 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0 calico-apiserver-9b8dc9786- calico-apiserver f051f547-b210-4647-989a-a45e046ab10d 686 0 2025-05-14 09:27:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b8dc9786 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334-0-0-n-74e04034e7.novalocal calico-apiserver-9b8dc9786-v42zd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61869c95ab4 [] []}} ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:04.981 [INFO][4293] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.068 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" HandleID="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.095 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" HandleID="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334-0-0-n-74e04034e7.novalocal", "pod":"calico-apiserver-9b8dc9786-v42zd", 
"timestamp":"2025-05-14 09:28:05.068810752 +0000 UTC"}, Hostname:"ci-4334-0-0-n-74e04034e7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.095 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.095 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.095 [INFO][4309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal' May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.101 [INFO][4309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.107 [INFO][4309] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.117 [INFO][4309] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.123 [INFO][4309] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.128 [INFO][4309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.128 [INFO][4309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.130 [INFO][4309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.137 [INFO][4309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.148 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.133/26] block=192.168.30.128/26 handle="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.148 [INFO][4309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.133/26] handle="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.148 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 09:28:05.187034 containerd[1530]: 2025-05-14 09:28:05.148 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.133/26] IPv6=[] ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" HandleID="k8s-pod-network.eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.189257 containerd[1530]: 2025-05-14 09:28:05.154 [INFO][4293] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0", GenerateName:"calico-apiserver-9b8dc9786-", Namespace:"calico-apiserver", SelfLink:"", UID:"f051f547-b210-4647-989a-a45e046ab10d", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b8dc9786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"", Pod:"calico-apiserver-9b8dc9786-v42zd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61869c95ab4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:05.189257 containerd[1530]: 2025-05-14 09:28:05.154 [INFO][4293] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.133/32] ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.189257 containerd[1530]: 2025-05-14 09:28:05.154 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61869c95ab4 ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.189257 containerd[1530]: 2025-05-14 09:28:05.165 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.189257 containerd[1530]: 
2025-05-14 09:28:05.166 [INFO][4293] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0", GenerateName:"calico-apiserver-9b8dc9786-", Namespace:"calico-apiserver", SelfLink:"", UID:"f051f547-b210-4647-989a-a45e046ab10d", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b8dc9786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f", Pod:"calico-apiserver-9b8dc9786-v42zd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61869c95ab4", MAC:"66:65:38:7d:f6:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:05.189257 containerd[1530]: 2025-05-14 09:28:05.185 [INFO][4293] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" Namespace="calico-apiserver" Pod="calico-apiserver-9b8dc9786-v42zd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-calico--apiserver--9b8dc9786--v42zd-eth0" May 14 09:28:05.287309 containerd[1530]: time="2025-05-14T09:28:05.287258264Z" level=info msg="connecting to shim eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f" address="unix:///run/containerd/s/b92842dbe2804f50a6694d41b9f45fe5b985d5e8722228f723ad4421acf20454" namespace=k8s.io protocol=ttrpc version=3 May 14 09:28:05.318206 systemd-networkd[1447]: cali8bbce670efd: Link UP May 14 09:28:05.319375 systemd-networkd[1447]: cali8bbce670efd: Gained carrier May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:04.996 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0 coredns-668d6bf9bc- kube-system e1ac2050-73bc-4a2f-b326-70d1f45ee87b 685 0 2025-05-14 09:27:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334-0-0-n-74e04034e7.novalocal coredns-668d6bf9bc-8lhkd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8bbce670efd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics 
TCP 9153 0 }] []}} ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:04.997 [INFO][4286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.100 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" HandleID="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.117 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" HandleID="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cff40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334-0-0-n-74e04034e7.novalocal", "pod":"coredns-668d6bf9bc-8lhkd", "timestamp":"2025-05-14 09:28:05.100609191 +0000 UTC"}, Hostname:"ci-4334-0-0-n-74e04034e7.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.118 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.149 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.149 [INFO][4314] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal' May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.206 [INFO][4314] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.227 [INFO][4314] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.240 [INFO][4314] ipam/ipam.go 489: Trying affinity for 192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.244 [INFO][4314] ipam/ipam.go 155: Attempting to load block cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.250 [INFO][4314] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.250 [INFO][4314] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.254 [INFO][4314] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.273 [INFO][4314] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.294 [INFO][4314] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.30.134/26] block=192.168.30.128/26 handle="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.294 [INFO][4314] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.134/26] handle="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" host="ci-4334-0-0-n-74e04034e7.novalocal" May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.294 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
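Each containerd[1530] line here wraps an inner Calico CNI record that carries its own timestamp, level, request index, source file and line, and message. A small Go sketch for pulling those fields apart; the regular expression is only an assumption based on the format visible above, not an official parser:

package main

import (
	"fmt"
	"regexp"
)

// calicoLine matches the inner record format seen in the journal above, e.g.
//   2025-05-14 09:28:05.149 [INFO][4314] ipam/ipam.go 107: Auto-assign 1 ipv4, ...
var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := "2025-05-14 09:28:05.149 [INFO][4314] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-n-74e04034e7.novalocal'"

	m := calicoLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("time:   ", m[1])
	fmt.Println("level:  ", m[2])
	fmt.Println("id:     ", m[3])
	fmt.Println("source: ", m[4]+":"+m[5])
	fmt.Println("message:", m[6])
}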
May 14 09:28:05.347253 containerd[1530]: 2025-05-14 09:28:05.294 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.134/26] IPv6=[] ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" HandleID="k8s-pod-network.14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Workload="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.349579 containerd[1530]: 2025-05-14 09:28:05.301 [INFO][4286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1ac2050-73bc-4a2f-b326-70d1f45ee87b", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-8lhkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bbce670efd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:05.349579 containerd[1530]: 2025-05-14 09:28:05.301 [INFO][4286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.30.134/32] ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.349579 containerd[1530]: 2025-05-14 09:28:05.301 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bbce670efd ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.349579 containerd[1530]: 2025-05-14 09:28:05.318 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.349579 containerd[1530]: 2025-05-14 09:28:05.320 [INFO][4286] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1ac2050-73bc-4a2f-b326-70d1f45ee87b", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 9, 27, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-n-74e04034e7.novalocal", ContainerID:"14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f", Pod:"coredns-668d6bf9bc-8lhkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bbce670efd", MAC:"c2:85:36:b8:3b:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 09:28:05.349579 containerd[1530]: 2025-05-14 09:28:05.339 [INFO][4286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" Namespace="kube-system" Pod="coredns-668d6bf9bc-8lhkd" WorkloadEndpoint="ci--4334--0--0--n--74e04034e7.novalocal-k8s-coredns--668d6bf9bc--8lhkd-eth0" May 14 09:28:05.362715 systemd[1]: Started cri-containerd-eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f.scope - libcontainer container eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f. 
May 14 09:28:05.451422 containerd[1530]: time="2025-05-14T09:28:05.451328420Z" level=info msg="connecting to shim 14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f" address="unix:///run/containerd/s/12052bb518a6e6c38e2a047378dc2b7a5697bce8b63ddf301d374c97aa09dfc4" namespace=k8s.io protocol=ttrpc version=3 May 14 09:28:05.487892 systemd-networkd[1447]: cali53b261f7b10: Gained IPv6LL May 14 09:28:05.508520 systemd[1]: Started cri-containerd-14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f.scope - libcontainer container 14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f. May 14 09:28:05.518771 containerd[1530]: time="2025-05-14T09:28:05.518684895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b8dc9786-v42zd,Uid:f051f547-b210-4647-989a-a45e046ab10d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f\"" May 14 09:28:05.632379 containerd[1530]: time="2025-05-14T09:28:05.632333942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8lhkd,Uid:e1ac2050-73bc-4a2f-b326-70d1f45ee87b,Namespace:kube-system,Attempt:0,} returns sandbox id \"14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f\"" May 14 09:28:05.646362 containerd[1530]: time="2025-05-14T09:28:05.644730467Z" level=info msg="CreateContainer within sandbox \"14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 09:28:05.694674 containerd[1530]: time="2025-05-14T09:28:05.694340766Z" level=info msg="Container 667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:05.712214 containerd[1530]: time="2025-05-14T09:28:05.711966667Z" level=info msg="CreateContainer within sandbox \"14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d\"" May 14 09:28:05.714440 containerd[1530]: time="2025-05-14T09:28:05.714395525Z" level=info msg="StartContainer for \"667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d\"" May 14 09:28:05.717969 containerd[1530]: time="2025-05-14T09:28:05.717085825Z" level=info msg="connecting to shim 667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d" address="unix:///run/containerd/s/12052bb518a6e6c38e2a047378dc2b7a5697bce8b63ddf301d374c97aa09dfc4" protocol=ttrpc version=3 May 14 09:28:05.758513 systemd[1]: Started cri-containerd-667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d.scope - libcontainer container 667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d. 
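The "connecting to shim" entries give each shim's control socket as a unix:// address under /run/containerd/s/ and name the wire protocol (ttrpc, version 3). As a rough illustration only, this is how a Go client would open such a socket; the socket path is copied from the log purely as an example and will not exist elsewhere, and the actual exchange on it is ttrpc-framed, which is not shown:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim socket path taken from the coredns sandbox entry above.
	const shimSock = "/run/containerd/s/12052bb518a6e6c38e2a047378dc2b7a5697bce8b63ddf301d374c97aa09dfc4"

	conn, err := net.DialTimeout("unix", shimSock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected outside this host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
	// Real clients speak ttrpc over this connection rather than raw bytes.
}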
May 14 09:28:05.843557 containerd[1530]: time="2025-05-14T09:28:05.843477917Z" level=info msg="StartContainer for \"667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d\" returns successfully" May 14 09:28:06.193528 kubelet[2795]: I0514 09:28:06.193019 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8lhkd" podStartSLOduration=40.193000425 podStartE2EDuration="40.193000425s" podCreationTimestamp="2025-05-14 09:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 09:28:06.190728743 +0000 UTC m=+44.518628101" watchObservedRunningTime="2025-05-14 09:28:06.193000425 +0000 UTC m=+44.520899783" May 14 09:28:06.262014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112474554.mount: Deactivated successfully. May 14 09:28:06.574445 systemd-networkd[1447]: cali8bbce670efd: Gained IPv6LL May 14 09:28:07.022776 systemd-networkd[1447]: cali61869c95ab4: Gained IPv6LL May 14 09:28:08.227304 containerd[1530]: time="2025-05-14T09:28:08.226789847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:08.228769 containerd[1530]: time="2025-05-14T09:28:08.228713818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 09:28:08.230683 containerd[1530]: time="2025-05-14T09:28:08.230622379Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:08.234183 containerd[1530]: time="2025-05-14T09:28:08.234152404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:08.235093 containerd[1530]: time="2025-05-14T09:28:08.234964047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 5.232352876s" May 14 09:28:08.235093 containerd[1530]: time="2025-05-14T09:28:08.234995777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 09:28:08.237829 containerd[1530]: time="2025-05-14T09:28:08.237708798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 09:28:08.269253 containerd[1530]: time="2025-05-14T09:28:08.268647186Z" level=info msg="CreateContainer within sandbox \"00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 09:28:08.284343 containerd[1530]: time="2025-05-14T09:28:08.284298176Z" level=info msg="Container 9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:08.301634 containerd[1530]: time="2025-05-14T09:28:08.301591539Z" level=info msg="CreateContainer within sandbox 
\"00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\"" May 14 09:28:08.302954 containerd[1530]: time="2025-05-14T09:28:08.302553394Z" level=info msg="StartContainer for \"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\"" May 14 09:28:08.304934 containerd[1530]: time="2025-05-14T09:28:08.304897753Z" level=info msg="connecting to shim 9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808" address="unix:///run/containerd/s/faab16926645840842af5a7ded9313204f1f256395b48c2b9f333bc3b08db09e" protocol=ttrpc version=3 May 14 09:28:08.358520 systemd[1]: Started cri-containerd-9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808.scope - libcontainer container 9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808. May 14 09:28:08.447191 containerd[1530]: time="2025-05-14T09:28:08.447118039Z" level=info msg="StartContainer for \"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" returns successfully" May 14 09:28:10.319559 containerd[1530]: time="2025-05-14T09:28:10.319425240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"0ee1e2ac94c6a042e9d53216909428c9269c6a1dfe5daa06b0bce6dcc3dd874d\" pid:4544 exited_at:{seconds:1747214890 nanos:317387786}" May 14 09:28:10.344788 kubelet[2795]: I0514 09:28:10.343869 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-574cbb6c8b-m89vn" podStartSLOduration=32.10781785 podStartE2EDuration="37.343731529s" podCreationTimestamp="2025-05-14 09:27:33 +0000 UTC" firstStartedPulling="2025-05-14 09:28:03.000502453 +0000 UTC m=+41.328401812" lastFinishedPulling="2025-05-14 09:28:08.236416143 +0000 UTC m=+46.564315491" observedRunningTime="2025-05-14 09:28:09.223495703 +0000 UTC m=+47.551395131" watchObservedRunningTime="2025-05-14 09:28:10.343731529 +0000 UTC m=+48.671630877" May 14 09:28:12.449608 containerd[1530]: time="2025-05-14T09:28:12.449537651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:12.451492 containerd[1530]: time="2025-05-14T09:28:12.451437496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 09:28:12.453159 containerd[1530]: time="2025-05-14T09:28:12.453102590Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:12.456295 containerd[1530]: time="2025-05-14T09:28:12.456226312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:12.457458 containerd[1530]: time="2025-05-14T09:28:12.456957032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.219009597s" May 14 09:28:12.457458 
containerd[1530]: time="2025-05-14T09:28:12.457011915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 09:28:12.459840 containerd[1530]: time="2025-05-14T09:28:12.459807612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 09:28:12.460992 containerd[1530]: time="2025-05-14T09:28:12.460848405Z" level=info msg="CreateContainer within sandbox \"b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 09:28:12.476611 containerd[1530]: time="2025-05-14T09:28:12.476560155Z" level=info msg="Container d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:12.491519 containerd[1530]: time="2025-05-14T09:28:12.491407092Z" level=info msg="CreateContainer within sandbox \"b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2\"" May 14 09:28:12.494261 containerd[1530]: time="2025-05-14T09:28:12.493466457Z" level=info msg="StartContainer for \"d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2\"" May 14 09:28:12.495262 containerd[1530]: time="2025-05-14T09:28:12.495119558Z" level=info msg="connecting to shim d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2" address="unix:///run/containerd/s/482863213b0bb903e059f99b5cfb7927e4d85ee2c44e72cf2ba623834a251ed0" protocol=ttrpc version=3 May 14 09:28:12.536444 systemd[1]: Started cri-containerd-d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2.scope - libcontainer container d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2. 
May 14 09:28:12.605556 containerd[1530]: time="2025-05-14T09:28:12.605509146Z" level=info msg="StartContainer for \"d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2\" returns successfully" May 14 09:28:13.240029 kubelet[2795]: I0514 09:28:13.239810 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b8dc9786-x97sx" podStartSLOduration=32.241907584 podStartE2EDuration="41.239370414s" podCreationTimestamp="2025-05-14 09:27:32 +0000 UTC" firstStartedPulling="2025-05-14 09:28:03.460993887 +0000 UTC m=+41.788893235" lastFinishedPulling="2025-05-14 09:28:12.458456707 +0000 UTC m=+50.786356065" observedRunningTime="2025-05-14 09:28:13.235952641 +0000 UTC m=+51.563852039" watchObservedRunningTime="2025-05-14 09:28:13.239370414 +0000 UTC m=+51.567269812" May 14 09:28:14.220373 kubelet[2795]: I0514 09:28:14.220079 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 09:28:15.016548 containerd[1530]: time="2025-05-14T09:28:15.016490973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:15.018122 containerd[1530]: time="2025-05-14T09:28:15.018091556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 09:28:15.019969 containerd[1530]: time="2025-05-14T09:28:15.019601849Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:15.028713 containerd[1530]: time="2025-05-14T09:28:15.028482401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:15.030098 containerd[1530]: time="2025-05-14T09:28:15.029913556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.57006095s" May 14 09:28:15.030098 containerd[1530]: time="2025-05-14T09:28:15.029956837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 09:28:15.032490 containerd[1530]: time="2025-05-14T09:28:15.032467378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 09:28:15.036782 containerd[1530]: time="2025-05-14T09:28:15.035681107Z" level=info msg="CreateContainer within sandbox \"d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 09:28:15.060729 containerd[1530]: time="2025-05-14T09:28:15.060651466Z" level=info msg="Container 7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:15.083940 containerd[1530]: time="2025-05-14T09:28:15.083849109Z" level=info msg="CreateContainer within sandbox \"d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a\"" 
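The kubelet startup-latency entry above for calico-apiserver-9b8dc9786-x97sx is internally consistent: podStartSLOduration (32.241907584s) equals podStartE2EDuration (41.239370414s) minus the image-pull window (lastFinishedPulling minus firstStartedPulling), to rounding, and the E2E figure itself is within a few milliseconds of the gap between podCreationTimestamp and observedRunningTime. A short Go reproduction of that arithmetic using the wall-clock timestamps printed in the log (the m=+... suffixes are Go's monotonic readings and are dropped here):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the kubelet entry above.
	created := parse("2025-05-14 09:27:32 +0000 UTC")
	firstPull := parse("2025-05-14 09:28:03.460993887 +0000 UTC")
	lastPull := parse("2025-05-14 09:28:12.458456707 +0000 UTC")
	observed := parse("2025-05-14 09:28:13.235952641 +0000 UTC")

	e2e := observed.Sub(created)
	pull := lastPull.Sub(firstPull)
	fmt.Println("end-to-end:", e2e)      // ~41.236s (kubelet logged podStartE2EDuration=41.239370414s)
	fmt.Println("image pull:", pull)     // ~8.997s
	fmt.Println("e2e - pull:", e2e-pull) // ~32.238s (kubelet logged podStartSLOduration=32.241907584s)
}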
May 14 09:28:15.085185 containerd[1530]: time="2025-05-14T09:28:15.084983417Z" level=info msg="StartContainer for \"7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a\"" May 14 09:28:15.090553 containerd[1530]: time="2025-05-14T09:28:15.090453821Z" level=info msg="connecting to shim 7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a" address="unix:///run/containerd/s/5b8fb5bf6e62a6dfb956d7e5cc4fb8049c242e7973c39ed2ecb117c0f48e8ecc" protocol=ttrpc version=3 May 14 09:28:15.137445 systemd[1]: Started cri-containerd-7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a.scope - libcontainer container 7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a. May 14 09:28:15.198818 containerd[1530]: time="2025-05-14T09:28:15.198769411Z" level=info msg="StartContainer for \"7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a\" returns successfully" May 14 09:28:15.605489 containerd[1530]: time="2025-05-14T09:28:15.604791351Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:15.609817 containerd[1530]: time="2025-05-14T09:28:15.607045651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 09:28:15.612819 containerd[1530]: time="2025-05-14T09:28:15.612679331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 580.058615ms" May 14 09:28:15.612819 containerd[1530]: time="2025-05-14T09:28:15.612812922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 09:28:15.618285 containerd[1530]: time="2025-05-14T09:28:15.618185491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 09:28:15.628716 containerd[1530]: time="2025-05-14T09:28:15.628621020Z" level=info msg="CreateContainer within sandbox \"eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 09:28:15.649725 containerd[1530]: time="2025-05-14T09:28:15.649600412Z" level=info msg="Container 429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:15.677037 containerd[1530]: time="2025-05-14T09:28:15.675848748Z" level=info msg="CreateContainer within sandbox \"eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399\"" May 14 09:28:15.679135 containerd[1530]: time="2025-05-14T09:28:15.679064371Z" level=info msg="StartContainer for \"429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399\"" May 14 09:28:15.689100 containerd[1530]: time="2025-05-14T09:28:15.688999572Z" level=info msg="connecting to shim 429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399" address="unix:///run/containerd/s/b92842dbe2804f50a6694d41b9f45fe5b985d5e8722228f723ad4421acf20454" protocol=ttrpc version=3 May 14 09:28:15.738436 systemd[1]: Started 
cri-containerd-429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399.scope - libcontainer container 429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399. May 14 09:28:15.808067 containerd[1530]: time="2025-05-14T09:28:15.808003725Z" level=info msg="StartContainer for \"429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399\" returns successfully" May 14 09:28:16.263978 kubelet[2795]: I0514 09:28:16.263537 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b8dc9786-v42zd" podStartSLOduration=34.167588867 podStartE2EDuration="44.263037229s" podCreationTimestamp="2025-05-14 09:27:32 +0000 UTC" firstStartedPulling="2025-05-14 09:28:05.520712661 +0000 UTC m=+43.848612009" lastFinishedPulling="2025-05-14 09:28:15.616160973 +0000 UTC m=+53.944060371" observedRunningTime="2025-05-14 09:28:16.258125224 +0000 UTC m=+54.586024572" watchObservedRunningTime="2025-05-14 09:28:16.263037229 +0000 UTC m=+54.590936618" May 14 09:28:17.243115 kubelet[2795]: I0514 09:28:17.242915 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 09:28:18.230254 containerd[1530]: time="2025-05-14T09:28:18.230140922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:18.231724 containerd[1530]: time="2025-05-14T09:28:18.231571496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 09:28:18.233148 containerd[1530]: time="2025-05-14T09:28:18.233106546Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:18.236000 containerd[1530]: time="2025-05-14T09:28:18.235920606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 09:28:18.236908 containerd[1530]: time="2025-05-14T09:28:18.236571386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.617463004s" May 14 09:28:18.236908 containerd[1530]: time="2025-05-14T09:28:18.236625608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 09:28:18.240898 containerd[1530]: time="2025-05-14T09:28:18.240826048Z" level=info msg="CreateContainer within sandbox \"d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 09:28:18.253265 containerd[1530]: time="2025-05-14T09:28:18.252322316Z" level=info msg="Container bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4: CDI devices from CRI Config.CDIDevices: []" May 14 09:28:18.280202 containerd[1530]: time="2025-05-14T09:28:18.280086403Z" level=info msg="CreateContainer within sandbox 
\"d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4\"" May 14 09:28:18.281163 containerd[1530]: time="2025-05-14T09:28:18.281130682Z" level=info msg="StartContainer for \"bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4\"" May 14 09:28:18.283487 containerd[1530]: time="2025-05-14T09:28:18.283434404Z" level=info msg="connecting to shim bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4" address="unix:///run/containerd/s/5b8fb5bf6e62a6dfb956d7e5cc4fb8049c242e7973c39ed2ecb117c0f48e8ecc" protocol=ttrpc version=3 May 14 09:28:18.321453 systemd[1]: Started cri-containerd-bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4.scope - libcontainer container bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4. May 14 09:28:18.379997 containerd[1530]: time="2025-05-14T09:28:18.379918905Z" level=info msg="StartContainer for \"bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4\" returns successfully" May 14 09:28:18.997288 kubelet[2795]: I0514 09:28:18.996398 2795 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 09:28:18.997288 kubelet[2795]: I0514 09:28:18.996561 2795 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 09:28:19.310973 kubelet[2795]: I0514 09:28:19.310614 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7kmcj" podStartSLOduration=32.402632005 podStartE2EDuration="46.310555967s" podCreationTimestamp="2025-05-14 09:27:33 +0000 UTC" firstStartedPulling="2025-05-14 09:28:04.329903411 +0000 UTC m=+42.657802759" lastFinishedPulling="2025-05-14 09:28:18.237827373 +0000 UTC m=+56.565726721" observedRunningTime="2025-05-14 09:28:19.308211257 +0000 UTC m=+57.636110655" watchObservedRunningTime="2025-05-14 09:28:19.310555967 +0000 UTC m=+57.638455365" May 14 09:28:30.285788 containerd[1530]: time="2025-05-14T09:28:30.285731333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"1a394a59b6cd09560877dc6f465d65f9e2c7aba403e7afea5569c2fd2415ef32\" pid:4739 exited_at:{seconds:1747214910 nanos:282695959}" May 14 09:28:30.382373 containerd[1530]: time="2025-05-14T09:28:30.382307597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"82426b2a38b6344aecf32a22802cdf4f4a9a699e858f2bf29b7e930586a56523\" pid:4765 exited_at:{seconds:1747214910 nanos:381989766}" May 14 09:28:32.385990 kubelet[2795]: I0514 09:28:32.384967 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 09:28:40.303125 containerd[1530]: time="2025-05-14T09:28:40.302815132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"9e2481891c4a3ae281e125a660d1ae8ec819328ad7141b5b8086db870ee03607\" pid:4794 exited_at:{seconds:1747214920 nanos:302225650}" May 14 09:28:42.050028 containerd[1530]: time="2025-05-14T09:28:42.049968416Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"20e72f037e0ef838af95a43d86a5e63d0ff3f7130787a2f938a4dbbeba601287\" pid:4816 exited_at:{seconds:1747214922 nanos:49158236}" May 14 09:28:48.312589 kubelet[2795]: I0514 09:28:48.311075 2795 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 09:29:00.475499 containerd[1530]: time="2025-05-14T09:29:00.474849630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"d75d810bdbea7b770750add018ff8face1908651c98b800032230e2ef44f768f\" pid:4848 exited_at:{seconds:1747214940 nanos:469049717}" May 14 09:29:10.309281 containerd[1530]: time="2025-05-14T09:29:10.309117648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"41911bca1ca817fe194233407d7f6c3fc07bbff80b336b5f16948d213a582ba5\" pid:4872 exited_at:{seconds:1747214950 nanos:308691026}" May 14 09:29:30.497091 containerd[1530]: time="2025-05-14T09:29:30.496513671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"604079c50608e2bea3e6f9fd877b61f312c6c81d5b1a199b1c3e80fb3255f5a4\" pid:4905 exited_at:{seconds:1747214970 nanos:496034119}" May 14 09:29:40.313498 containerd[1530]: time="2025-05-14T09:29:40.313363807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"62001dccd68425a1440e568a28af4f03693d11252dc8dea34ec43fe9acec34b6\" pid:4946 exited_at:{seconds:1747214980 nanos:312660946}" May 14 09:29:42.020305 containerd[1530]: time="2025-05-14T09:29:42.020226607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"e4de420aa88f1d5eb2238b147fbd0eff01b352c5db606d356b8e89e9b14ebba0\" pid:4967 exited_at:{seconds:1747214982 nanos:18698886}" May 14 09:30:00.449763 containerd[1530]: time="2025-05-14T09:30:00.449644948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"c3993d63112df530a621b59ab932c18d3bfc5dfa643745706ff6bb8a1520f644\" pid:4993 exited_at:{seconds:1747215000 nanos:447989258}" May 14 09:30:10.313112 containerd[1530]: time="2025-05-14T09:30:10.313031470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"8d7038e3698af4c345c6ba8d9638325035feaef1b95b8a3a9789c71c07cc3eb0\" pid:5016 exited_at:{seconds:1747215010 nanos:312796830}" May 14 09:30:30.443495 containerd[1530]: time="2025-05-14T09:30:30.443345192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"fa0cd5d4e089cf3c80d18bed5a35c1278ca7e7ede21214af462ca913c14d97a5\" pid:5043 exited_at:{seconds:1747215030 nanos:442688399}" May 14 09:30:40.351610 containerd[1530]: time="2025-05-14T09:30:40.351148067Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"ee8f44de2b981fb2c45c7c4e9f48f566575df380117b9b5f3b06bc65d688b918\" pid:5068 exited_at:{seconds:1747215040 nanos:350135431}" May 14 09:30:42.006793 containerd[1530]: time="2025-05-14T09:30:42.006682040Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"4d38b6731fee80559b15729bed2394a69bd6d566ca7aacf35c3817558907986d\" pid:5089 exited_at:{seconds:1747215042 nanos:6098397}" May 14 09:30:49.218531 systemd[1]: Started sshd@9-172.24.4.30:22-172.24.4.1:59608.service - OpenSSH per-connection server daemon (172.24.4.1:59608). May 14 09:30:50.523347 sshd[5111]: Accepted publickey for core from 172.24.4.1 port 59608 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:30:50.530221 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:30:50.555040 systemd-logind[1506]: New session 12 of user core. May 14 09:30:50.569703 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 09:30:51.341312 sshd[5113]: Connection closed by 172.24.4.1 port 59608 May 14 09:30:51.342765 sshd-session[5111]: pam_unix(sshd:session): session closed for user core May 14 09:30:51.350976 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. May 14 09:30:51.351506 systemd[1]: sshd@9-172.24.4.30:22-172.24.4.1:59608.service: Deactivated successfully. May 14 09:30:51.358996 systemd[1]: session-12.scope: Deactivated successfully. May 14 09:30:51.369308 systemd-logind[1506]: Removed session 12. May 14 09:30:56.364751 systemd[1]: Started sshd@10-172.24.4.30:22-172.24.4.1:47770.service - OpenSSH per-connection server daemon (172.24.4.1:47770). May 14 09:30:57.638335 sshd[5126]: Accepted publickey for core from 172.24.4.1 port 47770 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:30:57.642006 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:30:57.655639 systemd-logind[1506]: New session 13 of user core. May 14 09:30:57.672633 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 09:30:58.425290 sshd[5130]: Connection closed by 172.24.4.1 port 47770 May 14 09:30:58.426512 sshd-session[5126]: pam_unix(sshd:session): session closed for user core May 14 09:30:58.435445 systemd[1]: sshd@10-172.24.4.30:22-172.24.4.1:47770.service: Deactivated successfully. May 14 09:30:58.440887 systemd[1]: session-13.scope: Deactivated successfully. May 14 09:30:58.443646 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. May 14 09:30:58.447712 systemd-logind[1506]: Removed session 13. May 14 09:31:00.426222 containerd[1530]: time="2025-05-14T09:31:00.426038860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"ebfdd0d4a3dcf07bbc572e1c72f1f152645ce9e3e9dc20be8ba85675d52c22a8\" pid:5154 exited_at:{seconds:1747215060 nanos:425537281}" May 14 09:31:03.461559 systemd[1]: Started sshd@11-172.24.4.30:22-172.24.4.1:47786.service - OpenSSH per-connection server daemon (172.24.4.1:47786). May 14 09:31:05.067547 sshd[5166]: Accepted publickey for core from 172.24.4.1 port 47786 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:05.070788 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:05.083722 systemd-logind[1506]: New session 14 of user core. May 14 09:31:05.093608 systemd[1]: Started session-14.scope - Session 14 of User core. 
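The sshd entries identify the client key only by its fingerprint, "RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc". That format is a SHA-256 digest of the public key blob, base64-encoded without padding and prefixed with "SHA256:". A generic sketch of producing such a fingerprint; the key bytes below are a placeholder, not the key from this log:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint returns an OpenSSH-style SHA256 fingerprint for a public key
// given in its SSH wire format (the bytes that follow the key type in an
// authorized_keys entry, after base64 decoding).
func fingerprint(pubKeyBlob []byte) string {
	sum := sha256.Sum256(pubKeyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// Placeholder bytes only; a real key blob would come from authorized_keys.
	fmt.Println(fingerprint([]byte("example-public-key-blob")))
}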
May 14 09:31:05.730495 sshd[5168]: Connection closed by 172.24.4.1 port 47786 May 14 09:31:05.732690 sshd-session[5166]: pam_unix(sshd:session): session closed for user core May 14 09:31:05.741529 systemd[1]: Started sshd@12-172.24.4.30:22-172.24.4.1:49244.service - OpenSSH per-connection server daemon (172.24.4.1:49244). May 14 09:31:05.743510 systemd[1]: sshd@11-172.24.4.30:22-172.24.4.1:47786.service: Deactivated successfully. May 14 09:31:05.748889 systemd[1]: session-14.scope: Deactivated successfully. May 14 09:31:05.754037 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. May 14 09:31:05.757557 systemd-logind[1506]: Removed session 14. May 14 09:31:06.930173 sshd[5179]: Accepted publickey for core from 172.24.4.1 port 49244 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:06.935824 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:06.955745 systemd-logind[1506]: New session 15 of user core. May 14 09:31:06.971651 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 09:31:07.811982 sshd[5184]: Connection closed by 172.24.4.1 port 49244 May 14 09:31:07.811423 sshd-session[5179]: pam_unix(sshd:session): session closed for user core May 14 09:31:07.835591 systemd[1]: sshd@12-172.24.4.30:22-172.24.4.1:49244.service: Deactivated successfully. May 14 09:31:07.842248 systemd[1]: session-15.scope: Deactivated successfully. May 14 09:31:07.844600 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit. May 14 09:31:07.849226 systemd[1]: Started sshd@13-172.24.4.30:22-172.24.4.1:49256.service - OpenSSH per-connection server daemon (172.24.4.1:49256). May 14 09:31:07.852573 systemd-logind[1506]: Removed session 15. May 14 09:31:08.841369 sshd[5194]: Accepted publickey for core from 172.24.4.1 port 49256 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:08.844799 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:08.859432 systemd-logind[1506]: New session 16 of user core. May 14 09:31:08.867784 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 09:31:09.573637 sshd[5196]: Connection closed by 172.24.4.1 port 49256 May 14 09:31:09.575566 sshd-session[5194]: pam_unix(sshd:session): session closed for user core May 14 09:31:09.591892 systemd[1]: sshd@13-172.24.4.30:22-172.24.4.1:49256.service: Deactivated successfully. May 14 09:31:09.600294 systemd[1]: session-16.scope: Deactivated successfully. May 14 09:31:09.603935 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit. May 14 09:31:09.609417 systemd-logind[1506]: Removed session 16. May 14 09:31:10.318214 containerd[1530]: time="2025-05-14T09:31:10.317783837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"4ca0d9d56936db6d17f19a0f2d0756bd8e95345821c0bf232a4c0783cacf6a62\" pid:5225 exited_at:{seconds:1747215070 nanos:316638471}" May 14 09:31:14.600736 systemd[1]: Started sshd@14-172.24.4.30:22-172.24.4.1:47982.service - OpenSSH per-connection server daemon (172.24.4.1:47982). 
May 14 09:31:15.895734 sshd[5247]: Accepted publickey for core from 172.24.4.1 port 47982 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:15.918623 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:15.942624 systemd-logind[1506]: New session 17 of user core. May 14 09:31:15.953544 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 09:31:16.739215 sshd[5255]: Connection closed by 172.24.4.1 port 47982 May 14 09:31:16.740635 sshd-session[5247]: pam_unix(sshd:session): session closed for user core May 14 09:31:16.748965 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit. May 14 09:31:16.749371 systemd[1]: sshd@14-172.24.4.30:22-172.24.4.1:47982.service: Deactivated successfully. May 14 09:31:16.758052 systemd[1]: session-17.scope: Deactivated successfully. May 14 09:31:16.765905 systemd-logind[1506]: Removed session 17. May 14 09:31:21.765115 systemd[1]: Started sshd@15-172.24.4.30:22-172.24.4.1:47990.service - OpenSSH per-connection server daemon (172.24.4.1:47990). May 14 09:31:23.003561 sshd[5268]: Accepted publickey for core from 172.24.4.1 port 47990 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:23.005758 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:23.017307 systemd-logind[1506]: New session 18 of user core. May 14 09:31:23.024603 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 09:31:23.828889 sshd[5272]: Connection closed by 172.24.4.1 port 47990 May 14 09:31:23.828600 sshd-session[5268]: pam_unix(sshd:session): session closed for user core May 14 09:31:23.840469 systemd[1]: sshd@15-172.24.4.30:22-172.24.4.1:47990.service: Deactivated successfully. May 14 09:31:23.852105 systemd[1]: session-18.scope: Deactivated successfully. May 14 09:31:23.857853 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit. May 14 09:31:23.862775 systemd-logind[1506]: Removed session 18. May 14 09:31:28.862085 systemd[1]: Started sshd@16-172.24.4.30:22-172.24.4.1:38808.service - OpenSSH per-connection server daemon (172.24.4.1:38808). May 14 09:31:30.011518 sshd[5286]: Accepted publickey for core from 172.24.4.1 port 38808 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:30.016849 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:30.036488 systemd-logind[1506]: New session 19 of user core. May 14 09:31:30.050687 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 09:31:30.457312 containerd[1530]: time="2025-05-14T09:31:30.457090444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"bf115f3d08dfa5b76f8e6a1f63868010fa26d668efbfd12437303c31cfb21c1b\" pid:5305 exited_at:{seconds:1747215090 nanos:456394099}" May 14 09:31:30.768200 sshd[5288]: Connection closed by 172.24.4.1 port 38808 May 14 09:31:30.767855 sshd-session[5286]: pam_unix(sshd:session): session closed for user core May 14 09:31:30.791776 systemd[1]: sshd@16-172.24.4.30:22-172.24.4.1:38808.service: Deactivated successfully. May 14 09:31:30.798890 systemd[1]: session-19.scope: Deactivated successfully. May 14 09:31:30.802820 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit. 
May 14 09:31:30.811793 systemd[1]: Started sshd@17-172.24.4.30:22-172.24.4.1:38812.service - OpenSSH per-connection server daemon (172.24.4.1:38812). May 14 09:31:30.815601 systemd-logind[1506]: Removed session 19. May 14 09:31:31.963758 sshd[5325]: Accepted publickey for core from 172.24.4.1 port 38812 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:31.967458 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:31.986906 systemd-logind[1506]: New session 20 of user core. May 14 09:31:31.996617 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 09:31:33.047579 sshd[5327]: Connection closed by 172.24.4.1 port 38812 May 14 09:31:33.049279 sshd-session[5325]: pam_unix(sshd:session): session closed for user core May 14 09:31:33.071056 systemd[1]: sshd@17-172.24.4.30:22-172.24.4.1:38812.service: Deactivated successfully. May 14 09:31:33.078742 systemd[1]: session-20.scope: Deactivated successfully. May 14 09:31:33.082793 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit. May 14 09:31:33.091118 systemd-logind[1506]: Removed session 20. May 14 09:31:33.095003 systemd[1]: Started sshd@18-172.24.4.30:22-172.24.4.1:38818.service - OpenSSH per-connection server daemon (172.24.4.1:38818). May 14 09:31:34.413286 sshd[5337]: Accepted publickey for core from 172.24.4.1 port 38818 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:34.414483 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:34.421273 systemd-logind[1506]: New session 21 of user core. May 14 09:31:34.424366 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 09:31:36.623297 sshd[5340]: Connection closed by 172.24.4.1 port 38818 May 14 09:31:36.624759 sshd-session[5337]: pam_unix(sshd:session): session closed for user core May 14 09:31:36.643969 systemd[1]: sshd@18-172.24.4.30:22-172.24.4.1:38818.service: Deactivated successfully. May 14 09:31:36.649137 systemd[1]: session-21.scope: Deactivated successfully. May 14 09:31:36.653051 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit. May 14 09:31:36.663851 systemd[1]: Started sshd@19-172.24.4.30:22-172.24.4.1:52472.service - OpenSSH per-connection server daemon (172.24.4.1:52472). May 14 09:31:36.667155 systemd-logind[1506]: Removed session 21. May 14 09:31:37.715335 sshd[5359]: Accepted publickey for core from 172.24.4.1 port 52472 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:37.718134 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:37.738518 systemd-logind[1506]: New session 22 of user core. May 14 09:31:37.747579 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 09:31:38.689730 sshd[5361]: Connection closed by 172.24.4.1 port 52472 May 14 09:31:38.691518 sshd-session[5359]: pam_unix(sshd:session): session closed for user core May 14 09:31:38.714202 systemd[1]: sshd@19-172.24.4.30:22-172.24.4.1:52472.service: Deactivated successfully. May 14 09:31:38.721443 systemd[1]: session-22.scope: Deactivated successfully. May 14 09:31:38.724352 systemd-logind[1506]: Session 22 logged out. Waiting for processes to exit. May 14 09:31:38.735591 systemd[1]: Started sshd@20-172.24.4.30:22-172.24.4.1:52480.service - OpenSSH per-connection server daemon (172.24.4.1:52480). May 14 09:31:38.738224 systemd-logind[1506]: Removed session 22. 
May 14 09:31:39.863316 sshd[5371]: Accepted publickey for core from 172.24.4.1 port 52480 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:39.865571 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:39.879382 systemd-logind[1506]: New session 23 of user core. May 14 09:31:39.887672 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 09:31:40.310328 containerd[1530]: time="2025-05-14T09:31:40.310138914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"e80548e89c18482009c575ecce7f2f0e5a6541a68f3dbdad210ce1d0d2567790\" pid:5386 exited_at:{seconds:1747215100 nanos:308726005}" May 14 09:31:40.555488 sshd[5373]: Connection closed by 172.24.4.1 port 52480 May 14 09:31:40.556585 sshd-session[5371]: pam_unix(sshd:session): session closed for user core May 14 09:31:40.563726 systemd[1]: sshd@20-172.24.4.30:22-172.24.4.1:52480.service: Deactivated successfully. May 14 09:31:40.569203 systemd[1]: session-23.scope: Deactivated successfully. May 14 09:31:40.574222 systemd-logind[1506]: Session 23 logged out. Waiting for processes to exit. May 14 09:31:40.577823 systemd-logind[1506]: Removed session 23. May 14 09:31:41.998423 containerd[1530]: time="2025-05-14T09:31:41.998174108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"de61056626fd5494a6b1fa3fbcbe80ce104c8b6ba7e0677f5cff4c50123bd79a\" pid:5418 exited_at:{seconds:1747215101 nanos:997410846}" May 14 09:31:45.582798 systemd[1]: Started sshd@21-172.24.4.30:22-172.24.4.1:33622.service - OpenSSH per-connection server daemon (172.24.4.1:33622). May 14 09:31:46.705317 sshd[5430]: Accepted publickey for core from 172.24.4.1 port 33622 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:46.709854 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:46.725369 systemd-logind[1506]: New session 24 of user core. May 14 09:31:46.735775 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 09:31:47.341855 sshd[5432]: Connection closed by 172.24.4.1 port 33622 May 14 09:31:47.341682 sshd-session[5430]: pam_unix(sshd:session): session closed for user core May 14 09:31:47.349776 systemd[1]: sshd@21-172.24.4.30:22-172.24.4.1:33622.service: Deactivated successfully. May 14 09:31:47.352878 systemd[1]: session-24.scope: Deactivated successfully. May 14 09:31:47.355804 systemd-logind[1506]: Session 24 logged out. Waiting for processes to exit. May 14 09:31:47.358675 systemd-logind[1506]: Removed session 24. May 14 09:31:52.362454 systemd[1]: Started sshd@22-172.24.4.30:22-172.24.4.1:33626.service - OpenSSH per-connection server daemon (172.24.4.1:33626). May 14 09:31:53.552022 sshd[5444]: Accepted publickey for core from 172.24.4.1 port 33626 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:31:53.554001 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:31:53.568568 systemd-logind[1506]: New session 25 of user core. May 14 09:31:53.576394 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 14 09:31:54.333509 sshd[5446]: Connection closed by 172.24.4.1 port 33626 May 14 09:31:54.334909 sshd-session[5444]: pam_unix(sshd:session): session closed for user core May 14 09:31:54.347776 systemd[1]: sshd@22-172.24.4.30:22-172.24.4.1:33626.service: Deactivated successfully. May 14 09:31:54.355829 systemd[1]: session-25.scope: Deactivated successfully. May 14 09:31:54.359081 systemd-logind[1506]: Session 25 logged out. Waiting for processes to exit. May 14 09:31:54.363579 systemd-logind[1506]: Removed session 25. May 14 09:31:59.357923 systemd[1]: Started sshd@23-172.24.4.30:22-172.24.4.1:37124.service - OpenSSH per-connection server daemon (172.24.4.1:37124). May 14 09:32:00.444792 containerd[1530]: time="2025-05-14T09:32:00.444537599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"5700c615ebdd2c93722f9ddd8ac846152f1c0298e1646dbf079d31caaa4d0a1e\" pid:5473 exited_at:{seconds:1747215120 nanos:441767183}" May 14 09:32:00.545575 sshd[5460]: Accepted publickey for core from 172.24.4.1 port 37124 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:32:00.549765 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:32:00.569197 systemd-logind[1506]: New session 26 of user core. May 14 09:32:00.583602 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 09:32:01.472288 sshd[5488]: Connection closed by 172.24.4.1 port 37124 May 14 09:32:01.474104 sshd-session[5460]: pam_unix(sshd:session): session closed for user core May 14 09:32:01.487751 systemd-logind[1506]: Session 26 logged out. Waiting for processes to exit. May 14 09:32:01.489463 systemd[1]: sshd@23-172.24.4.30:22-172.24.4.1:37124.service: Deactivated successfully. May 14 09:32:01.496161 systemd[1]: session-26.scope: Deactivated successfully. May 14 09:32:01.503340 systemd-logind[1506]: Removed session 26. May 14 09:32:06.497700 systemd[1]: Started sshd@24-172.24.4.30:22-172.24.4.1:57884.service - OpenSSH per-connection server daemon (172.24.4.1:57884). May 14 09:32:07.663903 sshd[5500]: Accepted publickey for core from 172.24.4.1 port 57884 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:32:07.666692 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:32:07.680720 systemd-logind[1506]: New session 27 of user core. May 14 09:32:07.689626 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 09:32:08.436704 sshd[5502]: Connection closed by 172.24.4.1 port 57884 May 14 09:32:08.438542 sshd-session[5500]: pam_unix(sshd:session): session closed for user core May 14 09:32:08.462583 systemd[1]: sshd@24-172.24.4.30:22-172.24.4.1:57884.service: Deactivated successfully. May 14 09:32:08.473934 systemd[1]: session-27.scope: Deactivated successfully. May 14 09:32:08.478644 systemd-logind[1506]: Session 27 logged out. Waiting for processes to exit. May 14 09:32:08.483851 systemd-logind[1506]: Removed session 27. 
May 14 09:32:10.295518 containerd[1530]: time="2025-05-14T09:32:10.295263611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"6e12e777e75b541f7842867b7803aeed7dac99af92341e61dfc0aac07f186a33\" pid:5525 exited_at:{seconds:1747215130 nanos:294925126}" May 14 09:32:13.470462 systemd[1]: Started sshd@25-172.24.4.30:22-172.24.4.1:57900.service - OpenSSH per-connection server daemon (172.24.4.1:57900). May 14 09:32:14.582679 sshd[5536]: Accepted publickey for core from 172.24.4.1 port 57900 ssh2: RSA SHA256:BKDYMT/WUwUVfZ9W+4uaDj5K5JCUaLCbe7Z43TbPzLc May 14 09:32:14.587025 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 09:32:14.609059 systemd-logind[1506]: New session 28 of user core. May 14 09:32:14.615660 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 09:32:14.921571 containerd[1530]: time="2025-05-14T09:32:14.921155562Z" level=warning msg="container event discarded" container=7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee type=CONTAINER_CREATED_EVENT May 14 09:32:14.933861 containerd[1530]: time="2025-05-14T09:32:14.933698450Z" level=warning msg="container event discarded" container=7c7f7ab4ef162f1c1de1b16f27520a72092af2d560123a17ed83e25dba89a3ee type=CONTAINER_STARTED_EVENT May 14 09:32:14.987114 containerd[1530]: time="2025-05-14T09:32:14.987003343Z" level=warning msg="container event discarded" container=4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe type=CONTAINER_CREATED_EVENT May 14 09:32:15.036534 containerd[1530]: time="2025-05-14T09:32:15.036407317Z" level=warning msg="container event discarded" container=c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805 type=CONTAINER_CREATED_EVENT May 14 09:32:15.036742 containerd[1530]: time="2025-05-14T09:32:15.036511032Z" level=warning msg="container event discarded" container=c20c46fad1639e58a5ab9823a345e418f8f97b2f0ef07ad8f8cb78ab659ba805 type=CONTAINER_STARTED_EVENT May 14 09:32:15.079136 containerd[1530]: time="2025-05-14T09:32:15.078919244Z" level=warning msg="container event discarded" container=d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1 type=CONTAINER_CREATED_EVENT May 14 09:32:15.079136 containerd[1530]: time="2025-05-14T09:32:15.079062523Z" level=warning msg="container event discarded" container=d576a8eb08897792e983e25f051ed7ded4d16e009897d343a16f7cd8ea1b31f1 type=CONTAINER_STARTED_EVENT May 14 09:32:15.092450 containerd[1530]: time="2025-05-14T09:32:15.092360328Z" level=warning msg="container event discarded" container=25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a type=CONTAINER_CREATED_EVENT May 14 09:32:15.123852 containerd[1530]: time="2025-05-14T09:32:15.123710686Z" level=warning msg="container event discarded" container=baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3 type=CONTAINER_CREATED_EVENT May 14 09:32:15.152331 containerd[1530]: time="2025-05-14T09:32:15.152198595Z" level=warning msg="container event discarded" container=4735acc88f6f8f00364dfcbfb87f9689ad77b85866646932d16ff9427c2c1ffe type=CONTAINER_STARTED_EVENT May 14 09:32:15.278452 containerd[1530]: time="2025-05-14T09:32:15.278110880Z" level=warning msg="container event discarded" container=25236969a22a387b5cde84620e3fe8c63f4a0b17bec05676abc73c160a4c5b0a type=CONTAINER_STARTED_EVENT May 14 09:32:15.339825 containerd[1530]: time="2025-05-14T09:32:15.339652974Z" level=warning msg="container 
event discarded" container=baff927201f6dcd3f4c583037124c3fbfe4dd360bd0599813909a6f8105d49c3 type=CONTAINER_STARTED_EVENT May 14 09:32:15.358990 sshd[5538]: Connection closed by 172.24.4.1 port 57900 May 14 09:32:15.360289 sshd-session[5536]: pam_unix(sshd:session): session closed for user core May 14 09:32:15.370564 systemd[1]: sshd@25-172.24.4.30:22-172.24.4.1:57900.service: Deactivated successfully. May 14 09:32:15.377674 systemd[1]: session-28.scope: Deactivated successfully. May 14 09:32:15.383418 systemd-logind[1506]: Session 28 logged out. Waiting for processes to exit. May 14 09:32:15.390499 systemd-logind[1506]: Removed session 28. May 14 09:32:26.251457 containerd[1530]: time="2025-05-14T09:32:26.250867629Z" level=warning msg="container event discarded" container=411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab type=CONTAINER_CREATED_EVENT May 14 09:32:26.254553 containerd[1530]: time="2025-05-14T09:32:26.253323006Z" level=warning msg="container event discarded" container=411da20bfbaf1a17a4fd44cdd577db29025be7b3e7dc8bbf89f6538bfaa8bcab type=CONTAINER_STARTED_EVENT May 14 09:32:26.334544 containerd[1530]: time="2025-05-14T09:32:26.334443781Z" level=warning msg="container event discarded" container=e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2 type=CONTAINER_CREATED_EVENT May 14 09:32:26.438833 containerd[1530]: time="2025-05-14T09:32:26.438740385Z" level=warning msg="container event discarded" container=e021d55abd39729d0462e1a8aa94c4bc8f1afb5ae485c802cd622302d3075bb2 type=CONTAINER_STARTED_EVENT May 14 09:32:26.631894 containerd[1530]: time="2025-05-14T09:32:26.631544954Z" level=warning msg="container event discarded" container=58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1 type=CONTAINER_CREATED_EVENT May 14 09:32:26.632411 containerd[1530]: time="2025-05-14T09:32:26.632170227Z" level=warning msg="container event discarded" container=58cfc85ba2531d2174ce398c05532b6aea52da432964c933907d34c581de95a1 type=CONTAINER_STARTED_EVENT May 14 09:32:29.435706 containerd[1530]: time="2025-05-14T09:32:29.435579422Z" level=warning msg="container event discarded" container=f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d type=CONTAINER_CREATED_EVENT May 14 09:32:29.520678 containerd[1530]: time="2025-05-14T09:32:29.520349727Z" level=warning msg="container event discarded" container=f74e832d5be222a7d5e9c24d7e90834887e1485b955b19c03e712d91c7601c0d type=CONTAINER_STARTED_EVENT May 14 09:32:30.446973 containerd[1530]: time="2025-05-14T09:32:30.446825307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"7aab6e5eacc871898914ee13fdd1cd0fc1d5f1f8d7693ce20daaa6b739e3ccf8\" pid:5564 exited_at:{seconds:1747215150 nanos:445960265}" May 14 09:32:33.784393 containerd[1530]: time="2025-05-14T09:32:33.784177392Z" level=warning msg="container event discarded" container=4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19 type=CONTAINER_CREATED_EVENT May 14 09:32:33.784393 containerd[1530]: time="2025-05-14T09:32:33.784304009Z" level=warning msg="container event discarded" container=4de26af6dcf2c5a44b8ddbd8e7d6c2c8a068222f554d0ac755b555ba9b12db19 type=CONTAINER_STARTED_EVENT May 14 09:32:33.795744 containerd[1530]: time="2025-05-14T09:32:33.795627995Z" level=warning msg="container event discarded" container=58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab type=CONTAINER_CREATED_EVENT May 14 09:32:33.795744 containerd[1530]: 
time="2025-05-14T09:32:33.795723404Z" level=warning msg="container event discarded" container=58ac597a7b963495443abca3c9b65d5186c1991b5bb1e438e440fea32366aeab type=CONTAINER_STARTED_EVENT May 14 09:32:36.900597 containerd[1530]: time="2025-05-14T09:32:36.900471116Z" level=warning msg="container event discarded" container=b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4 type=CONTAINER_CREATED_EVENT May 14 09:32:37.040045 containerd[1530]: time="2025-05-14T09:32:37.039844635Z" level=warning msg="container event discarded" container=b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4 type=CONTAINER_STARTED_EVENT May 14 09:32:37.544985 containerd[1530]: time="2025-05-14T09:32:37.544857896Z" level=warning msg="container event discarded" container=b3564353d2af4030018a91ae6e3a28a4321c9393efd6d1059b1f82c24487b9d4 type=CONTAINER_STOPPED_EVENT May 14 09:32:40.311293 containerd[1530]: time="2025-05-14T09:32:40.311104650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"11f73ad26330716c9c317e42960c4714aecc4921bed0afeced459f92531c0109\" pid:5586 exited_at:{seconds:1747215160 nanos:310669594}" May 14 09:32:40.668935 containerd[1530]: time="2025-05-14T09:32:40.668700149Z" level=warning msg="container event discarded" container=99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a type=CONTAINER_CREATED_EVENT May 14 09:32:40.787955 containerd[1530]: time="2025-05-14T09:32:40.787699739Z" level=warning msg="container event discarded" container=99b48626bc2efac6e334af68622b61425511c20e36b28b7b5e0381b62a6b898a type=CONTAINER_STARTED_EVENT May 14 09:32:42.007387 containerd[1530]: time="2025-05-14T09:32:42.007313279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"b31142fe0adddecec5c07f600a701883ca72c18386cda45c29c550e7d655a55e\" pid:5609 exited_at:{seconds:1747215162 nanos:5951243}" May 14 09:32:47.125576 containerd[1530]: time="2025-05-14T09:32:47.125388231Z" level=warning msg="container event discarded" container=d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5 type=CONTAINER_CREATED_EVENT May 14 09:32:47.255801 containerd[1530]: time="2025-05-14T09:32:47.255676532Z" level=warning msg="container event discarded" container=d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5 type=CONTAINER_STARTED_EVENT May 14 09:32:49.716074 containerd[1530]: time="2025-05-14T09:32:49.715944073Z" level=warning msg="container event discarded" container=d82648b7fa7e79b7a1ec5b183a5006a7ad10ec250a753adb7e736f574c337aa5 type=CONTAINER_STOPPED_EVENT May 14 09:32:59.591538 containerd[1530]: time="2025-05-14T09:32:59.591154953Z" level=warning msg="container event discarded" container=b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da type=CONTAINER_CREATED_EVENT May 14 09:32:59.741347 containerd[1530]: time="2025-05-14T09:32:59.740953688Z" level=warning msg="container event discarded" container=b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da type=CONTAINER_STARTED_EVENT May 14 09:33:00.484078 containerd[1530]: time="2025-05-14T09:33:00.484017791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"002e095e468096472fc9b33777ec091cf16c502d8b8bde30a042b5a410464326\" pid:5648 exited_at:{seconds:1747215180 nanos:483297930}" May 14 09:33:03.006798 containerd[1530]: 
time="2025-05-14T09:33:03.006521901Z" level=warning msg="container event discarded" container=00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294 type=CONTAINER_CREATED_EVENT May 14 09:33:03.007738 containerd[1530]: time="2025-05-14T09:33:03.007393417Z" level=warning msg="container event discarded" container=00ce9a1d28076ac5a245dc7d5f8368a4f95e91647e17d08d0ba2a40fa4026294 type=CONTAINER_STARTED_EVENT May 14 09:33:03.466540 containerd[1530]: time="2025-05-14T09:33:03.466402215Z" level=warning msg="container event discarded" container=b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37 type=CONTAINER_CREATED_EVENT May 14 09:33:03.467325 containerd[1530]: time="2025-05-14T09:33:03.466680257Z" level=warning msg="container event discarded" container=b1d49ef2e9de4765c2242bf410a73c8b98dfb8246942c4ec5148e5052ec75a37 type=CONTAINER_STARTED_EVENT May 14 09:33:03.535431 containerd[1530]: time="2025-05-14T09:33:03.535062473Z" level=warning msg="container event discarded" container=a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4 type=CONTAINER_CREATED_EVENT May 14 09:33:03.536402 containerd[1530]: time="2025-05-14T09:33:03.536300266Z" level=warning msg="container event discarded" container=a525202ec2157d89c1c79a3916dc4c7d40078c7fc310a618c895f44cd90858f4 type=CONTAINER_STARTED_EVENT May 14 09:33:03.588803 containerd[1530]: time="2025-05-14T09:33:03.588636462Z" level=warning msg="container event discarded" container=809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13 type=CONTAINER_CREATED_EVENT May 14 09:33:03.672277 containerd[1530]: time="2025-05-14T09:33:03.672078998Z" level=warning msg="container event discarded" container=809e6af81af509681f4de4acae6e5af0474dd40d3b393dcf45bc004cfe76dd13 type=CONTAINER_STARTED_EVENT May 14 09:33:04.337727 containerd[1530]: time="2025-05-14T09:33:04.337524959Z" level=warning msg="container event discarded" container=d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62 type=CONTAINER_CREATED_EVENT May 14 09:33:04.338724 containerd[1530]: time="2025-05-14T09:33:04.338064712Z" level=warning msg="container event discarded" container=d63bfc9e114d26e62e6f1b9b9b10ec1ef5d7e22c52ef3031c11826fb2fc4ba62 type=CONTAINER_STARTED_EVENT May 14 09:33:05.529790 containerd[1530]: time="2025-05-14T09:33:05.529630333Z" level=warning msg="container event discarded" container=eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f type=CONTAINER_CREATED_EVENT May 14 09:33:05.529790 containerd[1530]: time="2025-05-14T09:33:05.529759456Z" level=warning msg="container event discarded" container=eb1e98e768eb67e71db7499fd0539dfb178d9dc976c432667408c76dc4a7fd8f type=CONTAINER_STARTED_EVENT May 14 09:33:05.643516 containerd[1530]: time="2025-05-14T09:33:05.643356267Z" level=warning msg="container event discarded" container=14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f type=CONTAINER_CREATED_EVENT May 14 09:33:05.643516 containerd[1530]: time="2025-05-14T09:33:05.643490298Z" level=warning msg="container event discarded" container=14b3429828060e2538fa4799d68a98c98a98eb40ca49247350a529fb0cb34c5f type=CONTAINER_STARTED_EVENT May 14 09:33:05.721306 containerd[1530]: time="2025-05-14T09:33:05.721139285Z" level=warning msg="container event discarded" container=667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d type=CONTAINER_CREATED_EVENT May 14 09:33:05.849753 containerd[1530]: time="2025-05-14T09:33:05.848177758Z" level=warning msg="container event discarded" 
container=667398ddd19f5f9887c5ae8bff6a74a406085f20ca04ca8434cd05bdbd55f35d type=CONTAINER_STARTED_EVENT May 14 09:33:08.310864 containerd[1530]: time="2025-05-14T09:33:08.310710136Z" level=warning msg="container event discarded" container=9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808 type=CONTAINER_CREATED_EVENT May 14 09:33:08.456602 containerd[1530]: time="2025-05-14T09:33:08.456334570Z" level=warning msg="container event discarded" container=9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808 type=CONTAINER_STARTED_EVENT May 14 09:33:10.298176 containerd[1530]: time="2025-05-14T09:33:10.298105901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9663e17eca181261b165e692dbc0c5bc87b1b882c55e863a6602428699d34808\" id:\"c69422ca8780bb0dfead10852f52df506ed5f4a59293d8dcd08ebdf1e776c045\" pid:5672 exited_at:{seconds:1747215190 nanos:297126393}" May 14 09:33:12.501554 containerd[1530]: time="2025-05-14T09:33:12.501410436Z" level=warning msg="container event discarded" container=d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2 type=CONTAINER_CREATED_EVENT May 14 09:33:12.615273 containerd[1530]: time="2025-05-14T09:33:12.615027540Z" level=warning msg="container event discarded" container=d92d4ce893c3fc3b63b8802b65768b681f3fe7f3919da94cc57dde6aa02a0fd2 type=CONTAINER_STARTED_EVENT May 14 09:33:15.093735 containerd[1530]: time="2025-05-14T09:33:15.093540618Z" level=warning msg="container event discarded" container=7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a type=CONTAINER_CREATED_EVENT May 14 09:33:15.207765 containerd[1530]: time="2025-05-14T09:33:15.207658355Z" level=warning msg="container event discarded" container=7509cb15a99169d17ed7c89da42989a85ff69012d5a98c0cce3dfd6dbff95a3a type=CONTAINER_STARTED_EVENT May 14 09:33:15.681809 containerd[1530]: time="2025-05-14T09:33:15.681572535Z" level=warning msg="container event discarded" container=429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399 type=CONTAINER_CREATED_EVENT May 14 09:33:15.817449 containerd[1530]: time="2025-05-14T09:33:15.817301306Z" level=warning msg="container event discarded" container=429f59ee3b5177357da3adf4b6f078d88033a620620c9b7cb676b21bccbac399 type=CONTAINER_STARTED_EVENT May 14 09:33:18.295142 containerd[1530]: time="2025-05-14T09:33:18.294921332Z" level=warning msg="container event discarded" container=bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4 type=CONTAINER_CREATED_EVENT May 14 09:33:18.388145 containerd[1530]: time="2025-05-14T09:33:18.387824279Z" level=warning msg="container event discarded" container=bd00c3bb1bed14dac60f19364a124e985dfeca00068f3f79c012f4c731c359f4 type=CONTAINER_STARTED_EVENT May 14 09:33:25.526517 update_engine[1508]: I20250514 09:33:25.525125 1508 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 14 09:33:25.526517 update_engine[1508]: I20250514 09:33:25.525655 1508 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 14 09:33:25.533588 update_engine[1508]: I20250514 09:33:25.527479 1508 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 14 09:33:25.533588 update_engine[1508]: I20250514 09:33:25.530660 1508 omaha_request_params.cc:62] Current group set to developer May 14 09:33:25.533588 update_engine[1508]: I20250514 09:33:25.532056 1508 update_attempter.cc:499] Already updated boot flags. Skipping. 
May 14 09:33:25.535577 update_engine[1508]: I20250514 09:33:25.532089 1508 update_attempter.cc:643] Scheduling an action processor start. May 14 09:33:25.535741 update_engine[1508]: I20250514 09:33:25.535636 1508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 14 09:33:25.535946 update_engine[1508]: I20250514 09:33:25.535872 1508 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 14 09:33:25.536169 update_engine[1508]: I20250514 09:33:25.536119 1508 omaha_request_action.cc:271] Posting an Omaha request to disabled May 14 09:33:25.536169 update_engine[1508]: I20250514 09:33:25.536157 1508 omaha_request_action.cc:272] Request: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.536169 update_engine[1508]: May 14 09:33:25.537084 update_engine[1508]: I20250514 09:33:25.536182 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 09:33:25.562730 update_engine[1508]: I20250514 09:33:25.562108 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 09:33:25.562962 locksmithd[1537]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 14 09:33:25.565075 update_engine[1508]: I20250514 09:33:25.563843 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 09:33:25.572001 update_engine[1508]: E20250514 09:33:25.571871 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 09:33:25.572168 update_engine[1508]: I20250514 09:33:25.572094 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 14 09:33:30.465780 containerd[1530]: time="2025-05-14T09:33:30.465569031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0715b82efb28f07ea95ca4c8b2585cadf88fa5a256beb275a74aa5bdb2531da\" id:\"dc35ef6eb8b4290f7eace77f9d9a98098d7a2eb2946d0a48b5af6353b89ed980\" pid:5706 exited_at:{seconds:1747215210 nanos:464846625}" May 14 09:33:35.457169 update_engine[1508]: I20250514 09:33:35.456880 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 14 09:33:35.458295 update_engine[1508]: I20250514 09:33:35.457844 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 14 09:33:35.460934 update_engine[1508]: I20250514 09:33:35.460850 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 14 09:33:35.466712 update_engine[1508]: E20250514 09:33:35.466547 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 14 09:33:35.466870 update_engine[1508]: I20250514 09:33:35.466717 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 2