Apr 23 23:56:09.859388 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 23 23:56:09.859413 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 23 23:56:09.859422 kernel: BIOS-provided physical RAM map:
Apr 23 23:56:09.859430 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 23 23:56:09.859439 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 23 23:56:09.859446 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 23 23:56:09.859453 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 23 23:56:09.859461 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 23 23:56:09.859468 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 23 23:56:09.859475 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 23 23:56:09.859483 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 23 23:56:09.859490 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 23 23:56:09.859497 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 23 23:56:09.859507 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 23 23:56:09.859515 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 23 23:56:09.859523 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 23 23:56:09.859530 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 23 23:56:09.859537 kernel: NX (Execute Disable) protection: active
Apr 23 23:56:09.859547 kernel: APIC: Static calls initialized
Apr 23 23:56:09.859555 kernel: e820: update [mem 0x7dfae018-0x7dfb7a57] usable ==> usable
Apr 23 23:56:09.859562 kernel: e820: update [mem 0x7df72018-0x7dfad657] usable ==> usable
Apr 23 23:56:09.859570 kernel: e820: update [mem 0x7df36018-0x7df71657] usable ==> usable
Apr 23 23:56:09.859577 kernel: extended physical RAM map:
Apr 23 23:56:09.859585 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 23 23:56:09.859592 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007df36017] usable
Apr 23 23:56:09.859599 kernel: reserve setup_data: [mem 0x000000007df36018-0x000000007df71657] usable
Apr 23 23:56:09.859607 kernel: reserve setup_data: [mem 0x000000007df71658-0x000000007df72017] usable
Apr 23 23:56:09.859614 kernel: reserve setup_data: [mem 0x000000007df72018-0x000000007dfad657] usable
Apr 23 23:56:09.859621 kernel: reserve setup_data: [mem 0x000000007dfad658-0x000000007dfae017] usable
Apr 23 23:56:09.859631 kernel: reserve setup_data: [mem 0x000000007dfae018-0x000000007dfb7a57] usable
Apr 23 23:56:09.859646 kernel: reserve setup_data: [mem 0x000000007dfb7a58-0x000000007ed3efff] usable
Apr 23 23:56:09.859670 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 23 23:56:09.859697 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 23 23:56:09.859705 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 23 23:56:09.859712 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 23 23:56:09.859720 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 23 23:56:09.859727 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 23 23:56:09.859734 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 23 23:56:09.859742 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 23 23:56:09.859749 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 23 23:56:09.859763 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 23 23:56:09.859771 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 23 23:56:09.859779 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 23 23:56:09.859787 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 23 23:56:09.859795 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198 RNG=0x7fb73018
Apr 23 23:56:09.859806 kernel: random: crng init done
Apr 23 23:56:09.859814 kernel: efi: Remove mem136: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 23 23:56:09.859822 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 23 23:56:09.859829 kernel: secureboot: Secure boot disabled
Apr 23 23:56:09.859837 kernel: SMBIOS 3.0.0 present.
Apr 23 23:56:09.859845 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 23 23:56:09.859853 kernel: DMI: Memory slots populated: 1/1
Apr 23 23:56:09.859860 kernel: Hypervisor detected: KVM
Apr 23 23:56:09.859875 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 23 23:56:09.860952 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 23 23:56:09.860961 kernel: kvm-clock: using sched offset of 13648944004 cycles
Apr 23 23:56:09.860974 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 23 23:56:09.860983 kernel: tsc: Detected 2399.998 MHz processor
Apr 23 23:56:09.860991 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 23 23:56:09.860999 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 23 23:56:09.861007 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 23 23:56:09.861016 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 23 23:56:09.861024 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 23 23:56:09.861032 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 23 23:56:09.861040 kernel: Using GB pages for direct mapping
Apr 23 23:56:09.861050 kernel: ACPI: Early table checksum verification disabled
Apr 23 23:56:09.861058 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 23 23:56:09.861067 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 23 23:56:09.861075 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:56:09.861083 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:56:09.861091 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 23 23:56:09.861099 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:56:09.861107 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:56:09.861115 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:56:09.861125 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:56:09.861133 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 23 23:56:09.861141 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 23 23:56:09.861150 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 23 23:56:09.861158 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 23 23:56:09.861166 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 23 23:56:09.861173 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 23 23:56:09.861181 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 23 23:56:09.861189 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 23 23:56:09.861199 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 23 23:56:09.861208 kernel: No NUMA configuration found
Apr 23 23:56:09.861216 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 23 23:56:09.861224 kernel: NODE_DATA(0) allocated [mem 0x179ff8dc0-0x179ffffff]
Apr 23 23:56:09.861232 kernel: Zone ranges:
Apr 23 23:56:09.861240 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 23 23:56:09.861248 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 23 23:56:09.861256 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 23 23:56:09.861264 kernel: Device empty
Apr 23 23:56:09.861272 kernel: Movable zone start for each node
Apr 23 23:56:09.861282 kernel: Early memory node ranges
Apr 23 23:56:09.861290 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 23 23:56:09.861298 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 23 23:56:09.861306 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 23 23:56:09.861314 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 23 23:56:09.861321 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 23 23:56:09.861329 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 23 23:56:09.861337 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 23 23:56:09.861345 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 23 23:56:09.861356 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 23 23:56:09.861364 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 23 23:56:09.861372 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 23 23:56:09.861379 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 23 23:56:09.861387 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 23 23:56:09.861395 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 23 23:56:09.861403 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 23 23:56:09.861412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 23 23:56:09.861419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 23 23:56:09.861430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 23 23:56:09.861437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 23 23:56:09.861446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 23 23:56:09.861454 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 23 23:56:09.861462 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 23 23:56:09.861470 kernel: CPU topo: Max. logical packages: 1
Apr 23 23:56:09.861478 kernel: CPU topo: Max. logical dies: 1
Apr 23 23:56:09.861497 kernel: CPU topo: Max. dies per package: 1
Apr 23 23:56:09.861506 kernel: CPU topo: Max. threads per core: 1
Apr 23 23:56:09.861514 kernel: CPU topo: Num. cores per package: 2
Apr 23 23:56:09.861522 kernel: CPU topo: Num. threads per package: 2
Apr 23 23:56:09.861530 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 23 23:56:09.861541 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 23 23:56:09.861550 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 23 23:56:09.861558 kernel: Booting paravirtualized kernel on KVM
Apr 23 23:56:09.861567 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 23 23:56:09.861575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 23 23:56:09.861586 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 23 23:56:09.861595 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 23 23:56:09.861603 kernel: pcpu-alloc: [0] 0 1
Apr 23 23:56:09.861611 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 23 23:56:09.861621 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 23 23:56:09.861629 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 23 23:56:09.861638 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 23 23:56:09.861646 kernel: Fallback order for Node 0: 0
Apr 23 23:56:09.861657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Apr 23 23:56:09.861665 kernel: Policy zone: Normal
Apr 23 23:56:09.861673 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 23 23:56:09.861682 kernel: software IO TLB: area num 2.
Apr 23 23:56:09.861690 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 23 23:56:09.861698 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 23 23:56:09.861707 kernel: ftrace: allocated 157 pages with 5 groups
Apr 23 23:56:09.861715 kernel: Dynamic Preempt: voluntary
Apr 23 23:56:09.861724 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 23 23:56:09.861735 kernel: rcu: RCU event tracing is enabled.
Apr 23 23:56:09.861744 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 23 23:56:09.861752 kernel: Trampoline variant of Tasks RCU enabled.
Apr 23 23:56:09.861761 kernel: Rude variant of Tasks RCU enabled.
Apr 23 23:56:09.861769 kernel: Tracing variant of Tasks RCU enabled.
Apr 23 23:56:09.861777 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 23 23:56:09.861786 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 23 23:56:09.861794 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:56:09.861803 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:56:09.861814 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:56:09.861822 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 23 23:56:09.861831 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 23 23:56:09.861839 kernel: Console: colour dummy device 80x25
Apr 23 23:56:09.861847 kernel: printk: legacy console [tty0] enabled
Apr 23 23:56:09.861856 kernel: printk: legacy console [ttyS0] enabled
Apr 23 23:56:09.861864 kernel: ACPI: Core revision 20240827
Apr 23 23:56:09.861872 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 23 23:56:09.861905 kernel: APIC: Switch to symmetric I/O mode setup
Apr 23 23:56:09.862442 kernel: x2apic enabled
Apr 23 23:56:09.862457 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 23 23:56:09.862465 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 23 23:56:09.862474 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Apr 23 23:56:09.862483 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Apr 23 23:56:09.862491 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 23 23:56:09.862500 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 23 23:56:09.862508 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 23 23:56:09.862517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 23 23:56:09.862547 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 23 23:56:09.862556 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 23 23:56:09.862565 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 23 23:56:09.862573 kernel: active return thunk: srso_alias_return_thunk
Apr 23 23:56:09.862581 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 23 23:56:09.862590 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 23 23:56:09.862598 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 23 23:56:09.862607 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 23 23:56:09.862621 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 23 23:56:09.862632 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 23 23:56:09.862640 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 23 23:56:09.862648 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 23 23:56:09.862657 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 23 23:56:09.862666 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 23 23:56:09.862674 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 23 23:56:09.862683 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 23 23:56:09.862691 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 23 23:56:09.862699 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 23 23:56:09.862711 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 23 23:56:09.862719 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 23 23:56:09.862727 kernel: Freeing SMP alternatives memory: 32K
Apr 23 23:56:09.862736 kernel: pid_max: default: 32768 minimum: 301
Apr 23 23:56:09.862744 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 23 23:56:09.862752 kernel: landlock: Up and running.
Apr 23 23:56:09.862760 kernel: SELinux: Initializing.
Apr 23 23:56:09.862769 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 23 23:56:09.862777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 23 23:56:09.862789 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 23 23:56:09.862797 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 23 23:56:09.862805 kernel: ... version:                0
Apr 23 23:56:09.862813 kernel: ... bit width:              48
Apr 23 23:56:09.862822 kernel: ... generic registers:      6
Apr 23 23:56:09.862830 kernel: ... value mask:             0000ffffffffffff
Apr 23 23:56:09.862838 kernel: ... max period:             00007fffffffffff
Apr 23 23:56:09.862847 kernel: ... fixed-purpose events:   0
Apr 23 23:56:09.862855 kernel: ... event mask:             000000000000003f
Apr 23 23:56:09.862866 kernel: signal: max sigframe size: 3376
Apr 23 23:56:09.862874 kernel: rcu: Hierarchical SRCU implementation.
Apr 23 23:56:09.863922 kernel: rcu: Max phase no-delay instances is 400.
Apr 23 23:56:09.863932 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 23 23:56:09.863941 kernel: smp: Bringing up secondary CPUs ...
Apr 23 23:56:09.863949 kernel: smpboot: x86: Booting SMP configuration:
Apr 23 23:56:09.863958 kernel: .... node #0, CPUs: #1
Apr 23 23:56:09.863966 kernel: smp: Brought up 1 node, 2 CPUs
Apr 23 23:56:09.863975 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Apr 23 23:56:09.863988 kernel: Memory: 3813688K/4091168K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 271836K reserved, 0K cma-reserved)
Apr 23 23:56:09.863996 kernel: devtmpfs: initialized
Apr 23 23:56:09.864314 kernel: x86/mm: Memory block size: 128MB
Apr 23 23:56:09.864326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 23 23:56:09.864335 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 23 23:56:09.864344 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 23 23:56:09.864352 kernel: pinctrl core: initialized pinctrl subsystem
Apr 23 23:56:09.864360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 23 23:56:09.864369 kernel: audit: initializing netlink subsys (disabled)
Apr 23 23:56:09.864381 kernel: audit: type=2000 audit(1776988566.693:1): state=initialized audit_enabled=0 res=1
Apr 23 23:56:09.864389 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 23 23:56:09.864397 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 23 23:56:09.864406 kernel: cpuidle: using governor menu
Apr 23 23:56:09.864414 kernel: efi: Freeing EFI boot services memory: 34820K
Apr 23 23:56:09.864422 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 23 23:56:09.864431 kernel: dca service started, version 1.12.1
Apr 23 23:56:09.864439 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 23 23:56:09.864447 kernel: PCI: Using configuration type 1 for base access
Apr 23 23:56:09.864458 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 23 23:56:09.864467 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 23 23:56:09.864475 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 23 23:56:09.864484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 23 23:56:09.864492 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 23 23:56:09.864500 kernel: ACPI: Added _OSI(Module Device)
Apr 23 23:56:09.864509 kernel: ACPI: Added _OSI(Processor Device)
Apr 23 23:56:09.864517 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 23 23:56:09.864525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 23 23:56:09.864537 kernel: ACPI: Interpreter enabled
Apr 23 23:56:09.864545 kernel: ACPI: PM: (supports S0 S5)
Apr 23 23:56:09.864553 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 23 23:56:09.864562 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 23 23:56:09.864570 kernel: PCI: Using E820 reservations for host bridge windows
Apr 23 23:56:09.864578 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 23 23:56:09.864587 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 23 23:56:09.864776 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 23 23:56:09.864974 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 23 23:56:09.865109 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 23 23:56:09.865120 kernel: PCI host bridge to bus 0000:00
Apr 23 23:56:09.865252 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 23 23:56:09.865369 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 23 23:56:09.865485 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 23 23:56:09.865610 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 23 23:56:09.869511 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 23 23:56:09.869642 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 23 23:56:09.869760 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 23 23:56:09.869997 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 23 23:56:09.870153 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Apr 23 23:56:09.870296 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Apr 23 23:56:09.870434 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 23 23:56:09.870560 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Apr 23 23:56:09.870687 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 23 23:56:09.870814 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 23 23:56:09.871052 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.871186 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Apr 23 23:56:09.871313 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 23 23:56:09.871441 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 23 23:56:09.871572 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 23 23:56:09.871716 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.871844 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Apr 23 23:56:09.872015 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 23 23:56:09.872141 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 23 23:56:09.872278 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.872418 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Apr 23 23:56:09.872545 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 23 23:56:09.872669 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 23 23:56:09.872793 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 23 23:56:09.872972 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.873114 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Apr 23 23:56:09.873241 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 23 23:56:09.873372 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 23 23:56:09.873507 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.873633 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Apr 23 23:56:09.873767 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 23 23:56:09.873933 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 23 23:56:09.874061 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 23 23:56:09.874193 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.874325 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Apr 23 23:56:09.874461 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 23 23:56:09.874590 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 23 23:56:09.874715 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 23 23:56:09.874846 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.876041 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Apr 23 23:56:09.876179 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 23 23:56:09.876313 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 23 23:56:09.876439 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 23 23:56:09.876584 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.876715 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Apr 23 23:56:09.876840 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 23 23:56:09.876994 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 23 23:56:09.877127 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 23 23:56:09.877273 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:56:09.877403 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Apr 23 23:56:09.877528 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 23 23:56:09.877654 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 23 23:56:09.877779 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 23 23:56:09.878992 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 23 23:56:09.879137 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 23 23:56:09.879282 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 23 23:56:09.879411 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Apr 23 23:56:09.879535 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Apr 23 23:56:09.879669 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 23 23:56:09.879798 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Apr 23 23:56:09.879985 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:56:09.880128 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Apr 23 23:56:09.880259 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 23 23:56:09.880392 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 23 23:56:09.880523 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 23 23:56:09.880670 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 23 23:56:09.882121 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Apr 23 23:56:09.882268 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 23 23:56:09.882410 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 23 23:56:09.882551 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Apr 23 23:56:09.882692 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 23 23:56:09.882824 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 23 23:56:09.882996 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 23 23:56:09.883129 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 23 23:56:09.883270 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 23 23:56:09.883412 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 23 23:56:09.883542 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Apr 23 23:56:09.883673 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 23 23:56:09.883797 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 23 23:56:09.884594 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 23 23:56:09.884739 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Apr 23 23:56:09.884902 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 23 23:56:09.885033 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 23 23:56:09.885043 kernel: acpiphp: Slot [0] registered
Apr 23 23:56:09.885189 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:56:09.885330 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Apr 23 23:56:09.885464 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 23 23:56:09.885594 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 23 23:56:09.885724 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 23 23:56:09.885734 kernel: acpiphp: Slot [0-2] registered
Apr 23 23:56:09.885863 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 23 23:56:09.885875 kernel: acpiphp: Slot [0-3] registered
Apr 23 23:56:09.886038 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 23 23:56:09.886072 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 23 23:56:09.886082 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 23 23:56:09.886091 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 23 23:56:09.886102 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 23 23:56:09.886111 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 23 23:56:09.886120 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 23 23:56:09.886129 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 23 23:56:09.886138 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 23 23:56:09.886146 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 23 23:56:09.886155 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 23 23:56:09.886164 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 23 23:56:09.886173 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 23 23:56:09.886184 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 23 23:56:09.886193 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 23 23:56:09.886205 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 23 23:56:09.886214 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 23 23:56:09.886222 kernel: iommu: Default domain type: Translated
Apr 23 23:56:09.886231 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 23 23:56:09.886242 kernel: efivars: Registered efivars operations
Apr 23 23:56:09.886251 kernel: PCI: Using ACPI for IRQ routing
Apr 23 23:56:09.886260 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 23 23:56:09.886269 kernel: e820: reserve RAM buffer [mem 0x7df36018-0x7fffffff]
Apr 23 23:56:09.886278 kernel: e820: reserve RAM buffer [mem 0x7df72018-0x7fffffff]
Apr 23 23:56:09.886286 kernel: e820: reserve RAM buffer [mem 0x7dfae018-0x7fffffff]
Apr 23 23:56:09.886295 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 23 23:56:09.886303 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 23 23:56:09.886312 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 23 23:56:09.886323 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 23 23:56:09.886455 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 23 23:56:09.886592 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 23 23:56:09.886723 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 23 23:56:09.886734 kernel: vgaarb: loaded
Apr 23 23:56:09.886743 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 23 23:56:09.886752 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 23 23:56:09.886761 kernel: clocksource: Switched to clocksource kvm-clock
Apr 23 23:56:09.886773 kernel: VFS: Disk quotas dquot_6.6.0
Apr 23 23:56:09.886782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 23 23:56:09.886791 kernel: pnp: PnP ACPI init
Apr 23 23:56:09.886962 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 23 23:56:09.886975 kernel: pnp: PnP ACPI: found 5 devices
Apr 23 23:56:09.886984 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 23 23:56:09.886993 kernel: NET: Registered PF_INET protocol family
Apr 23 23:56:09.887002 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 23 23:56:09.887015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 23 23:56:09.887024 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 23 23:56:09.887035 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 23 23:56:09.887044 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 23 23:56:09.887053 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 23 23:56:09.887062 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 23 23:56:09.887071 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 23 23:56:09.887080 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 23 23:56:09.887088 kernel: NET: Registered PF_XDP protocol family
Apr 23 23:56:09.887282 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 23 23:56:09.887425 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 23 23:56:09.887553 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 23 23:56:09.887681 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 23 23:56:09.887808 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 23 23:56:09.887980 kernel: pci 0000:00:02.6: bridge window [io
0x1000-0x1fff]: assigned Apr 23 23:56:09.888112 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned Apr 23 23:56:09.888238 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Apr 23 23:56:09.888376 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned Apr 23 23:56:09.888506 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 23 23:56:09.888639 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Apr 23 23:56:09.888768 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 23 23:56:09.888933 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 23 23:56:09.889063 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Apr 23 23:56:09.889194 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 23 23:56:09.889328 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Apr 23 23:56:09.889456 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 23 23:56:09.889585 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 23 23:56:09.889710 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 23 23:56:09.889836 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 23 23:56:09.890358 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Apr 23 23:56:09.890494 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 23 23:56:09.890629 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 23 23:56:09.890765 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Apr 23 23:56:09.890923 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 23 23:56:09.891060 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned Apr 23 23:56:09.891192 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 23 23:56:09.891320 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Apr 23 23:56:09.891456 
kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Apr 23 23:56:09.891585 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 23 23:56:09.891710 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 23 23:56:09.891840 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Apr 23 23:56:09.892408 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Apr 23 23:56:09.892746 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 23 23:56:09.892940 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 23 23:56:09.893074 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Apr 23 23:56:09.893220 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Apr 23 23:56:09.895055 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 23 23:56:09.895186 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 23 23:56:09.895306 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 23 23:56:09.895437 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 23 23:56:09.895558 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 23 23:56:09.895674 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 23 23:56:09.895790 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 23 23:56:09.895954 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 23 23:56:09.896092 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 23 23:56:09.896231 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 23 23:56:09.896365 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 23 23:56:09.896488 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 23 23:56:09.896619 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 23 
23:56:09.896759 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 23 23:56:09.901934 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 23 23:56:09.902095 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 23 23:56:09.902228 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 23 23:56:09.902360 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 23 23:56:09.902483 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 23 23:56:09.902611 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 23 23:56:09.902748 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 23 23:56:09.902907 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 23 23:56:09.903048 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 23 23:56:09.903182 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Apr 23 23:56:09.903313 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 23 23:56:09.903440 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 23 23:56:09.903453 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 23 23:56:09.903462 kernel: PCI: CLS 0 bytes, default 64 Apr 23 23:56:09.903471 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 23 23:56:09.903480 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 23 23:56:09.903494 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns Apr 23 23:56:09.903503 kernel: Initialise system trusted keyrings Apr 23 23:56:09.903512 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 23 23:56:09.903521 kernel: Key type asymmetric registered Apr 23 23:56:09.903530 kernel: Asymmetric key parser 'x509' registered Apr 23 23:56:09.903539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 250) Apr 23 23:56:09.903547 kernel: io scheduler mq-deadline registered Apr 23 23:56:09.903557 kernel: io scheduler kyber registered Apr 23 23:56:09.903565 kernel: io scheduler bfq registered Apr 23 23:56:09.903700 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 23 23:56:09.903828 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 23 23:56:09.906220 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 23 23:56:09.906545 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 23 23:56:09.906818 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 23 23:56:09.907148 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 23 23:56:09.907248 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 23 23:56:09.907344 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 23 23:56:09.907446 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 23 23:56:09.907542 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 23 23:56:09.907639 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 23 23:56:09.907736 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 23 23:56:09.907832 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 23 23:56:09.908484 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 23 23:56:09.908591 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 23 23:56:09.908689 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 23 23:56:09.908700 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 23 23:56:09.908798 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 23 23:56:09.908925 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 23 23:56:09.908933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 23 23:56:09.908939 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 23 23:56:09.908945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 23 23:56:09.908954 kernel: 00:00: ttyS0 at I/O 0x3f8 
(irq = 4, base_baud = 115200) is a 16550A Apr 23 23:56:09.908960 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 23 23:56:09.908966 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 23 23:56:09.908972 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 23 23:56:09.908978 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 23 23:56:09.909108 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 23 23:56:09.909223 kernel: rtc_cmos 00:03: registered as rtc0 Apr 23 23:56:09.909318 kernel: rtc_cmos 00:03: setting system clock to 2026-04-23T23:56:09 UTC (1776988569) Apr 23 23:56:09.909415 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 23 23:56:09.909423 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option. Apr 23 23:56:09.909430 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 23 23:56:09.909436 kernel: efifb: probing for efifb Apr 23 23:56:09.909442 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Apr 23 23:56:09.909447 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 23 23:56:09.909453 kernel: efifb: scrolling: redraw Apr 23 23:56:09.909459 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 23 23:56:09.909465 kernel: Console: switching to colour frame buffer device 160x50 Apr 23 23:56:09.909474 kernel: fb0: EFI VGA frame buffer device Apr 23 23:56:09.909480 kernel: pstore: Using crash dump compression: deflate Apr 23 23:56:09.909486 kernel: pstore: Registered efi_pstore as persistent store backend Apr 23 23:56:09.909492 kernel: NET: Registered PF_INET6 protocol family Apr 23 23:56:09.909497 kernel: Segment Routing with IPv6 Apr 23 23:56:09.909503 kernel: In-situ OAM (IOAM) with IPv6 Apr 23 23:56:09.909509 kernel: NET: Registered PF_PACKET protocol family Apr 23 23:56:09.909515 kernel: Key type 
dns_resolver registered Apr 23 23:56:09.909521 kernel: IPI shorthand broadcast: enabled Apr 23 23:56:09.909529 kernel: sched_clock: Marking stable (2896011303, 267472523)->(3203444404, -39960578) Apr 23 23:56:09.909535 kernel: registered taskstats version 1 Apr 23 23:56:09.909541 kernel: Loading compiled-in X.509 certificates Apr 23 23:56:09.909547 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0' Apr 23 23:56:09.909553 kernel: Demotion targets for Node 0: null Apr 23 23:56:09.909559 kernel: Key type .fscrypt registered Apr 23 23:56:09.909564 kernel: Key type fscrypt-provisioning registered Apr 23 23:56:09.909571 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 23 23:56:09.909578 kernel: ima: Allocated hash algorithm: sha1 Apr 23 23:56:09.909584 kernel: ima: No architecture policies found Apr 23 23:56:09.909590 kernel: clk: Disabling unused clocks Apr 23 23:56:09.909596 kernel: Warning: unable to open an initial console. Apr 23 23:56:09.909602 kernel: Freeing unused kernel image (initmem) memory: 46224K Apr 23 23:56:09.909608 kernel: Write protecting the kernel read-only data: 40960k Apr 23 23:56:09.909614 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 23 23:56:09.909619 kernel: Run /init as init process Apr 23 23:56:09.909625 kernel: with arguments: Apr 23 23:56:09.909633 kernel: /init Apr 23 23:56:09.909639 kernel: with environment: Apr 23 23:56:09.909645 kernel: HOME=/ Apr 23 23:56:09.909651 kernel: TERM=linux Apr 23 23:56:09.909658 systemd[1]: Successfully made /usr/ read-only. 
Apr 23 23:56:09.909667 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 23 23:56:09.909673 systemd[1]: Detected virtualization kvm. Apr 23 23:56:09.909679 systemd[1]: Detected architecture x86-64. Apr 23 23:56:09.909687 systemd[1]: Running in initrd. Apr 23 23:56:09.909693 systemd[1]: No hostname configured, using default hostname. Apr 23 23:56:09.909700 systemd[1]: Hostname set to . Apr 23 23:56:09.909706 systemd[1]: Initializing machine ID from VM UUID. Apr 23 23:56:09.909713 systemd[1]: Queued start job for default target initrd.target. Apr 23 23:56:09.909719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:56:09.909725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:56:09.909732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 23 23:56:09.909740 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 23 23:56:09.909746 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 23 23:56:09.909753 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 23 23:56:09.909761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 23 23:56:09.909767 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 23 23:56:09.909773 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 23 23:56:09.909779 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:56:09.909787 systemd[1]: Reached target paths.target - Path Units. Apr 23 23:56:09.909793 systemd[1]: Reached target slices.target - Slice Units. Apr 23 23:56:09.909800 systemd[1]: Reached target swap.target - Swaps. Apr 23 23:56:09.909805 systemd[1]: Reached target timers.target - Timer Units. Apr 23 23:56:09.909811 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:56:09.909818 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 23 23:56:09.909824 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 23 23:56:09.909830 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 23 23:56:09.909836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:56:09.909844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 23 23:56:09.909850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:56:09.909857 systemd[1]: Reached target sockets.target - Socket Units. Apr 23 23:56:09.909863 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 23 23:56:09.909869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 23 23:56:09.909875 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 23 23:56:09.909915 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 23 23:56:09.909921 systemd[1]: Starting systemd-fsck-usr.service... Apr 23 23:56:09.909930 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 23 23:56:09.909936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 23 23:56:09.909942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:56:09.909949 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 23 23:56:09.909955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:56:09.909984 systemd-journald[198]: Collecting audit messages is disabled. Apr 23 23:56:09.910001 systemd[1]: Finished systemd-fsck-usr.service. Apr 23 23:56:09.910008 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 23 23:56:09.910016 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 23 23:56:09.910023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:56:09.910029 kernel: Bridge firewalling registered Apr 23 23:56:09.910034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 23 23:56:09.910041 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 23 23:56:09.910048 systemd-journald[198]: Journal started Apr 23 23:56:09.910062 systemd-journald[198]: Runtime Journal (/run/log/journal/d4a5d2af22a34f5f9d2076c31b63f945) is 8M, max 76.1M, 68.1M free. Apr 23 23:56:09.860475 systemd-modules-load[199]: Inserted module 'overlay' Apr 23 23:56:09.902956 systemd-modules-load[199]: Inserted module 'br_netfilter' Apr 23 23:56:09.926151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 23 23:56:09.926204 systemd[1]: Started systemd-journald.service - Journal Service. Apr 23 23:56:09.928395 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 23 23:56:09.935028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 23 23:56:09.937036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 23 23:56:09.938942 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:56:09.939942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:56:09.944580 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 23 23:56:09.951196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:56:09.952731 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 23 23:56:09.958244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:56:09.959989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 23 23:56:09.971168 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e Apr 23 23:56:09.998432 systemd-resolved[237]: Positive Trust Anchors: Apr 23 23:56:09.998449 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 23 23:56:09.998479 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 23 23:56:10.002736 systemd-resolved[237]: Defaulting to hostname 'linux'. Apr 23 23:56:10.005416 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 23 23:56:10.006069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 23 23:56:10.048952 kernel: SCSI subsystem initialized Apr 23 23:56:10.056921 kernel: Loading iSCSI transport class v2.0-870. Apr 23 23:56:10.065937 kernel: iscsi: registered transport (tcp) Apr 23 23:56:10.083555 kernel: iscsi: registered transport (qla4xxx) Apr 23 23:56:10.083617 kernel: QLogic iSCSI HBA Driver Apr 23 23:56:10.101797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 23 23:56:10.119726 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:56:10.121717 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 23 23:56:10.161066 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 23 23:56:10.162591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 23 23:56:10.203926 kernel: raid6: avx512x4 gen() 43889 MB/s Apr 23 23:56:10.221926 kernel: raid6: avx512x2 gen() 46899 MB/s Apr 23 23:56:10.239922 kernel: raid6: avx512x1 gen() 43933 MB/s Apr 23 23:56:10.257926 kernel: raid6: avx2x4 gen() 47195 MB/s Apr 23 23:56:10.275922 kernel: raid6: avx2x2 gen() 48546 MB/s Apr 23 23:56:10.294984 kernel: raid6: avx2x1 gen() 38047 MB/s Apr 23 23:56:10.295049 kernel: raid6: using algorithm avx2x2 gen() 48546 MB/s Apr 23 23:56:10.315015 kernel: raid6: .... xor() 37045 MB/s, rmw enabled Apr 23 23:56:10.315093 kernel: raid6: using avx512x2 recovery algorithm Apr 23 23:56:10.330926 kernel: xor: automatically using best checksumming function avx Apr 23 23:56:10.444918 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 23 23:56:10.450874 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:56:10.452440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:56:10.480540 systemd-udevd[446]: Using default interface naming scheme 'v255'. Apr 23 23:56:10.485653 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:56:10.487776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 23 23:56:10.507219 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Apr 23 23:56:10.529244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 23 23:56:10.531266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 23 23:56:10.617370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 23 23:56:10.622151 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 23 23:56:10.695707 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Apr 23 23:56:10.705327 kernel: ACPI: bus type USB registered Apr 23 23:56:10.705357 kernel: usbcore: registered new interface driver usbfs Apr 23 23:56:10.707210 kernel: usbcore: registered new interface driver hub Apr 23 23:56:10.708818 kernel: usbcore: registered new device driver usb Apr 23 23:56:10.727024 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 23 23:56:10.728907 kernel: scsi host0: Virtio SCSI HBA Apr 23 23:56:10.733040 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 23 23:56:10.739935 kernel: cryptd: max_cpu_qlen set to 1000 Apr 23 23:56:10.739967 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 23 23:56:10.752916 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 23 23:56:10.768924 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 23 23:56:10.782153 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 23 23:56:10.782490 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 23 23:56:10.783136 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 23 23:56:10.783308 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 23 23:56:10.788626 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 23 23:56:10.791711 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 23 23:56:10.792299 kernel: hub 1-0:1.0: USB hub found Apr 23 23:56:10.792781 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 23 23:56:10.796487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 23 23:56:10.801873 kernel: hub 1-0:1.0: 4 ports detected Apr 23 23:56:10.802057 kernel: AES CTR mode by8 optimization enabled Apr 23 23:56:10.802067 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 23 23:56:10.796646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:56:10.802662 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:56:10.805372 kernel: libata version 3.00 loaded. Apr 23 23:56:10.803729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:56:10.815924 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 23 23:56:10.817516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:56:10.818207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:56:10.826922 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 23 23:56:10.826961 kernel: GPT:17805311 != 160006143 Apr 23 23:56:10.826971 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 23 23:56:10.826980 kernel: GPT:17805311 != 160006143 Apr 23 23:56:10.826988 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 23 23:56:10.828934 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:56:10.831411 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 23 23:56:10.835094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:56:10.841652 kernel: hub 2-0:1.0: USB hub found Apr 23 23:56:10.841840 kernel: hub 2-0:1.0: 4 ports detected Apr 23 23:56:10.843858 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:56:10.870673 kernel: ahci 0000:00:1f.2: version 3.0 Apr 23 23:56:10.870969 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 23 23:56:10.873781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 23 23:56:10.885344 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 23 23:56:10.885830 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 23 23:56:10.886048 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 23 23:56:10.886454 kernel: scsi host1: ahci Apr 23 23:56:10.887764 kernel: scsi host2: ahci Apr 23 23:56:10.888511 kernel: scsi host3: ahci Apr 23 23:56:10.892919 kernel: scsi host4: ahci Apr 23 23:56:10.893073 kernel: scsi host5: ahci Apr 23 23:56:10.897909 kernel: scsi host6: ahci Apr 23 23:56:10.908910 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51 lpm-pol 1 Apr 23 23:56:10.908940 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51 lpm-pol 1 Apr 23 23:56:10.908950 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51 lpm-pol 1 Apr 23 23:56:10.908960 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51 lpm-pol 1 Apr 23 23:56:10.908968 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51 lpm-pol 1 Apr 23 23:56:10.908976 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51 lpm-pol 1 Apr 23 23:56:10.907618 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 23 23:56:10.929116 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 23 23:56:10.939589 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 23 23:56:10.940774 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 23 23:56:10.948195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 23 23:56:10.950236 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 23 23:56:10.964171 disk-uuid[644]: Primary Header is updated. 
Apr 23 23:56:10.964171 disk-uuid[644]: Secondary Entries is updated.
Apr 23 23:56:10.964171 disk-uuid[644]: Secondary Header is updated.
Apr 23 23:56:10.970904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 23 23:56:10.985911 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 23 23:56:11.058115 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 23 23:56:11.197946 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 23 23:56:11.225762 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 23 23:56:11.225860 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 23 23:56:11.232055 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 23 23:56:11.232108 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 23 23:56:11.232141 kernel: ata1.00: LPM support broken, forcing max_power
Apr 23 23:56:11.233928 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 23 23:56:11.236757 kernel: ata1.00: applying bridge limits
Apr 23 23:56:11.240644 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 23 23:56:11.244459 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 23 23:56:11.244494 kernel: ata1.00: LPM support broken, forcing max_power
Apr 23 23:56:11.247479 kernel: ata1.00: configured for UDMA/100
Apr 23 23:56:11.252947 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 23 23:56:11.278914 kernel: usbcore: registered new interface driver usbhid
Apr 23 23:56:11.278970 kernel: usbhid: USB HID core driver
Apr 23 23:56:11.292812 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Apr 23 23:56:11.292859 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 23 23:56:11.302240 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 23 23:56:11.302484 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 23 23:56:11.325960 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 23 23:56:11.585475 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 23 23:56:11.588199 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 23 23:56:11.589273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 23:56:11.590345 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 23 23:56:11.593015 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 23 23:56:11.626315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 23 23:56:11.991001 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 23 23:56:11.991955 disk-uuid[645]: The operation has completed successfully.
Apr 23 23:56:12.059304 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 23 23:56:12.059420 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 23 23:56:12.090018 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 23 23:56:12.107975 sh[678]: Success
Apr 23 23:56:12.127245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 23 23:56:12.127288 kernel: device-mapper: uevent: version 1.0.3
Apr 23 23:56:12.129045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 23 23:56:12.142939 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 23 23:56:12.181839 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 23 23:56:12.185581 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 23 23:56:12.194778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 23 23:56:12.206920 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (690)
Apr 23 23:56:12.208031 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca
Apr 23 23:56:12.209915 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 23 23:56:12.223322 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 23 23:56:12.223389 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 23 23:56:12.223399 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 23 23:56:12.226822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 23 23:56:12.227834 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 23 23:56:12.228805 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 23 23:56:12.230983 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 23 23:56:12.232056 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 23 23:56:12.262921 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (725)
Apr 23 23:56:12.262974 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 23 23:56:12.266927 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 23 23:56:12.273925 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 23 23:56:12.273981 kernel: BTRFS info (device sda6): turning on async discard
Apr 23 23:56:12.274977 kernel: BTRFS info (device sda6): enabling free space tree
Apr 23 23:56:12.284630 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 23 23:56:12.284248 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 23 23:56:12.286353 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 23 23:56:12.364984 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 23 23:56:12.369017 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 23 23:56:12.386140 ignition[786]: Ignition 2.22.0
Apr 23 23:56:12.386151 ignition[786]: Stage: fetch-offline
Apr 23 23:56:12.386180 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:12.386187 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:12.386265 ignition[786]: parsed url from cmdline: ""
Apr 23 23:56:12.386269 ignition[786]: no config URL provided
Apr 23 23:56:12.386274 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Apr 23 23:56:12.390487 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 23 23:56:12.386280 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Apr 23 23:56:12.386290 ignition[786]: failed to fetch config: resource requires networking
Apr 23 23:56:12.386411 ignition[786]: Ignition finished successfully
Apr 23 23:56:12.408798 systemd-networkd[864]: lo: Link UP
Apr 23 23:56:12.408809 systemd-networkd[864]: lo: Gained carrier
Apr 23 23:56:12.411226 systemd-networkd[864]: Enumeration completed
Apr 23 23:56:12.411335 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 23 23:56:12.411850 systemd[1]: Reached target network.target - Network.
Apr 23 23:56:12.412082 systemd-networkd[864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:12.412087 systemd-networkd[864]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:56:12.412777 systemd-networkd[864]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:12.412781 systemd-networkd[864]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:56:12.413389 systemd-networkd[864]: eth0: Link UP
Apr 23 23:56:12.413528 systemd-networkd[864]: eth1: Link UP
Apr 23 23:56:12.413676 systemd-networkd[864]: eth0: Gained carrier
Apr 23 23:56:12.413685 systemd-networkd[864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:12.414334 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 23 23:56:12.420347 systemd-networkd[864]: eth1: Gained carrier
Apr 23 23:56:12.420358 systemd-networkd[864]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:12.438995 ignition[868]: Ignition 2.22.0
Apr 23 23:56:12.439006 ignition[868]: Stage: fetch
Apr 23 23:56:12.439124 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:12.439133 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:12.439203 ignition[868]: parsed url from cmdline: ""
Apr 23 23:56:12.439207 ignition[868]: no config URL provided
Apr 23 23:56:12.439212 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Apr 23 23:56:12.439220 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Apr 23 23:56:12.439247 ignition[868]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 23 23:56:12.439383 ignition[868]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 23 23:56:12.454932 systemd-networkd[864]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 23 23:56:12.486942 systemd-networkd[864]: eth0: DHCPv4 address 135.181.100.135/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 23 23:56:12.639653 ignition[868]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 23 23:56:12.648575 ignition[868]: GET result: OK
Apr 23 23:56:12.648695 ignition[868]: parsing config with SHA512: fcb44507b120e4da6c8ce75babd4b5fd5fb9d26b9e1ebc2156d984bc9e38cd10528f4263f6a6be37ddbe46a58f3d2a2bb42212e78f5625d6492275de3b2900d3
Apr 23 23:56:12.658742 unknown[868]: fetched base config from "system"
Apr 23 23:56:12.660757 unknown[868]: fetched base config from "system"
Apr 23 23:56:12.660781 unknown[868]: fetched user config from "hetzner"
Apr 23 23:56:12.661389 ignition[868]: fetch: fetch complete
Apr 23 23:56:12.661402 ignition[868]: fetch: fetch passed
Apr 23 23:56:12.661501 ignition[868]: Ignition finished successfully
Apr 23 23:56:12.669251 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 23 23:56:12.674041 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 23 23:56:12.729734 ignition[876]: Ignition 2.22.0
Apr 23 23:56:12.729756 ignition[876]: Stage: kargs
Apr 23 23:56:12.729983 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:12.729999 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:12.731061 ignition[876]: kargs: kargs passed
Apr 23 23:56:12.731126 ignition[876]: Ignition finished successfully
Apr 23 23:56:12.735339 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 23 23:56:12.739212 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 23 23:56:12.792762 ignition[882]: Ignition 2.22.0
Apr 23 23:56:12.792776 ignition[882]: Stage: disks
Apr 23 23:56:12.792963 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:12.792975 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:12.796298 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 23 23:56:12.793612 ignition[882]: disks: disks passed
Apr 23 23:56:12.799010 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 23 23:56:12.793661 ignition[882]: Ignition finished successfully
Apr 23 23:56:12.799970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 23 23:56:12.800934 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 23 23:56:12.801233 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 23 23:56:12.802374 systemd[1]: Reached target basic.target - Basic System.
Apr 23 23:56:12.804264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 23 23:56:12.831601 systemd-fsck[890]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 23 23:56:12.835542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 23 23:56:12.837750 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 23 23:56:12.962941 kernel: EXT4-fs (sda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none.
Apr 23 23:56:12.963942 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 23 23:56:12.965304 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 23 23:56:12.968230 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 23 23:56:12.971299 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 23 23:56:12.980137 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 23 23:56:12.980846 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 23 23:56:12.980931 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 23 23:56:12.986015 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 23 23:56:12.987675 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 23 23:56:12.995962 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (898)
Apr 23 23:56:13.001933 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 23 23:56:13.002024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 23 23:56:13.015663 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 23 23:56:13.015730 kernel: BTRFS info (device sda6): turning on async discard
Apr 23 23:56:13.018405 kernel: BTRFS info (device sda6): enabling free space tree
Apr 23 23:56:13.021555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 23 23:56:13.051830 coreos-metadata[900]: Apr 23 23:56:13.051 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 23 23:56:13.053352 coreos-metadata[900]: Apr 23 23:56:13.053 INFO Fetch successful
Apr 23 23:56:13.054948 coreos-metadata[900]: Apr 23 23:56:13.053 INFO wrote hostname ci-4459-2-4-n-19674e2966 to /sysroot/etc/hostname
Apr 23 23:56:13.056819 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory
Apr 23 23:56:13.058211 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 23 23:56:13.062072 initrd-setup-root[934]: cut: /sysroot/etc/group: No such file or directory
Apr 23 23:56:13.066566 initrd-setup-root[941]: cut: /sysroot/etc/shadow: No such file or directory
Apr 23 23:56:13.070142 initrd-setup-root[948]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 23 23:56:13.150722 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 23 23:56:13.152372 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 23 23:56:13.153988 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 23 23:56:13.170967 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 23 23:56:13.196086 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 23 23:56:13.204584 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 23 23:56:13.211757 ignition[1017]: INFO : Ignition 2.22.0
Apr 23 23:56:13.211757 ignition[1017]: INFO : Stage: mount
Apr 23 23:56:13.214012 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:13.214012 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:13.214012 ignition[1017]: INFO : mount: mount passed
Apr 23 23:56:13.214012 ignition[1017]: INFO : Ignition finished successfully
Apr 23 23:56:13.215628 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 23 23:56:13.217224 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 23 23:56:13.235100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 23 23:56:13.263272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1027)
Apr 23 23:56:13.263356 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 23 23:56:13.263382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 23 23:56:13.271052 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 23 23:56:13.271132 kernel: BTRFS info (device sda6): turning on async discard
Apr 23 23:56:13.274237 kernel: BTRFS info (device sda6): enabling free space tree
Apr 23 23:56:13.279031 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 23 23:56:13.311543 ignition[1044]: INFO : Ignition 2.22.0
Apr 23 23:56:13.311543 ignition[1044]: INFO : Stage: files
Apr 23 23:56:13.313573 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:13.313573 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:13.313573 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping
Apr 23 23:56:13.315720 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 23 23:56:13.315720 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 23 23:56:13.318273 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 23 23:56:13.318878 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 23 23:56:13.318878 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 23 23:56:13.318741 unknown[1044]: wrote ssh authorized keys file for user: core
Apr 23 23:56:13.321377 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 23 23:56:13.321377 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 23 23:56:13.623105 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 23 23:56:13.876258 systemd-networkd[864]: eth1: Gained IPv6LL
Apr 23 23:56:14.060477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 23 23:56:14.060477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 23 23:56:14.063723 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 23 23:56:14.388273 systemd-networkd[864]: eth0: Gained IPv6LL
Apr 23 23:56:14.538290 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 23 23:56:14.841701 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 23 23:56:14.841701 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 23 23:56:14.843371 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 23 23:56:14.846116 ignition[1044]: INFO : files: files passed
Apr 23 23:56:14.846116 ignition[1044]: INFO : Ignition finished successfully
Apr 23 23:56:14.846648 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 23 23:56:14.849044 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 23 23:56:14.853108 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 23 23:56:14.869239 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 23 23:56:14.869802 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 23 23:56:14.876817 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 23 23:56:14.876817 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 23 23:56:14.878851 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 23 23:56:14.879784 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 23 23:56:14.881699 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 23 23:56:14.883988 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 23 23:56:14.929137 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 23 23:56:14.929263 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 23 23:56:14.930764 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 23 23:56:14.931416 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 23 23:56:14.932325 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 23 23:56:14.933292 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 23 23:56:14.951051 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 23 23:56:14.952950 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 23 23:56:14.970658 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 23 23:56:14.971229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 23:56:14.973098 systemd[1]: Stopped target timers.target - Timer Units.
Apr 23 23:56:14.974374 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 23 23:56:14.974505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 23 23:56:14.975597 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 23 23:56:14.976310 systemd[1]: Stopped target basic.target - Basic System.
Apr 23 23:56:14.977391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 23 23:56:14.978549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 23 23:56:14.979281 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 23 23:56:14.979724 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 23 23:56:14.980429 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 23 23:56:14.981954 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 23 23:56:14.983153 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 23 23:56:14.983602 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 23 23:56:14.984536 systemd[1]: Stopped target swap.target - Swaps.
Apr 23 23:56:14.985129 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 23 23:56:14.985259 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 23 23:56:14.986512 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 23 23:56:14.987168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 23 23:56:14.987949 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 23 23:56:14.988027 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 23 23:56:14.988819 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 23 23:56:14.988950 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 23 23:56:14.989826 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 23 23:56:14.989964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 23 23:56:14.990567 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 23 23:56:14.990666 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 23 23:56:14.991239 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 23 23:56:14.991330 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 23 23:56:14.993998 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 23 23:56:14.994342 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 23 23:56:14.994419 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 23 23:56:14.995965 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 23 23:56:14.998844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 23 23:56:14.999009 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 23 23:56:15.001262 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 23 23:56:15.001339 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 23 23:56:15.007346 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 23 23:56:15.007771 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 23 23:56:15.020916 ignition[1098]: INFO : Ignition 2.22.0
Apr 23 23:56:15.020916 ignition[1098]: INFO : Stage: umount
Apr 23 23:56:15.023317 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 23 23:56:15.023317 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 23 23:56:15.023317 ignition[1098]: INFO : umount: umount passed
Apr 23 23:56:15.023317 ignition[1098]: INFO : Ignition finished successfully
Apr 23 23:56:15.024611 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 23 23:56:15.024927 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 23 23:56:15.027195 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 23 23:56:15.028456 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 23 23:56:15.028829 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 23 23:56:15.029615 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 23 23:56:15.029656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 23 23:56:15.030686 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 23 23:56:15.030721 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 23 23:56:15.031420 systemd[1]: Stopped target network.target - Network.
Apr 23 23:56:15.032081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 23 23:56:15.032121 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 23 23:56:15.033970 systemd[1]: Stopped target paths.target - Path Units.
Apr 23 23:56:15.034771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 23 23:56:15.037937 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 23 23:56:15.038415 systemd[1]: Stopped target slices.target - Slice Units.
Apr 23 23:56:15.038731 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 23 23:56:15.039633 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 23 23:56:15.039676 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 23 23:56:15.040663 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 23 23:56:15.040696 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 23 23:56:15.041616 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 23 23:56:15.041667 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 23 23:56:15.042248 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 23 23:56:15.042284 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 23 23:56:15.043081 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 23 23:56:15.043646 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 23 23:56:15.044736 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 23 23:56:15.044815 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 23 23:56:15.047278 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 23 23:56:15.047381 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 23 23:56:15.050555 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 23 23:56:15.051018 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 23 23:56:15.051090 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 23 23:56:15.051479 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 23 23:56:15.051517 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 23 23:56:15.056181 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 23 23:56:15.056461 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 23 23:56:15.056572 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 23 23:56:15.057750 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 23 23:56:15.058295 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 23 23:56:15.058692 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 23 23:56:15.058728 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 23 23:56:15.060156 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 23 23:56:15.060596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 23 23:56:15.060660 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 23 23:56:15.061126 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 23 23:56:15.061164 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:56:15.061574 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 23 23:56:15.061608 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 23 23:56:15.062080 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:56:15.064377 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 23 23:56:15.075141 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 23 23:56:15.075283 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:56:15.076305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 23 23:56:15.076372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 23 23:56:15.077070 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 23 23:56:15.077104 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:56:15.077664 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 23 23:56:15.077703 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:56:15.078734 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 23 23:56:15.078770 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 23 23:56:15.079926 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 23 23:56:15.079971 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:56:15.082987 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 23 23:56:15.083341 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 23 23:56:15.083382 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:56:15.085003 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 23 23:56:15.085046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:56:15.086461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:56:15.086509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:56:15.095798 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 23 23:56:15.095982 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 23 23:56:15.098131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 23 23:56:15.098230 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 23 23:56:15.099208 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 23 23:56:15.100676 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 23 23:56:15.124325 systemd[1]: Switching root. Apr 23 23:56:15.153720 systemd-journald[198]: Journal stopped Apr 23 23:56:16.390970 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). 
Apr 23 23:56:16.391055 kernel: SELinux: policy capability network_peer_controls=1 Apr 23 23:56:16.391066 kernel: SELinux: policy capability open_perms=1 Apr 23 23:56:16.391075 kernel: SELinux: policy capability extended_socket_class=1 Apr 23 23:56:16.391084 kernel: SELinux: policy capability always_check_network=0 Apr 23 23:56:16.391093 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 23 23:56:16.391101 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 23 23:56:16.391110 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 23 23:56:16.391118 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 23 23:56:16.391129 kernel: SELinux: policy capability userspace_initial_context=0 Apr 23 23:56:16.391140 kernel: audit: type=1403 audit(1776988575.372:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 23 23:56:16.391153 systemd[1]: Successfully loaded SELinux policy in 69.008ms. Apr 23 23:56:16.391175 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.480ms. Apr 23 23:56:16.391189 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 23 23:56:16.391203 systemd[1]: Detected virtualization kvm. Apr 23 23:56:16.391213 systemd[1]: Detected architecture x86-64. Apr 23 23:56:16.391222 systemd[1]: Detected first boot. Apr 23 23:56:16.391239 systemd[1]: Hostname set to . Apr 23 23:56:16.391254 systemd[1]: Initializing machine ID from VM UUID. Apr 23 23:56:16.391267 zram_generator::config[1142]: No configuration found. 
Apr 23 23:56:16.391281 kernel: Guest personality initialized and is inactive Apr 23 23:56:16.391294 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 23 23:56:16.391306 kernel: Initialized host personality Apr 23 23:56:16.391318 kernel: NET: Registered PF_VSOCK protocol family Apr 23 23:56:16.391330 systemd[1]: Populated /etc with preset unit settings. Apr 23 23:56:16.391344 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 23 23:56:16.391357 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 23 23:56:16.391372 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 23 23:56:16.391387 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 23 23:56:16.391397 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 23 23:56:16.391408 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 23 23:56:16.391422 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 23 23:56:16.391431 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 23 23:56:16.391445 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 23 23:56:16.391455 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 23 23:56:16.391465 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 23 23:56:16.391474 systemd[1]: Created slice user.slice - User and Session Slice. Apr 23 23:56:16.391483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:56:16.391492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:56:16.391501 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Apr 23 23:56:16.391510 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 23 23:56:16.391521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 23 23:56:16.391530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 23 23:56:16.391540 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 23 23:56:16.391548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 23 23:56:16.391557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:56:16.391567 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 23 23:56:16.391576 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 23 23:56:16.391585 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 23 23:56:16.391596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 23 23:56:16.391606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 23 23:56:16.391615 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 23 23:56:16.391624 systemd[1]: Reached target slices.target - Slice Units. Apr 23 23:56:16.391633 systemd[1]: Reached target swap.target - Swaps. Apr 23 23:56:16.391644 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 23 23:56:16.391655 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 23 23:56:16.391664 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 23 23:56:16.391673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:56:16.391685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 23 23:56:16.391698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 23 23:56:16.391712 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 23 23:56:16.391724 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 23 23:56:16.391737 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 23 23:56:16.391750 systemd[1]: Mounting media.mount - External Media Directory... Apr 23 23:56:16.391763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:16.391775 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 23 23:56:16.391788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 23 23:56:16.391803 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 23 23:56:16.391816 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 23 23:56:16.391827 systemd[1]: Reached target machines.target - Containers. Apr 23 23:56:16.391837 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 23 23:56:16.391846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:56:16.391855 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 23 23:56:16.391864 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 23 23:56:16.391873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 23 23:56:16.391908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 23 23:56:16.391921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 23 23:56:16.391930 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 23 23:56:16.391939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 23 23:56:16.391948 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 23 23:56:16.391957 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 23 23:56:16.391966 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 23 23:56:16.391975 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 23 23:56:16.391984 systemd[1]: Stopped systemd-fsck-usr.service. Apr 23 23:56:16.391995 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:56:16.392004 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 23 23:56:16.392013 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 23 23:56:16.392022 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 23 23:56:16.392032 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 23 23:56:16.392043 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 23 23:56:16.392054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 23 23:56:16.392063 systemd[1]: verity-setup.service: Deactivated successfully. Apr 23 23:56:16.392072 systemd[1]: Stopped verity-setup.service. Apr 23 23:56:16.392081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:16.392093 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 23 23:56:16.392102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 23 23:56:16.392111 kernel: loop: module loaded Apr 23 23:56:16.392120 systemd[1]: Mounted media.mount - External Media Directory. Apr 23 23:56:16.392129 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 23 23:56:16.392138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 23 23:56:16.392147 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 23 23:56:16.392155 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 23 23:56:16.392164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:56:16.392175 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 23 23:56:16.392184 kernel: fuse: init (API version 7.41) Apr 23 23:56:16.392193 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 23 23:56:16.392201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 23 23:56:16.392210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 23 23:56:16.392225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 23 23:56:16.392234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 23 23:56:16.392242 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 23 23:56:16.392284 systemd-journald[1223]: Collecting audit messages is disabled. Apr 23 23:56:16.392306 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 23 23:56:16.392315 kernel: ACPI: bus type drm_connector registered Apr 23 23:56:16.392324 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 23 23:56:16.392333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 23 23:56:16.392342 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 23 23:56:16.392352 systemd-journald[1223]: Journal started Apr 23 23:56:16.392372 systemd-journald[1223]: Runtime Journal (/run/log/journal/d4a5d2af22a34f5f9d2076c31b63f945) is 8M, max 76.1M, 68.1M free. Apr 23 23:56:16.000450 systemd[1]: Queued start job for default target multi-user.target. Apr 23 23:56:16.395926 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 23 23:56:16.395959 systemd[1]: Started systemd-journald.service - Journal Service. Apr 23 23:56:16.013542 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 23 23:56:16.014288 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 23 23:56:16.400323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 23 23:56:16.401020 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 23 23:56:16.401657 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 23 23:56:16.410016 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:56:16.413695 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 23 23:56:16.417042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 23 23:56:16.423042 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 23 23:56:16.425032 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 23 23:56:16.425073 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 23 23:56:16.428087 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 23 23:56:16.431023 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 23 23:56:16.431544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 23 23:56:16.435008 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 23 23:56:16.437999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 23 23:56:16.438385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 23 23:56:16.443131 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 23 23:56:16.443519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 23 23:56:16.445864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 23 23:56:16.451321 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 23 23:56:16.454038 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 23 23:56:16.457019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 23 23:56:16.457442 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 23 23:56:16.467599 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 23 23:56:16.468381 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 23 23:56:16.472119 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 23 23:56:16.490863 systemd-journald[1223]: Time spent on flushing to /var/log/journal/d4a5d2af22a34f5f9d2076c31b63f945 is 36.223ms for 1246 entries. Apr 23 23:56:16.490863 systemd-journald[1223]: System Journal (/var/log/journal/d4a5d2af22a34f5f9d2076c31b63f945) is 8M, max 584.8M, 576.8M free. 
Apr 23 23:56:16.562685 systemd-journald[1223]: Received client request to flush runtime journal. Apr 23 23:56:16.562747 kernel: loop0: detected capacity change from 0 to 110984 Apr 23 23:56:16.562843 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 23 23:56:16.511865 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 23 23:56:16.556193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:56:16.563859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 23 23:56:16.568723 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 23 23:56:16.572921 kernel: loop1: detected capacity change from 0 to 219192 Apr 23 23:56:16.592642 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 23 23:56:16.598085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 23 23:56:16.619096 kernel: loop2: detected capacity change from 0 to 8 Apr 23 23:56:16.638303 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Apr 23 23:56:16.638321 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Apr 23 23:56:16.645943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:56:16.649029 kernel: loop3: detected capacity change from 0 to 128560 Apr 23 23:56:16.684003 kernel: loop4: detected capacity change from 0 to 110984 Apr 23 23:56:16.698911 kernel: loop5: detected capacity change from 0 to 219192 Apr 23 23:56:16.719935 kernel: loop6: detected capacity change from 0 to 8 Apr 23 23:56:16.723917 kernel: loop7: detected capacity change from 0 to 128560 Apr 23 23:56:16.738697 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 23 23:56:16.739282 (sd-merge)[1293]: Merged extensions into '/usr'. 
Apr 23 23:56:16.744186 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)... Apr 23 23:56:16.745148 systemd[1]: Reloading... Apr 23 23:56:16.821958 zram_generator::config[1315]: No configuration found. Apr 23 23:56:16.948007 ldconfig[1262]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 23 23:56:17.035555 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 23 23:56:17.035818 systemd[1]: Reloading finished in 289 ms. Apr 23 23:56:17.052431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 23 23:56:17.053223 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 23 23:56:17.054023 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 23 23:56:17.068118 systemd[1]: Starting ensure-sysext.service... Apr 23 23:56:17.070988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 23 23:56:17.073992 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:56:17.092402 systemd[1]: Reload requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)... Apr 23 23:56:17.092417 systemd[1]: Reloading... Apr 23 23:56:17.109919 systemd-udevd[1365]: Using default interface naming scheme 'v255'. Apr 23 23:56:17.110909 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 23 23:56:17.110947 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 23 23:56:17.111193 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 23 23:56:17.111513 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Apr 23 23:56:17.112297 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 23 23:56:17.112561 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Apr 23 23:56:17.112652 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Apr 23 23:56:17.121969 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot. Apr 23 23:56:17.122949 systemd-tmpfiles[1364]: Skipping /boot Apr 23 23:56:17.142277 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot. Apr 23 23:56:17.142738 systemd-tmpfiles[1364]: Skipping /boot Apr 23 23:56:17.208621 zram_generator::config[1405]: No configuration found. Apr 23 23:56:17.378917 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 23 23:56:17.385914 kernel: mousedev: PS/2 mouse device common for all mice Apr 23 23:56:17.398323 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 23 23:56:17.398391 systemd[1]: Reloading finished in 305 ms. Apr 23 23:56:17.407487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:56:17.409314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:56:17.419934 kernel: ACPI: button: Power Button [PWRF] Apr 23 23:56:17.438336 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 23 23:56:17.442232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:17.444595 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 23 23:56:17.450198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Apr 23 23:56:17.451045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:56:17.457826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 23 23:56:17.460707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 23 23:56:17.464239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 23 23:56:17.464945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 23 23:56:17.465252 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:56:17.466726 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 23 23:56:17.470434 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 23 23:56:17.474226 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 23 23:56:17.476436 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 23 23:56:17.476758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:17.480233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:17.480366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:56:17.480490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 23 23:56:17.480548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:56:17.480597 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:17.483574 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:17.483736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 23 23:56:17.493130 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 23 23:56:17.494059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 23 23:56:17.494136 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 23 23:56:17.494222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 23 23:56:17.502198 systemd[1]: Finished ensure-sysext.service. Apr 23 23:56:17.507541 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 23 23:56:17.512679 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 23 23:56:17.517644 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 23 23:56:17.521525 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 23 23:56:17.525473 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 23 23:56:17.541989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 23 23:56:17.542181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 23 23:56:17.548096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 23 23:56:17.550998 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 23 23:56:17.562291 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 23 23:56:17.562512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 23 23:56:17.563074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 23 23:56:17.565987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 23 23:56:17.568352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 23 23:56:17.569267 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 23 23:56:17.571357 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 23 23:56:17.576055 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 23 23:56:17.576968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 23 23:56:17.596556 augenrules[1535]: No rules Apr 23 23:56:17.597335 systemd[1]: audit-rules.service: Deactivated successfully. Apr 23 23:56:17.597564 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Apr 23 23:56:17.616842 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 23 23:56:17.617214 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 23 23:56:17.619911 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 23 23:56:17.642925 kernel: EDAC MC: Ver: 3.0.0 Apr 23 23:56:17.683117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:56:17.683954 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 23 23:56:17.706965 kernel: Console: switching to colour dummy device 80x25 Apr 23 23:56:17.710954 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 23 23:56:17.717395 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 23 23:56:17.717463 kernel: [drm] features: -context_init Apr 23 23:56:17.734359 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 23 23:56:17.735919 kernel: [drm] number of scanouts: 1 Apr 23 23:56:17.747918 kernel: [drm] number of cap sets: 0 Apr 23 23:56:17.757958 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Apr 23 23:56:17.757142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:56:17.757969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:56:17.761289 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:56:17.768134 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 23 23:56:17.768222 kernel: Console: switching to colour frame buffer device 160x50 Apr 23 23:56:17.765311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:56:17.780926 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 23 23:56:17.789141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 23 23:56:17.795930 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 23 23:56:17.798772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 23 23:56:17.800093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:56:17.802429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 23 23:56:17.830757 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 23 23:56:17.876724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:56:17.938434 systemd-resolved[1494]: Positive Trust Anchors:
Apr 23 23:56:17.938792 systemd-resolved[1494]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 23 23:56:17.938875 systemd-resolved[1494]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 23 23:56:17.943970 systemd-resolved[1494]: Using system hostname 'ci-4459-2-4-n-19674e2966'.
Apr 23 23:56:17.945809 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 23 23:56:17.946257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 23 23:56:17.953685 systemd-networkd[1490]: lo: Link UP
Apr 23 23:56:17.954040 systemd-networkd[1490]: lo: Gained carrier
Apr 23 23:56:17.957252 systemd-networkd[1490]: Enumeration completed
Apr 23 23:56:17.957369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 23 23:56:17.957608 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:17.957613 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:56:17.958225 systemd[1]: Reached target network.target - Network.
Apr 23 23:56:17.959022 systemd-networkd[1490]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:17.959030 systemd-networkd[1490]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:56:17.959937 systemd-networkd[1490]: eth0: Link UP
Apr 23 23:56:17.960479 systemd-networkd[1490]: eth0: Gained carrier
Apr 23 23:56:17.960538 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:17.962104 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 23 23:56:17.965039 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 23 23:56:17.965258 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 23 23:56:17.965423 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 23 23:56:17.965596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 23 23:56:17.965689 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 23 23:56:17.965760 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 23 23:56:17.965821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 23 23:56:17.965920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 23 23:56:17.965940 systemd[1]: Reached target paths.target - Path Units.
Apr 23 23:56:17.965996 systemd[1]: Reached target time-set.target - System Time Set.
Apr 23 23:56:17.966197 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 23 23:56:17.966338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 23 23:56:17.968209 systemd-networkd[1490]: eth1: Link UP
Apr 23 23:56:17.969199 systemd[1]: Reached target timers.target - Timer Units.
Apr 23 23:56:17.969502 systemd-networkd[1490]: eth1: Gained carrier
Apr 23 23:56:17.969527 systemd-networkd[1490]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:56:17.971894 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 23 23:56:17.976425 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 23 23:56:17.981410 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 23 23:56:17.985808 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 23 23:56:17.986785 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 23 23:56:17.997775 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 23 23:56:17.998937 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 23 23:56:18.000370 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 23 23:56:18.005377 systemd[1]: Reached target sockets.target - Socket Units.
Apr 23 23:56:18.007726 systemd[1]: Reached target basic.target - Basic System.
Apr 23 23:56:18.007985 systemd-networkd[1490]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 23 23:56:18.008909 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Apr 23 23:56:18.010265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 23 23:56:18.010286 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 23 23:56:18.012083 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 23 23:56:18.018152 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 23 23:56:18.021982 systemd-networkd[1490]: eth0: DHCPv4 address 135.181.100.135/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 23 23:56:18.022319 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Apr 23 23:56:18.023496 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Apr 23 23:56:18.026125 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 23 23:56:18.030058 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 23 23:56:18.035102 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 23 23:56:18.042515 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 23 23:56:18.044317 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 23 23:56:18.046025 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 23 23:56:18.051024 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 23 23:56:18.052707 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 23 23:56:18.063022 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 23 23:56:18.067230 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing passwd entry cache
Apr 23 23:56:18.067266 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 23 23:56:18.067631 oslogin_cache_refresh[1581]: Refreshing passwd entry cache
Apr 23 23:56:18.073700 coreos-metadata[1576]: Apr 23 23:56:18.073 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 23 23:56:18.075863 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 23 23:56:18.081462 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting users, quitting
Apr 23 23:56:18.081462 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 23 23:56:18.081462 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing group entry cache
Apr 23 23:56:18.081047 oslogin_cache_refresh[1581]: Failure getting users, quitting
Apr 23 23:56:18.081069 oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 23 23:56:18.081119 oslogin_cache_refresh[1581]: Refreshing group entry cache
Apr 23 23:56:18.083662 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting groups, quitting
Apr 23 23:56:18.083662 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 23 23:56:18.081809 oslogin_cache_refresh[1581]: Failure getting groups, quitting
Apr 23 23:56:18.081818 oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 23 23:56:18.089183 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 23 23:56:18.095269 jq[1579]: false
Apr 23 23:56:18.091615 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 23 23:56:18.093217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 23 23:56:18.095811 systemd[1]: Starting update-engine.service - Update Engine...
Apr 23 23:56:18.101511 coreos-metadata[1576]: Apr 23 23:56:18.097 INFO Fetch successful
Apr 23 23:56:18.101511 coreos-metadata[1576]: Apr 23 23:56:18.097 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 23 23:56:18.101511 coreos-metadata[1576]: Apr 23 23:56:18.098 INFO Fetch successful
Apr 23 23:56:18.101989 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 23 23:56:18.107709 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 23 23:56:18.122547 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 23 23:56:18.129358 jq[1599]: true
Apr 23 23:56:18.126631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 23 23:56:18.126850 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 23 23:56:18.127233 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 23 23:56:18.127411 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 23 23:56:18.128057 systemd[1]: motdgen.service: Deactivated successfully.
Apr 23 23:56:18.128232 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 23 23:56:18.135420 extend-filesystems[1580]: Found /dev/sda6
Apr 23 23:56:18.140914 extend-filesystems[1580]: Found /dev/sda9
Apr 23 23:56:18.143949 extend-filesystems[1580]: Checking size of /dev/sda9
Apr 23 23:56:18.160294 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 23 23:56:18.160946 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 23 23:56:18.199247 tar[1607]: linux-amd64/LICENSE
Apr 23 23:56:18.199511 tar[1607]: linux-amd64/helm
Apr 23 23:56:18.203385 update_engine[1597]: I20260423 23:56:18.202852 1597 main.cc:92] Flatcar Update Engine starting
Apr 23 23:56:18.204929 jq[1608]: true
Apr 23 23:56:18.206242 (ntainerd)[1624]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 23 23:56:18.207371 extend-filesystems[1580]: Resized partition /dev/sda9
Apr 23 23:56:18.218059 extend-filesystems[1632]: resize2fs 1.47.3 (8-Jul-2025)
Apr 23 23:56:18.230770 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 23 23:56:18.229189 systemd-logind[1590]: New seat seat0.
Apr 23 23:56:18.234511 systemd-logind[1590]: Watching system buttons on /dev/input/event3 (Power Button)
Apr 23 23:56:18.234536 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 23 23:56:18.234766 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 23 23:56:18.270544 dbus-daemon[1577]: [system] SELinux support is enabled
Apr 23 23:56:18.270718 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 23 23:56:18.275197 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 23 23:56:18.275223 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 23 23:56:18.276335 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 23 23:56:18.276348 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 23 23:56:18.302838 systemd[1]: Started update-engine.service - Update Engine.
Apr 23 23:56:18.309600 update_engine[1597]: I20260423 23:56:18.309351 1597 update_check_scheduler.cc:74] Next update check in 9m59s
Apr 23 23:56:18.332417 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 23 23:56:18.350197 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 23 23:56:18.355222 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 23 23:56:18.416912 bash[1653]: Updated "/home/core/.ssh/authorized_keys"
Apr 23 23:56:18.417687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 23 23:56:18.428916 systemd[1]: Starting sshkeys.service...
Apr 23 23:56:18.451325 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 23 23:56:18.456159 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 23 23:56:18.479493 sshd_keygen[1600]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 23 23:56:18.513315 coreos-metadata[1667]: Apr 23 23:56:18.513 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 23 23:56:18.515362 coreos-metadata[1667]: Apr 23 23:56:18.514 INFO Fetch successful
Apr 23 23:56:18.516255 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 23 23:56:18.517575 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 23 23:56:18.524394 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 23 23:56:18.526784 unknown[1667]: wrote ssh authorized keys file for user: core
Apr 23 23:56:18.531203 containerd[1624]: time="2026-04-23T23:56:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 23 23:56:18.532676 containerd[1624]: time="2026-04-23T23:56:18.532606742Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 23 23:56:18.552179 containerd[1624]: time="2026-04-23T23:56:18.552121720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.54µs"
Apr 23 23:56:18.552179 containerd[1624]: time="2026-04-23T23:56:18.552164910Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 23 23:56:18.552179 containerd[1624]: time="2026-04-23T23:56:18.552185440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 23 23:56:18.554322 containerd[1624]: time="2026-04-23T23:56:18.553298811Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 23 23:56:18.554322 containerd[1624]: time="2026-04-23T23:56:18.553558061Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 23 23:56:18.554322 containerd[1624]: time="2026-04-23T23:56:18.553588911Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554322 containerd[1624]: time="2026-04-23T23:56:18.553954131Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554322 containerd[1624]: time="2026-04-23T23:56:18.553974511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554749 containerd[1624]: time="2026-04-23T23:56:18.554521501Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554749 containerd[1624]: time="2026-04-23T23:56:18.554545821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554749 containerd[1624]: time="2026-04-23T23:56:18.554559031Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554749 containerd[1624]: time="2026-04-23T23:56:18.554567771Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554749 containerd[1624]: time="2026-04-23T23:56:18.554643781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554857 containerd[1624]: time="2026-04-23T23:56:18.554831401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 23 23:56:18.554913 containerd[1624]: time="2026-04-23T23:56:18.554871681Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 23 23:56:18.555727 containerd[1624]: time="2026-04-23T23:56:18.555701352Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 23 23:56:18.556260 containerd[1624]: time="2026-04-23T23:56:18.556239102Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 23 23:56:18.556682 containerd[1624]: time="2026-04-23T23:56:18.556656482Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 23 23:56:18.556825 containerd[1624]: time="2026-04-23T23:56:18.556807432Z" level=info msg="metadata content store policy set" policy=shared
Apr 23 23:56:18.562751 systemd[1]: issuegen.service: Deactivated successfully.
Apr 23 23:56:18.563014 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 23 23:56:18.568566 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 23 23:56:18.584290 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 23 23:56:18.587478 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 23 23:56:18.591191 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593028437Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593231327Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593249367Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593259867Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593325727Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593334107Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593345447Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593545757Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593562477Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593574667Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593585547Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 23 23:56:18.593849 containerd[1624]: time="2026-04-23T23:56:18.593599427Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 23 23:56:18.593045 systemd[1]: Reached target getty.target - Login Prompts.
Apr 23 23:56:18.597550 containerd[1624]: time="2026-04-23T23:56:18.597477519Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 23 23:56:18.597550 containerd[1624]: time="2026-04-23T23:56:18.597507779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 23 23:56:18.597581 containerd[1624]: time="2026-04-23T23:56:18.597531609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 23 23:56:18.597596 containerd[1624]: time="2026-04-23T23:56:18.597585619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 23 23:56:18.597647 containerd[1624]: time="2026-04-23T23:56:18.597595729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 23 23:56:18.597647 containerd[1624]: time="2026-04-23T23:56:18.597604409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 23 23:56:18.597647 containerd[1624]: time="2026-04-23T23:56:18.597614359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 23 23:56:18.597647 containerd[1624]: time="2026-04-23T23:56:18.597622479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 23 23:56:18.597647 containerd[1624]: time="2026-04-23T23:56:18.597632229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 23 23:56:18.597759 containerd[1624]: time="2026-04-23T23:56:18.597656839Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 23 23:56:18.598112 containerd[1624]: time="2026-04-23T23:56:18.598085139Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 23 23:56:18.599194 containerd[1624]: time="2026-04-23T23:56:18.598916510Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 23 23:56:18.599194 containerd[1624]: time="2026-04-23T23:56:18.598948390Z" level=info msg="Start snapshots syncer"
Apr 23 23:56:18.599194 containerd[1624]: time="2026-04-23T23:56:18.598997520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 23 23:56:18.602781 containerd[1624]: time="2026-04-23T23:56:18.599279350Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 23 23:56:18.602781 containerd[1624]: time="2026-04-23T23:56:18.599316800Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 23 23:56:18.602925 containerd[1624]: time="2026-04-23T23:56:18.599375920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 23 23:56:18.603363 containerd[1624]: time="2026-04-23T23:56:18.603312991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 23 23:56:18.603387 containerd[1624]: time="2026-04-23T23:56:18.603368501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 23 23:56:18.603387 containerd[1624]: time="2026-04-23T23:56:18.603380181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603388571Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603404751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603412901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603424761Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603448741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603456921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603481501Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603530591Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603542601Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603550441Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603557271Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603563551Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603613691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 23 23:56:18.603644 containerd[1624]: time="2026-04-23T23:56:18.603627451Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 23 23:56:18.603824 containerd[1624]: time="2026-04-23T23:56:18.603650962Z" level=info msg="runtime interface created"
Apr 23 23:56:18.603824 containerd[1624]: time="2026-04-23T23:56:18.603656452Z" level=info msg="created NRI interface"
Apr 23 23:56:18.603824 containerd[1624]: time="2026-04-23T23:56:18.603663402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 23 23:56:18.603824 containerd[1624]: time="2026-04-23T23:56:18.603675132Z" level=info msg="Connect containerd service"
Apr 23 23:56:18.603824 containerd[1624]: time="2026-04-23T23:56:18.603692042Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 23 23:56:18.608224 containerd[1624]: time="2026-04-23T23:56:18.608102523Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 23 23:56:18.616080 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys"
Apr 23 23:56:18.618236 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 23 23:56:18.624774 systemd[1]: Finished sshkeys.service.
Apr 23 23:56:18.669932 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 23 23:56:18.684023 containerd[1624]: time="2026-04-23T23:56:18.683954415Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 23 23:56:18.684218 containerd[1624]: time="2026-04-23T23:56:18.683967565Z" level=info msg="Start subscribing containerd event"
Apr 23 23:56:18.684218 containerd[1624]: time="2026-04-23T23:56:18.684030465Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 23 23:56:18.684343 containerd[1624]: time="2026-04-23T23:56:18.684183095Z" level=info msg="Start recovering state"
Apr 23 23:56:18.684363 containerd[1624]: time="2026-04-23T23:56:18.684356425Z" level=info msg="Start event monitor"
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684368275Z" level=info msg="Start cni network conf syncer for default"
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684376485Z" level=info msg="Start streaming server"
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684383855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684389905Z" level=info msg="runtime interface starting up..."
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684394815Z" level=info msg="starting plugins..."
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684405895Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 23 23:56:18.702583 containerd[1624]: time="2026-04-23T23:56:18.684530895Z" level=info msg="containerd successfully booted in 0.155685s"
Apr 23 23:56:18.684639 systemd[1]: Started containerd.service - containerd container runtime.
Apr 23 23:56:18.706915 extend-filesystems[1632]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 23 23:56:18.706915 extend-filesystems[1632]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 23 23:56:18.706915 extend-filesystems[1632]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 23 23:56:18.717333 extend-filesystems[1580]: Resized filesystem in /dev/sda9
Apr 23 23:56:18.708762 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 23 23:56:18.709017 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 23 23:56:18.808627 tar[1607]: linux-amd64/README.md Apr 23 23:56:18.823980 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 23 23:56:19.188210 systemd-networkd[1490]: eth0: Gained IPv6LL Apr 23 23:56:19.190001 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Apr 23 23:56:19.191346 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 23 23:56:19.194122 systemd[1]: Reached target network-online.target - Network is Online. Apr 23 23:56:19.200784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:56:19.211270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 23 23:56:19.261797 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 23 23:56:19.483704 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 23 23:56:19.489709 systemd[1]: Started sshd@0-135.181.100.135:22-20.229.252.112:58462.service - OpenSSH per-connection server daemon (20.229.252.112:58462). Apr 23 23:56:19.572274 systemd-networkd[1490]: eth1: Gained IPv6LL Apr 23 23:56:19.572830 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Apr 23 23:56:19.691662 sshd[1722]: Accepted publickey for core from 20.229.252.112 port 58462 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:19.695573 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:19.706456 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 23 23:56:19.710994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 23 23:56:19.725800 systemd-logind[1590]: New session 1 of user core. Apr 23 23:56:19.734633 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 23 23:56:19.741569 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 23 23:56:19.757729 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 23 23:56:19.760773 systemd-logind[1590]: New session c1 of user core. Apr 23 23:56:19.880402 systemd[1727]: Queued start job for default target default.target. Apr 23 23:56:19.890532 systemd[1727]: Created slice app.slice - User Application Slice. Apr 23 23:56:19.890663 systemd[1727]: Reached target paths.target - Paths. Apr 23 23:56:19.890734 systemd[1727]: Reached target timers.target - Timers. Apr 23 23:56:19.892315 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 23 23:56:19.910094 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 23 23:56:19.910316 systemd[1727]: Reached target sockets.target - Sockets. Apr 23 23:56:19.910455 systemd[1727]: Reached target basic.target - Basic System. Apr 23 23:56:19.910731 systemd[1727]: Reached target default.target - Main User Target. Apr 23 23:56:19.910804 systemd[1727]: Startup finished in 142ms. Apr 23 23:56:19.910929 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 23 23:56:19.919569 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 23 23:56:20.030870 systemd[1]: Started sshd@1-135.181.100.135:22-20.229.252.112:58472.service - OpenSSH per-connection server daemon (20.229.252.112:58472). Apr 23 23:56:20.032660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:56:20.038930 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 23 23:56:20.039601 systemd[1]: Startup finished in 2.955s (kernel) + 5.708s (initrd) + 4.734s (userspace) = 13.398s. 
Apr 23 23:56:20.048395 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:56:20.215140 sshd[1742]: Accepted publickey for core from 20.229.252.112 port 58472 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:20.216854 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:20.224980 systemd-logind[1590]: New session 2 of user core. Apr 23 23:56:20.235216 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 23 23:56:20.310937 sshd[1755]: Connection closed by 20.229.252.112 port 58472 Apr 23 23:56:20.312172 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Apr 23 23:56:20.317205 systemd-logind[1590]: Session 2 logged out. Waiting for processes to exit. Apr 23 23:56:20.317753 systemd[1]: sshd@1-135.181.100.135:22-20.229.252.112:58472.service: Deactivated successfully. Apr 23 23:56:20.320010 systemd[1]: session-2.scope: Deactivated successfully. Apr 23 23:56:20.323237 systemd-logind[1590]: Removed session 2. Apr 23 23:56:20.355984 systemd[1]: Started sshd@2-135.181.100.135:22-20.229.252.112:58474.service - OpenSSH per-connection server daemon (20.229.252.112:58474). Apr 23 23:56:20.488079 kubelet[1743]: E0423 23:56:20.488021 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:56:20.491080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:56:20.491257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:56:20.491755 systemd[1]: kubelet.service: Consumed 746ms CPU time, 257.4M memory peak. 
Apr 23 23:56:20.551979 sshd[1761]: Accepted publickey for core from 20.229.252.112 port 58474 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:20.554447 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:20.563974 systemd-logind[1590]: New session 3 of user core. Apr 23 23:56:20.572138 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 23 23:56:20.645170 sshd[1766]: Connection closed by 20.229.252.112 port 58474 Apr 23 23:56:20.647235 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Apr 23 23:56:20.656127 systemd[1]: sshd@2-135.181.100.135:22-20.229.252.112:58474.service: Deactivated successfully. Apr 23 23:56:20.659762 systemd[1]: session-3.scope: Deactivated successfully. Apr 23 23:56:20.661378 systemd-logind[1590]: Session 3 logged out. Waiting for processes to exit. Apr 23 23:56:20.663772 systemd-logind[1590]: Removed session 3. Apr 23 23:56:20.690593 systemd[1]: Started sshd@3-135.181.100.135:22-20.229.252.112:58490.service - OpenSSH per-connection server daemon (20.229.252.112:58490). Apr 23 23:56:20.881868 sshd[1772]: Accepted publickey for core from 20.229.252.112 port 58490 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:20.885334 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:20.894999 systemd-logind[1590]: New session 4 of user core. Apr 23 23:56:20.905223 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 23 23:56:20.988693 sshd[1775]: Connection closed by 20.229.252.112 port 58490 Apr 23 23:56:20.989675 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Apr 23 23:56:20.996187 systemd[1]: sshd@3-135.181.100.135:22-20.229.252.112:58490.service: Deactivated successfully. Apr 23 23:56:20.999850 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 23 23:56:21.003805 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Apr 23 23:56:21.006248 systemd-logind[1590]: Removed session 4. Apr 23 23:56:21.031693 systemd[1]: Started sshd@4-135.181.100.135:22-20.229.252.112:58500.service - OpenSSH per-connection server daemon (20.229.252.112:58500). Apr 23 23:56:21.229002 sshd[1781]: Accepted publickey for core from 20.229.252.112 port 58500 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:21.230826 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:21.237963 systemd-logind[1590]: New session 5 of user core. Apr 23 23:56:21.243093 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 23 23:56:21.311785 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 23 23:56:21.312462 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:56:21.329184 sudo[1785]: pam_unix(sudo:session): session closed for user root Apr 23 23:56:21.361028 sshd[1784]: Connection closed by 20.229.252.112 port 58500 Apr 23 23:56:21.362436 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Apr 23 23:56:21.370604 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Apr 23 23:56:21.371042 systemd[1]: sshd@4-135.181.100.135:22-20.229.252.112:58500.service: Deactivated successfully. Apr 23 23:56:21.374636 systemd[1]: session-5.scope: Deactivated successfully. Apr 23 23:56:21.378120 systemd-logind[1590]: Removed session 5. Apr 23 23:56:21.409811 systemd[1]: Started sshd@5-135.181.100.135:22-20.229.252.112:58502.service - OpenSSH per-connection server daemon (20.229.252.112:58502). 
Apr 23 23:56:21.624347 sshd[1791]: Accepted publickey for core from 20.229.252.112 port 58502 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:21.626595 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:21.635690 systemd-logind[1590]: New session 6 of user core. Apr 23 23:56:21.644335 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 23 23:56:21.700435 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 23 23:56:21.701386 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:56:21.710380 sudo[1796]: pam_unix(sudo:session): session closed for user root Apr 23 23:56:21.723334 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 23 23:56:21.724094 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:56:21.738859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 23 23:56:21.797177 augenrules[1818]: No rules Apr 23 23:56:21.799272 systemd[1]: audit-rules.service: Deactivated successfully. Apr 23 23:56:21.799816 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 23 23:56:21.801264 sudo[1795]: pam_unix(sudo:session): session closed for user root Apr 23 23:56:21.832257 sshd[1794]: Connection closed by 20.229.252.112 port 58502 Apr 23 23:56:21.834188 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Apr 23 23:56:21.840110 systemd[1]: sshd@5-135.181.100.135:22-20.229.252.112:58502.service: Deactivated successfully. Apr 23 23:56:21.843968 systemd[1]: session-6.scope: Deactivated successfully. Apr 23 23:56:21.846154 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Apr 23 23:56:21.849302 systemd-logind[1590]: Removed session 6. 
Apr 23 23:56:21.880872 systemd[1]: Started sshd@6-135.181.100.135:22-20.229.252.112:58510.service - OpenSSH per-connection server daemon (20.229.252.112:58510). Apr 23 23:56:22.091614 sshd[1827]: Accepted publickey for core from 20.229.252.112 port 58510 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4 Apr 23 23:56:22.094158 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:56:22.102600 systemd-logind[1590]: New session 7 of user core. Apr 23 23:56:22.112192 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 23 23:56:22.164169 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 23 23:56:22.164782 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:56:22.482769 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 23 23:56:22.495486 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 23 23:56:22.771769 dockerd[1849]: time="2026-04-23T23:56:22.771662067Z" level=info msg="Starting up" Apr 23 23:56:22.773130 dockerd[1849]: time="2026-04-23T23:56:22.773066388Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 23 23:56:22.786345 dockerd[1849]: time="2026-04-23T23:56:22.786306414Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 23 23:56:22.807139 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3241033507-merged.mount: Deactivated successfully. Apr 23 23:56:22.931752 dockerd[1849]: time="2026-04-23T23:56:22.931693564Z" level=info msg="Loading containers: start." Apr 23 23:56:22.942925 kernel: Initializing XFRM netlink socket Apr 23 23:56:23.167696 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. 
Apr 23 23:56:23.177092 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Apr 23 23:56:23.212847 systemd-networkd[1490]: docker0: Link UP Apr 23 23:56:23.213462 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Apr 23 23:56:23.217977 dockerd[1849]: time="2026-04-23T23:56:23.217935633Z" level=info msg="Loading containers: done." Apr 23 23:56:23.234434 dockerd[1849]: time="2026-04-23T23:56:23.234378030Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 23 23:56:23.234590 dockerd[1849]: time="2026-04-23T23:56:23.234465630Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 23 23:56:23.234590 dockerd[1849]: time="2026-04-23T23:56:23.234551910Z" level=info msg="Initializing buildkit" Apr 23 23:56:23.259251 dockerd[1849]: time="2026-04-23T23:56:23.259183810Z" level=info msg="Completed buildkit initialization" Apr 23 23:56:23.264866 dockerd[1849]: time="2026-04-23T23:56:23.264765993Z" level=info msg="Daemon has completed initialization" Apr 23 23:56:23.264866 dockerd[1849]: time="2026-04-23T23:56:23.264827053Z" level=info msg="API listen on /run/docker.sock" Apr 23 23:56:23.265222 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 23 23:56:23.796323 containerd[1624]: time="2026-04-23T23:56:23.796255214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 23 23:56:23.802490 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3580666927-merged.mount: Deactivated successfully. Apr 23 23:56:24.320560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114871204.mount: Deactivated successfully. 
Apr 23 23:56:25.417526 containerd[1624]: time="2026-04-23T23:56:25.417471239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:25.418699 containerd[1624]: time="2026-04-23T23:56:25.418557730Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100614" Apr 23 23:56:25.419604 containerd[1624]: time="2026-04-23T23:56:25.419579690Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:25.421668 containerd[1624]: time="2026-04-23T23:56:25.421641861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:25.422288 containerd[1624]: time="2026-04-23T23:56:25.422258451Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.625950567s" Apr 23 23:56:25.422323 containerd[1624]: time="2026-04-23T23:56:25.422290731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 23 23:56:25.423009 containerd[1624]: time="2026-04-23T23:56:25.422972052Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 23 23:56:26.569225 containerd[1624]: time="2026-04-23T23:56:26.569171549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:26.570422 containerd[1624]: time="2026-04-23T23:56:26.570257319Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252760" Apr 23 23:56:26.571332 containerd[1624]: time="2026-04-23T23:56:26.571310470Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:26.573273 containerd[1624]: time="2026-04-23T23:56:26.573247761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:26.573927 containerd[1624]: time="2026-04-23T23:56:26.573897601Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.150903009s" Apr 23 23:56:26.573980 containerd[1624]: time="2026-04-23T23:56:26.573931601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 23 23:56:26.574570 containerd[1624]: time="2026-04-23T23:56:26.574404041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 23 23:56:27.620921 containerd[1624]: time="2026-04-23T23:56:27.620854717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:27.622003 containerd[1624]: time="2026-04-23T23:56:27.621796437Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810913" Apr 23 23:56:27.622871 containerd[1624]: time="2026-04-23T23:56:27.622844368Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:27.625614 containerd[1624]: time="2026-04-23T23:56:27.625579529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:27.626510 containerd[1624]: time="2026-04-23T23:56:27.626485469Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.052060608s" Apr 23 23:56:27.626608 containerd[1624]: time="2026-04-23T23:56:27.626593039Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 23 23:56:27.627194 containerd[1624]: time="2026-04-23T23:56:27.627155260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 23 23:56:28.663210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205045146.mount: Deactivated successfully. 
Apr 23 23:56:28.896926 containerd[1624]: time="2026-04-23T23:56:28.896848518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:28.898273 containerd[1624]: time="2026-04-23T23:56:28.898223129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972982" Apr 23 23:56:28.899408 containerd[1624]: time="2026-04-23T23:56:28.899332670Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:28.901759 containerd[1624]: time="2026-04-23T23:56:28.901714371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:28.902326 containerd[1624]: time="2026-04-23T23:56:28.902168811Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.274992691s" Apr 23 23:56:28.902326 containerd[1624]: time="2026-04-23T23:56:28.902205021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 23 23:56:28.902660 containerd[1624]: time="2026-04-23T23:56:28.902625771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 23 23:56:29.494321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764667065.mount: Deactivated successfully. 
Apr 23 23:56:30.374305 containerd[1624]: time="2026-04-23T23:56:30.373655184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:30.374305 containerd[1624]: time="2026-04-23T23:56:30.374281794Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Apr 23 23:56:30.374806 containerd[1624]: time="2026-04-23T23:56:30.374788634Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:30.376480 containerd[1624]: time="2026-04-23T23:56:30.376456685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:30.377078 containerd[1624]: time="2026-04-23T23:56:30.377055585Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.474400244s" Apr 23 23:56:30.377130 containerd[1624]: time="2026-04-23T23:56:30.377078895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 23 23:56:30.377412 containerd[1624]: time="2026-04-23T23:56:30.377396705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 23 23:56:30.742228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 23 23:56:30.745673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 23 23:56:30.887451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933914898.mount: Deactivated successfully. Apr 23 23:56:30.900784 containerd[1624]: time="2026-04-23T23:56:30.900263473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:30.901658 containerd[1624]: time="2026-04-23T23:56:30.901640143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Apr 23 23:56:30.902596 containerd[1624]: time="2026-04-23T23:56:30.902559594Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:30.905030 containerd[1624]: time="2026-04-23T23:56:30.904983985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:30.905708 containerd[1624]: time="2026-04-23T23:56:30.905571855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 528.15621ms" Apr 23 23:56:30.905708 containerd[1624]: time="2026-04-23T23:56:30.905591575Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 23 23:56:30.905999 containerd[1624]: time="2026-04-23T23:56:30.905990515Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 23 23:56:30.927302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:56:30.938210 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:56:30.971852 kubelet[2199]: E0423 23:56:30.971799 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:56:30.976096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:56:30.976541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:56:30.977025 systemd[1]: kubelet.service: Consumed 182ms CPU time, 110.5M memory peak. Apr 23 23:56:31.380546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763237313.mount: Deactivated successfully. Apr 23 23:56:32.315231 containerd[1624]: time="2026-04-23T23:56:32.315173732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:32.316195 containerd[1624]: time="2026-04-23T23:56:32.316111683Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874917" Apr 23 23:56:32.317301 containerd[1624]: time="2026-04-23T23:56:32.317277623Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:32.319826 containerd[1624]: time="2026-04-23T23:56:32.319781934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:32.320634 containerd[1624]: time="2026-04-23T23:56:32.320591594Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.414585219s" Apr 23 23:56:32.320634 containerd[1624]: time="2026-04-23T23:56:32.320627434Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 23 23:56:34.725281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:56:34.725557 systemd[1]: kubelet.service: Consumed 182ms CPU time, 110.5M memory peak. Apr 23 23:56:34.731117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:56:34.769030 systemd[1]: Reload requested from client PID 2295 ('systemctl') (unit session-7.scope)... Apr 23 23:56:34.769169 systemd[1]: Reloading... Apr 23 23:56:34.883946 zram_generator::config[2345]: No configuration found. Apr 23 23:56:35.061757 systemd[1]: Reloading finished in 292 ms. Apr 23 23:56:35.097769 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 23 23:56:35.097852 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 23 23:56:35.098471 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:56:35.098527 systemd[1]: kubelet.service: Consumed 104ms CPU time, 98.3M memory peak. Apr 23 23:56:35.102058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:56:35.264809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:56:35.270568 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:56:35.312519 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 23 23:56:35.313565 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:56:35.313565 kubelet[2391]: I0423 23:56:35.313056 2391 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 23:56:35.539588 kubelet[2391]: I0423 23:56:35.539557 2391 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 23 23:56:35.539713 kubelet[2391]: I0423 23:56:35.539705 2391 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 23:56:35.541231 kubelet[2391]: I0423 23:56:35.541219 2391 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 23 23:56:35.541291 kubelet[2391]: I0423 23:56:35.541282 2391 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 23 23:56:35.541526 kubelet[2391]: I0423 23:56:35.541517 2391 server.go:956] "Client rotation is on, will bootstrap in background" Apr 23 23:56:35.550728 kubelet[2391]: I0423 23:56:35.550714 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 23 23:56:35.551306 kubelet[2391]: E0423 23:56:35.550923 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://135.181.100.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 23 23:56:35.554941 kubelet[2391]: I0423 23:56:35.554079 2391 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 23:56:35.557486 kubelet[2391]: I0423 23:56:35.557474 2391 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 23 23:56:35.558430 kubelet[2391]: I0423 23:56:35.558399 2391 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 23:56:35.558594 kubelet[2391]: I0423 23:56:35.558484 2391 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-19674e2966","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 23:56:35.558690 kubelet[2391]: I0423 23:56:35.558680 2391 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 
23:56:35.558732 kubelet[2391]: I0423 23:56:35.558726 2391 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 23:56:35.558830 kubelet[2391]: I0423 23:56:35.558822 2391 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 23 23:56:35.560625 kubelet[2391]: I0423 23:56:35.560613 2391 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:56:35.560811 kubelet[2391]: I0423 23:56:35.560803 2391 kubelet.go:475] "Attempting to sync node with API server" Apr 23 23:56:35.560874 kubelet[2391]: I0423 23:56:35.560868 2391 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 23:56:35.560951 kubelet[2391]: I0423 23:56:35.560945 2391 kubelet.go:387] "Adding apiserver pod source" Apr 23 23:56:35.560990 kubelet[2391]: I0423 23:56:35.560985 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 23:56:35.561751 kubelet[2391]: E0423 23:56:35.561683 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.100.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-19674e2966&limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 23:56:35.564207 kubelet[2391]: I0423 23:56:35.563152 2391 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 23 23:56:35.564207 kubelet[2391]: I0423 23:56:35.563497 2391 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 23:56:35.564207 kubelet[2391]: I0423 23:56:35.563526 2391 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 23 23:56:35.564207 kubelet[2391]: W0423 
23:56:35.563560 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 23 23:56:35.568344 kubelet[2391]: I0423 23:56:35.568333 2391 server.go:1262] "Started kubelet" Apr 23 23:56:35.568615 kubelet[2391]: E0423 23:56:35.568599 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.100.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 23:56:35.568985 kubelet[2391]: I0423 23:56:35.568945 2391 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 23:56:35.573902 kubelet[2391]: I0423 23:56:35.573262 2391 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 23:56:35.573902 kubelet[2391]: I0423 23:56:35.573308 2391 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 23 23:56:35.573902 kubelet[2391]: I0423 23:56:35.573539 2391 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 23:56:35.575906 kubelet[2391]: I0423 23:56:35.575858 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 23:56:35.578078 kubelet[2391]: E0423 23:56:35.577058 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.100.135:6443/api/v1/namespaces/default/events\": dial tcp 135.181.100.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-19674e2966.18a921b678c5059d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-19674e2966,UID:ci-4459-2-4-n-19674e2966,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-19674e2966,},FirstTimestamp:2026-04-23 23:56:35.568313757 +0000 UTC m=+0.293573873,LastTimestamp:2026-04-23 23:56:35.568313757 +0000 UTC m=+0.293573873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-19674e2966,}" Apr 23 23:56:35.579402 kubelet[2391]: I0423 23:56:35.579388 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 23 23:56:35.580644 kubelet[2391]: I0423 23:56:35.580634 2391 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 23 23:56:35.580868 kubelet[2391]: E0423 23:56:35.580850 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:35.582285 kubelet[2391]: I0423 23:56:35.581347 2391 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 23 23:56:35.582285 kubelet[2391]: I0423 23:56:35.581377 2391 reconciler.go:29] "Reconciler: start to sync state" Apr 23 23:56:35.582285 kubelet[2391]: I0423 23:56:35.581740 2391 server.go:310] "Adding debug handlers to kubelet server" Apr 23 23:56:35.583959 kubelet[2391]: E0423 23:56:35.583944 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.100.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 23:56:35.584080 kubelet[2391]: E0423 23:56:35.584038 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-19674e2966?timeout=10s\": dial tcp 135.181.100.135:6443: 
connect: connection refused" interval="200ms" Apr 23 23:56:35.585561 kubelet[2391]: I0423 23:56:35.585551 2391 factory.go:223] Registration of the containerd container factory successfully Apr 23 23:56:35.585601 kubelet[2391]: I0423 23:56:35.585596 2391 factory.go:223] Registration of the systemd container factory successfully Apr 23 23:56:35.585678 kubelet[2391]: I0423 23:56:35.585668 2391 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 23 23:56:35.605643 kubelet[2391]: I0423 23:56:35.605576 2391 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 23 23:56:35.606779 kubelet[2391]: I0423 23:56:35.606747 2391 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 23 23:56:35.606779 kubelet[2391]: I0423 23:56:35.606772 2391 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 23 23:56:35.606834 kubelet[2391]: I0423 23:56:35.606806 2391 kubelet.go:2428] "Starting kubelet main sync loop" Apr 23 23:56:35.606944 kubelet[2391]: E0423 23:56:35.606874 2391 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 23 23:56:35.612005 kubelet[2391]: E0423 23:56:35.611826 2391 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 23 23:56:35.614784 kubelet[2391]: E0423 23:56:35.614758 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://135.181.100.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 23:56:35.616551 kubelet[2391]: I0423 23:56:35.616387 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 23 23:56:35.616551 kubelet[2391]: I0423 23:56:35.616396 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 23 23:56:35.616551 kubelet[2391]: I0423 23:56:35.616408 2391 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:56:35.618822 kubelet[2391]: I0423 23:56:35.618810 2391 policy_none.go:49] "None policy: Start" Apr 23 23:56:35.618875 kubelet[2391]: I0423 23:56:35.618869 2391 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 23 23:56:35.618974 kubelet[2391]: I0423 23:56:35.618967 2391 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 23 23:56:35.620165 kubelet[2391]: I0423 23:56:35.620155 2391 policy_none.go:47] "Start" Apr 23 23:56:35.623967 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 23 23:56:35.634353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 23 23:56:35.637840 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 23 23:56:35.646244 kubelet[2391]: E0423 23:56:35.646215 2391 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:56:35.646843 kubelet[2391]: I0423 23:56:35.646822 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:56:35.647185 kubelet[2391]: I0423 23:56:35.646844 2391 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:56:35.648280 kubelet[2391]: I0423 23:56:35.648075 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:56:35.648765 kubelet[2391]: E0423 23:56:35.648650 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 23 23:56:35.648832 kubelet[2391]: E0423 23:56:35.648824 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:35.729180 systemd[1]: Created slice kubepods-burstable-pod1e0adc82c107c022abf0e895351d25c4.slice - libcontainer container kubepods-burstable-pod1e0adc82c107c022abf0e895351d25c4.slice. 
Apr 23 23:56:35.750981 kubelet[2391]: I0423 23:56:35.750042 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.750981 kubelet[2391]: E0423 23:56:35.750683 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.751361 kubelet[2391]: E0423 23:56:35.751268 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.100.135:6443/api/v1/nodes\": dial tcp 135.181.100.135:6443: connect: connection refused" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.757262 systemd[1]: Created slice kubepods-burstable-pod507a7d4e90421d912f76ae68126d2f3f.slice - libcontainer container kubepods-burstable-pod507a7d4e90421d912f76ae68126d2f3f.slice. Apr 23 23:56:35.762726 kubelet[2391]: E0423 23:56:35.762666 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.767742 systemd[1]: Created slice kubepods-burstable-pode97aae3382cc12f625cd62065e8560d0.slice - libcontainer container kubepods-burstable-pode97aae3382cc12f625cd62065e8560d0.slice. 
Apr 23 23:56:35.771166 kubelet[2391]: E0423 23:56:35.771130 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783392 kubelet[2391]: I0423 23:56:35.783316 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e0adc82c107c022abf0e895351d25c4-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-19674e2966\" (UID: \"1e0adc82c107c022abf0e895351d25c4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783392 kubelet[2391]: I0423 23:56:35.783357 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e0adc82c107c022abf0e895351d25c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-19674e2966\" (UID: \"1e0adc82c107c022abf0e895351d25c4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783392 kubelet[2391]: I0423 23:56:35.783389 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783641 kubelet[2391]: I0423 23:56:35.783411 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e0adc82c107c022abf0e895351d25c4-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-19674e2966\" (UID: \"1e0adc82c107c022abf0e895351d25c4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783641 kubelet[2391]: I0423 
23:56:35.783433 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783641 kubelet[2391]: I0423 23:56:35.783453 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783641 kubelet[2391]: I0423 23:56:35.783475 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783641 kubelet[2391]: I0423 23:56:35.783498 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.783844 kubelet[2391]: I0423 23:56:35.783521 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e97aae3382cc12f625cd62065e8560d0-kubeconfig\") pod 
\"kube-scheduler-ci-4459-2-4-n-19674e2966\" (UID: \"e97aae3382cc12f625cd62065e8560d0\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.785075 kubelet[2391]: E0423 23:56:35.784961 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-19674e2966?timeout=10s\": dial tcp 135.181.100.135:6443: connect: connection refused" interval="400ms" Apr 23 23:56:35.956022 kubelet[2391]: I0423 23:56:35.955414 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:35.956022 kubelet[2391]: E0423 23:56:35.955824 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.100.135:6443/api/v1/nodes\": dial tcp 135.181.100.135:6443: connect: connection refused" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:36.055857 containerd[1624]: time="2026-04-23T23:56:36.055750270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-19674e2966,Uid:1e0adc82c107c022abf0e895351d25c4,Namespace:kube-system,Attempt:0,}" Apr 23 23:56:36.066095 containerd[1624]: time="2026-04-23T23:56:36.066026084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-19674e2966,Uid:507a7d4e90421d912f76ae68126d2f3f,Namespace:kube-system,Attempt:0,}" Apr 23 23:56:36.075010 containerd[1624]: time="2026-04-23T23:56:36.074955048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-19674e2966,Uid:e97aae3382cc12f625cd62065e8560d0,Namespace:kube-system,Attempt:0,}" Apr 23 23:56:36.186145 kubelet[2391]: E0423 23:56:36.186066 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.100.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-19674e2966?timeout=10s\": dial tcp 135.181.100.135:6443: 
connect: connection refused" interval="800ms" Apr 23 23:56:36.359545 kubelet[2391]: I0423 23:56:36.359070 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:36.360437 kubelet[2391]: E0423 23:56:36.359870 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.100.135:6443/api/v1/nodes\": dial tcp 135.181.100.135:6443: connect: connection refused" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:36.448661 kubelet[2391]: E0423 23:56:36.448587 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.100.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 23:56:36.549960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726962237.mount: Deactivated successfully. 
Apr 23 23:56:36.556372 containerd[1624]: time="2026-04-23T23:56:36.556304839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:56:36.558391 containerd[1624]: time="2026-04-23T23:56:36.558340759Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:56:36.560445 containerd[1624]: time="2026-04-23T23:56:36.560400470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 23 23:56:36.561303 containerd[1624]: time="2026-04-23T23:56:36.561252981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 23 23:56:36.564933 containerd[1624]: time="2026-04-23T23:56:36.563855722Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:56:36.565331 containerd[1624]: time="2026-04-23T23:56:36.565296782Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:56:36.565509 containerd[1624]: time="2026-04-23T23:56:36.565463002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 23 23:56:36.570556 containerd[1624]: time="2026-04-23T23:56:36.570493104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 
23:56:36.573004 containerd[1624]: time="2026-04-23T23:56:36.572944345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 496.358276ms" Apr 23 23:56:36.575701 containerd[1624]: time="2026-04-23T23:56:36.575384177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.680492ms" Apr 23 23:56:36.576460 containerd[1624]: time="2026-04-23T23:56:36.576361387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 517.933866ms" Apr 23 23:56:36.609432 containerd[1624]: time="2026-04-23T23:56:36.609375261Z" level=info msg="connecting to shim 09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d" address="unix:///run/containerd/s/9215d84004655873ab6fedd0dc7c0bcbdf056c24fce3055afef9272324e95604" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:56:36.630703 containerd[1624]: time="2026-04-23T23:56:36.630540079Z" level=info msg="connecting to shim 11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787" address="unix:///run/containerd/s/fcbc11bd2c0111f7394e7ff627dd451dbae91a9c01cab6d9fafc4c55f40c8ed1" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:56:36.633299 containerd[1624]: time="2026-04-23T23:56:36.633266611Z" level=info msg="connecting to shim 
c76db75ec23ea80be2b0c137abf3a2496809767423bde7ccded356c9eb6e6fe5" address="unix:///run/containerd/s/6565ef4ebe5e1b525765e1c2fff56899c8f13316ebe8d851482ecc12bee2f62f" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:56:36.648214 systemd[1]: Started cri-containerd-09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d.scope - libcontainer container 09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d. Apr 23 23:56:36.661028 systemd[1]: Started cri-containerd-c76db75ec23ea80be2b0c137abf3a2496809767423bde7ccded356c9eb6e6fe5.scope - libcontainer container c76db75ec23ea80be2b0c137abf3a2496809767423bde7ccded356c9eb6e6fe5. Apr 23 23:56:36.666009 systemd[1]: Started cri-containerd-11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787.scope - libcontainer container 11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787. Apr 23 23:56:36.730040 containerd[1624]: time="2026-04-23T23:56:36.729959461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-19674e2966,Uid:e97aae3382cc12f625cd62065e8560d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d\"" Apr 23 23:56:36.734494 containerd[1624]: time="2026-04-23T23:56:36.734430503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-19674e2966,Uid:507a7d4e90421d912f76ae68126d2f3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787\"" Apr 23 23:56:36.735664 containerd[1624]: time="2026-04-23T23:56:36.735584073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-19674e2966,Uid:1e0adc82c107c022abf0e895351d25c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c76db75ec23ea80be2b0c137abf3a2496809767423bde7ccded356c9eb6e6fe5\"" Apr 23 23:56:36.738062 containerd[1624]: time="2026-04-23T23:56:36.738042754Z" level=info 
msg="CreateContainer within sandbox \"09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 23 23:56:36.742517 containerd[1624]: time="2026-04-23T23:56:36.742144046Z" level=info msg="CreateContainer within sandbox \"11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 23 23:56:36.745011 containerd[1624]: time="2026-04-23T23:56:36.744988867Z" level=info msg="CreateContainer within sandbox \"c76db75ec23ea80be2b0c137abf3a2496809767423bde7ccded356c9eb6e6fe5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 23 23:56:36.747036 containerd[1624]: time="2026-04-23T23:56:36.747016558Z" level=info msg="Container 1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:56:36.757420 containerd[1624]: time="2026-04-23T23:56:36.757370332Z" level=info msg="Container 7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:56:36.760550 containerd[1624]: time="2026-04-23T23:56:36.760198313Z" level=info msg="Container 79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:56:36.762862 containerd[1624]: time="2026-04-23T23:56:36.762839415Z" level=info msg="CreateContainer within sandbox \"09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe\"" Apr 23 23:56:36.763554 containerd[1624]: time="2026-04-23T23:56:36.763454875Z" level=info msg="StartContainer for \"1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe\"" Apr 23 23:56:36.764450 containerd[1624]: time="2026-04-23T23:56:36.764426585Z" level=info msg="connecting to shim 
1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe" address="unix:///run/containerd/s/9215d84004655873ab6fedd0dc7c0bcbdf056c24fce3055afef9272324e95604" protocol=ttrpc version=3 Apr 23 23:56:36.769989 containerd[1624]: time="2026-04-23T23:56:36.769959938Z" level=info msg="CreateContainer within sandbox \"11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08\"" Apr 23 23:56:36.770557 containerd[1624]: time="2026-04-23T23:56:36.770541688Z" level=info msg="StartContainer for \"7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08\"" Apr 23 23:56:36.771518 containerd[1624]: time="2026-04-23T23:56:36.771289058Z" level=info msg="connecting to shim 7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08" address="unix:///run/containerd/s/fcbc11bd2c0111f7394e7ff627dd451dbae91a9c01cab6d9fafc4c55f40c8ed1" protocol=ttrpc version=3 Apr 23 23:56:36.774148 containerd[1624]: time="2026-04-23T23:56:36.774114849Z" level=info msg="CreateContainer within sandbox \"c76db75ec23ea80be2b0c137abf3a2496809767423bde7ccded356c9eb6e6fe5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888\"" Apr 23 23:56:36.774624 containerd[1624]: time="2026-04-23T23:56:36.774611779Z" level=info msg="StartContainer for \"79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888\"" Apr 23 23:56:36.777394 containerd[1624]: time="2026-04-23T23:56:36.777372031Z" level=info msg="connecting to shim 79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888" address="unix:///run/containerd/s/6565ef4ebe5e1b525765e1c2fff56899c8f13316ebe8d851482ecc12bee2f62f" protocol=ttrpc version=3 Apr 23 23:56:36.789177 systemd[1]: Started cri-containerd-1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe.scope - 
libcontainer container 1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe. Apr 23 23:56:36.796103 kubelet[2391]: E0423 23:56:36.796066 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.100.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-19674e2966&limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 23:56:36.797126 systemd[1]: Started cri-containerd-79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888.scope - libcontainer container 79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888. Apr 23 23:56:36.801554 systemd[1]: Started cri-containerd-7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08.scope - libcontainer container 7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08. Apr 23 23:56:36.858324 containerd[1624]: time="2026-04-23T23:56:36.858295234Z" level=info msg="StartContainer for \"1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe\" returns successfully" Apr 23 23:56:36.861237 kubelet[2391]: E0423 23:56:36.861208 2391 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.100.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.100.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 23:56:36.868441 containerd[1624]: time="2026-04-23T23:56:36.868333739Z" level=info msg="StartContainer for \"79acf0c38d597b518aa8714d344a82fccf86e364593dac2ac3756b535a132888\" returns successfully" Apr 23 23:56:36.889996 containerd[1624]: time="2026-04-23T23:56:36.889619687Z" level=info msg="StartContainer for \"7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08\" returns successfully" Apr 23 23:56:37.162902 kubelet[2391]: 
I0423 23:56:37.162331 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:37.630640 kubelet[2391]: E0423 23:56:37.630514 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:37.634026 kubelet[2391]: E0423 23:56:37.633984 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:37.635641 kubelet[2391]: E0423 23:56:37.635628 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:37.673321 kubelet[2391]: E0423 23:56:37.673285 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:37.753719 kubelet[2391]: I0423 23:56:37.753683 2391 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:37.753719 kubelet[2391]: E0423 23:56:37.753713 2391 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-19674e2966\": node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:37.762821 kubelet[2391]: E0423 23:56:37.762788 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:37.863792 kubelet[2391]: E0423 23:56:37.863715 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:37.965087 kubelet[2391]: E0423 23:56:37.964925 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.065815 kubelet[2391]: E0423 23:56:38.065743 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.167097 kubelet[2391]: E0423 23:56:38.167031 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.267970 kubelet[2391]: E0423 23:56:38.267774 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.368936 kubelet[2391]: E0423 23:56:38.368847 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.470001 kubelet[2391]: E0423 23:56:38.469941 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.570661 kubelet[2391]: E0423 23:56:38.570493 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.640716 kubelet[2391]: E0423 23:56:38.640671 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:38.641610 kubelet[2391]: E0423 23:56:38.641573 2391 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-19674e2966\" not found" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:38.671420 kubelet[2391]: E0423 23:56:38.671379 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.771535 kubelet[2391]: E0423 23:56:38.771482 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.872069 kubelet[2391]: E0423 23:56:38.871954 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:38.972640 kubelet[2391]: E0423 23:56:38.972604 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:39.073546 kubelet[2391]: E0423 23:56:39.073507 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:39.174243 kubelet[2391]: E0423 23:56:39.174007 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:39.274707 kubelet[2391]: E0423 23:56:39.274646 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:39.375287 kubelet[2391]: E0423 23:56:39.375228 2391 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-19674e2966\" not found" Apr 23 23:56:39.483046 kubelet[2391]: I0423 23:56:39.482374 2391 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-19674e2966" Apr 23 23:56:39.493199 kubelet[2391]: I0423 23:56:39.493153 2391 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:39.497788 kubelet[2391]: I0423 23:56:39.497538 2391 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:39.571424 kubelet[2391]: I0423 23:56:39.571347 2391 apiserver.go:52] "Watching apiserver" Apr 23 23:56:39.583537 kubelet[2391]: I0423 23:56:39.583447 2391 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 23 23:56:39.744327 
systemd[1]: Reload requested from client PID 2676 ('systemctl') (unit session-7.scope)... Apr 23 23:56:39.744356 systemd[1]: Reloading... Apr 23 23:56:39.888903 zram_generator::config[2726]: No configuration found. Apr 23 23:56:40.083561 systemd[1]: Reloading finished in 338 ms. Apr 23 23:56:40.108227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:56:40.127973 systemd[1]: kubelet.service: Deactivated successfully. Apr 23 23:56:40.128230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:56:40.128286 systemd[1]: kubelet.service: Consumed 673ms CPU time, 124.9M memory peak. Apr 23 23:56:40.130298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:56:40.281688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:56:40.296616 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:56:40.335128 kubelet[2771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 23 23:56:40.335128 kubelet[2771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 23 23:56:40.335428 kubelet[2771]: I0423 23:56:40.335356 2771 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 23:56:40.341400 kubelet[2771]: I0423 23:56:40.341352 2771 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 23 23:56:40.341400 kubelet[2771]: I0423 23:56:40.341375 2771 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 23:56:40.341400 kubelet[2771]: I0423 23:56:40.341399 2771 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 23 23:56:40.341400 kubelet[2771]: I0423 23:56:40.341408 2771 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 23:56:40.341616 kubelet[2771]: I0423 23:56:40.341596 2771 server.go:956] "Client rotation is on, will bootstrap in background" Apr 23 23:56:40.344909 kubelet[2771]: I0423 23:56:40.343105 2771 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 23 23:56:40.346123 kubelet[2771]: I0423 23:56:40.346101 2771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 23 23:56:40.348079 kubelet[2771]: I0423 23:56:40.348050 2771 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 23:56:40.351310 kubelet[2771]: I0423 23:56:40.351265 2771 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 23 23:56:40.351524 kubelet[2771]: I0423 23:56:40.351493 2771 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 23:56:40.351634 kubelet[2771]: I0423 23:56:40.351519 2771 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-19674e2966","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 23:56:40.351634 kubelet[2771]: I0423 23:56:40.351631 2771 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 
23:56:40.351712 kubelet[2771]: I0423 23:56:40.351639 2771 container_manager_linux.go:306] "Creating device plugin manager" Apr 23 23:56:40.351712 kubelet[2771]: I0423 23:56:40.351658 2771 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 23 23:56:40.351933 kubelet[2771]: I0423 23:56:40.351835 2771 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:56:40.352196 kubelet[2771]: I0423 23:56:40.352157 2771 kubelet.go:475] "Attempting to sync node with API server" Apr 23 23:56:40.352196 kubelet[2771]: I0423 23:56:40.352174 2771 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 23:56:40.352247 kubelet[2771]: I0423 23:56:40.352203 2771 kubelet.go:387] "Adding apiserver pod source" Apr 23 23:56:40.352247 kubelet[2771]: I0423 23:56:40.352218 2771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 23:56:40.355399 kubelet[2771]: I0423 23:56:40.355347 2771 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 23 23:56:40.355937 kubelet[2771]: I0423 23:56:40.355861 2771 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 23:56:40.355982 kubelet[2771]: I0423 23:56:40.355952 2771 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 23 23:56:40.360592 kubelet[2771]: I0423 23:56:40.360565 2771 server.go:1262] "Started kubelet" Apr 23 23:56:40.367054 kubelet[2771]: I0423 23:56:40.366940 2771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 23:56:40.372720 kubelet[2771]: I0423 23:56:40.372623 2771 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 23:56:40.376940 kubelet[2771]: I0423 23:56:40.376270 2771 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 23:56:40.376940 kubelet[2771]: I0423 23:56:40.376368 2771 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 23 23:56:40.376940 kubelet[2771]: I0423 23:56:40.376723 2771 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 23:56:40.378462 kubelet[2771]: E0423 23:56:40.378446 2771 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 23 23:56:40.380816 kubelet[2771]: I0423 23:56:40.380787 2771 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 23 23:56:40.381563 kubelet[2771]: I0423 23:56:40.381551 2771 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 23 23:56:40.382992 kubelet[2771]: I0423 23:56:40.382972 2771 server.go:310] "Adding debug handlers to kubelet server" Apr 23 23:56:40.383865 kubelet[2771]: I0423 23:56:40.383828 2771 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 23 23:56:40.383973 kubelet[2771]: I0423 23:56:40.383959 2771 reconciler.go:29] "Reconciler: start to sync state" Apr 23 23:56:40.389468 kubelet[2771]: I0423 23:56:40.389445 2771 factory.go:223] Registration of the systemd container factory successfully Apr 23 23:56:40.389680 kubelet[2771]: I0423 23:56:40.389667 2771 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 23 23:56:40.389800 kubelet[2771]: I0423 23:56:40.389776 2771 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 23 23:56:40.390759 kubelet[2771]: I0423 23:56:40.390741 2771 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 23 23:56:40.390759 kubelet[2771]: I0423 23:56:40.390759 2771 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 23 23:56:40.390828 kubelet[2771]: I0423 23:56:40.390780 2771 kubelet.go:2428] "Starting kubelet main sync loop" Apr 23 23:56:40.390828 kubelet[2771]: E0423 23:56:40.390810 2771 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 23 23:56:40.392256 kubelet[2771]: I0423 23:56:40.391155 2771 factory.go:223] Registration of the containerd container factory successfully Apr 23 23:56:40.434917 kubelet[2771]: I0423 23:56:40.434864 2771 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 23 23:56:40.434917 kubelet[2771]: I0423 23:56:40.434899 2771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 23 23:56:40.434917 kubelet[2771]: I0423 23:56:40.434929 2771 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:56:40.435074 kubelet[2771]: I0423 23:56:40.435028 2771 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 23 23:56:40.435074 kubelet[2771]: I0423 23:56:40.435035 2771 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 23 23:56:40.435074 kubelet[2771]: I0423 23:56:40.435048 2771 policy_none.go:49] "None policy: Start" Apr 23 23:56:40.435074 kubelet[2771]: I0423 23:56:40.435055 2771 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 23 23:56:40.435074 kubelet[2771]: I0423 23:56:40.435063 2771 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 23 23:56:40.435151 kubelet[2771]: I0423 23:56:40.435119 2771 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 23 23:56:40.435151 kubelet[2771]: I0423 23:56:40.435124 2771 policy_none.go:47] "Start" Apr 23 23:56:40.439568 kubelet[2771]: E0423 23:56:40.439338 2771 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:56:40.439568 kubelet[2771]: I0423 23:56:40.439497 2771 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:56:40.439568 kubelet[2771]: I0423 23:56:40.439506 2771 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:56:40.439962 kubelet[2771]: I0423 23:56:40.439943 2771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:56:40.444859 kubelet[2771]: E0423 23:56:40.444831 2771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 23 23:56:40.492450 kubelet[2771]: I0423 23:56:40.492063 2771 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.492450 kubelet[2771]: I0423 23:56:40.492187 2771 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.492450 kubelet[2771]: I0423 23:56:40.492074 2771 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.499124 kubelet[2771]: E0423 23:56:40.499087 2771 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-19674e2966\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.501720 kubelet[2771]: E0423 23:56:40.501667 2771 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-19674e2966\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.501831 kubelet[2771]: E0423 23:56:40.501668 2771 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" already exists" 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.547950 kubelet[2771]: I0423 23:56:40.547875 2771 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.557061 kubelet[2771]: I0423 23:56:40.556933 2771 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.557061 kubelet[2771]: I0423 23:56:40.557068 2771 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.584520 kubelet[2771]: I0423 23:56:40.584459 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e0adc82c107c022abf0e895351d25c4-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-19674e2966\" (UID: \"1e0adc82c107c022abf0e895351d25c4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.685681 kubelet[2771]: I0423 23:56:40.685518 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e0adc82c107c022abf0e895351d25c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-19674e2966\" (UID: \"1e0adc82c107c022abf0e895351d25c4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.685681 kubelet[2771]: I0423 23:56:40.685589 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.685681 kubelet[2771]: I0423 23:56:40.685633 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.685681 kubelet[2771]: I0423 23:56:40.685657 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.685681 kubelet[2771]: I0423 23:56:40.685684 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.686004 kubelet[2771]: I0423 23:56:40.685709 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e97aae3382cc12f625cd62065e8560d0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-19674e2966\" (UID: \"e97aae3382cc12f625cd62065e8560d0\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.686004 kubelet[2771]: I0423 23:56:40.685755 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e0adc82c107c022abf0e895351d25c4-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-19674e2966\" (UID: \"1e0adc82c107c022abf0e895351d25c4\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:40.686004 
kubelet[2771]: I0423 23:56:40.685779 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/507a7d4e90421d912f76ae68126d2f3f-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-19674e2966\" (UID: \"507a7d4e90421d912f76ae68126d2f3f\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" Apr 23 23:56:41.353543 kubelet[2771]: I0423 23:56:41.353229 2771 apiserver.go:52] "Watching apiserver" Apr 23 23:56:41.383997 kubelet[2771]: I0423 23:56:41.383947 2771 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 23 23:56:41.424342 kubelet[2771]: I0423 23:56:41.424069 2771 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:41.431365 kubelet[2771]: E0423 23:56:41.431339 2771 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-19674e2966\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" Apr 23 23:56:41.451540 kubelet[2771]: I0423 23:56:41.451264 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-19674e2966" podStartSLOduration=2.451252347 podStartE2EDuration="2.451252347s" podCreationTimestamp="2026-04-23 23:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:56:41.445099205 +0000 UTC m=+1.144679188" watchObservedRunningTime="2026-04-23 23:56:41.451252347 +0000 UTC m=+1.150832330" Apr 23 23:56:41.458685 kubelet[2771]: I0423 23:56:41.458489 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-19674e2966" podStartSLOduration=2.45848089 podStartE2EDuration="2.45848089s" podCreationTimestamp="2026-04-23 23:56:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:56:41.451791737 +0000 UTC m=+1.151371720" watchObservedRunningTime="2026-04-23 23:56:41.45848089 +0000 UTC m=+1.158060873" Apr 23 23:56:41.468575 kubelet[2771]: I0423 23:56:41.468455 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-19674e2966" podStartSLOduration=2.468445464 podStartE2EDuration="2.468445464s" podCreationTimestamp="2026-04-23 23:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:56:41.459157511 +0000 UTC m=+1.158737504" watchObservedRunningTime="2026-04-23 23:56:41.468445464 +0000 UTC m=+1.168025447" Apr 23 23:56:45.038016 kubelet[2771]: I0423 23:56:45.037978 2771 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 23 23:56:45.038772 containerd[1624]: time="2026-04-23T23:56:45.038736271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 23 23:56:45.039197 kubelet[2771]: I0423 23:56:45.039132 2771 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 23 23:56:46.053842 systemd[1]: Created slice kubepods-besteffort-pod2ccf3b0f_34c9_4bb0_b73f_93bcffbb9089.slice - libcontainer container kubepods-besteffort-pod2ccf3b0f_34c9_4bb0_b73f_93bcffbb9089.slice. 
Apr 23 23:56:46.121483 kubelet[2771]: I0423 23:56:46.121413 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089-xtables-lock\") pod \"kube-proxy-9t9jj\" (UID: \"2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089\") " pod="kube-system/kube-proxy-9t9jj" Apr 23 23:56:46.121483 kubelet[2771]: I0423 23:56:46.121471 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089-lib-modules\") pod \"kube-proxy-9t9jj\" (UID: \"2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089\") " pod="kube-system/kube-proxy-9t9jj" Apr 23 23:56:46.121483 kubelet[2771]: I0423 23:56:46.121499 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089-kube-proxy\") pod \"kube-proxy-9t9jj\" (UID: \"2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089\") " pod="kube-system/kube-proxy-9t9jj" Apr 23 23:56:46.122272 kubelet[2771]: I0423 23:56:46.121552 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqsh\" (UniqueName: \"kubernetes.io/projected/2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089-kube-api-access-bvqsh\") pod \"kube-proxy-9t9jj\" (UID: \"2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089\") " pod="kube-system/kube-proxy-9t9jj" Apr 23 23:56:46.285913 systemd[1]: Created slice kubepods-besteffort-pod1c50a3df_997c_4e82_b06b_cd945cbcc1a7.slice - libcontainer container kubepods-besteffort-pod1c50a3df_997c_4e82_b06b_cd945cbcc1a7.slice. 
Apr 23 23:56:46.323715 kubelet[2771]: I0423 23:56:46.323555 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvzp7\" (UniqueName: \"kubernetes.io/projected/1c50a3df-997c-4e82-b06b-cd945cbcc1a7-kube-api-access-rvzp7\") pod \"tigera-operator-6fb8d665dd-b8z89\" (UID: \"1c50a3df-997c-4e82-b06b-cd945cbcc1a7\") " pod="tigera-operator/tigera-operator-6fb8d665dd-b8z89" Apr 23 23:56:46.323715 kubelet[2771]: I0423 23:56:46.323615 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c50a3df-997c-4e82-b06b-cd945cbcc1a7-var-lib-calico\") pod \"tigera-operator-6fb8d665dd-b8z89\" (UID: \"1c50a3df-997c-4e82-b06b-cd945cbcc1a7\") " pod="tigera-operator/tigera-operator-6fb8d665dd-b8z89" Apr 23 23:56:46.365804 containerd[1624]: time="2026-04-23T23:56:46.365721184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9t9jj,Uid:2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089,Namespace:kube-system,Attempt:0,}" Apr 23 23:56:46.404055 containerd[1624]: time="2026-04-23T23:56:46.403983700Z" level=info msg="connecting to shim c7ab2f36ee2680dee1a858e0db47a9ecdcf9d5fead422dc79a087ed1924de94f" address="unix:///run/containerd/s/a07430b47dcf3928a719e1ab4b5120d8f49921dd7616ad272c27efbca6924870" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:56:46.452017 systemd[1]: Started cri-containerd-c7ab2f36ee2680dee1a858e0db47a9ecdcf9d5fead422dc79a087ed1924de94f.scope - libcontainer container c7ab2f36ee2680dee1a858e0db47a9ecdcf9d5fead422dc79a087ed1924de94f. 
Apr 23 23:56:46.476759 containerd[1624]: time="2026-04-23T23:56:46.476716940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9t9jj,Uid:2ccf3b0f-34c9-4bb0-b73f-93bcffbb9089,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7ab2f36ee2680dee1a858e0db47a9ecdcf9d5fead422dc79a087ed1924de94f\"" Apr 23 23:56:46.481606 containerd[1624]: time="2026-04-23T23:56:46.481549652Z" level=info msg="CreateContainer within sandbox \"c7ab2f36ee2680dee1a858e0db47a9ecdcf9d5fead422dc79a087ed1924de94f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 23 23:56:46.493058 containerd[1624]: time="2026-04-23T23:56:46.493028427Z" level=info msg="Container fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:56:46.500659 containerd[1624]: time="2026-04-23T23:56:46.500186780Z" level=info msg="CreateContainer within sandbox \"c7ab2f36ee2680dee1a858e0db47a9ecdcf9d5fead422dc79a087ed1924de94f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f\"" Apr 23 23:56:46.500728 containerd[1624]: time="2026-04-23T23:56:46.500663220Z" level=info msg="StartContainer for \"fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f\"" Apr 23 23:56:46.502924 containerd[1624]: time="2026-04-23T23:56:46.502873621Z" level=info msg="connecting to shim fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f" address="unix:///run/containerd/s/a07430b47dcf3928a719e1ab4b5120d8f49921dd7616ad272c27efbca6924870" protocol=ttrpc version=3 Apr 23 23:56:46.520021 systemd[1]: Started cri-containerd-fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f.scope - libcontainer container fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f. 
Apr 23 23:56:46.576210 containerd[1624]: time="2026-04-23T23:56:46.575793902Z" level=info msg="StartContainer for \"fe6d6080c22311b83678304f799576d1292066cd41e9b6820bd1e158ac215b1f\" returns successfully"
Apr 23 23:56:46.590545 containerd[1624]: time="2026-04-23T23:56:46.590509238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-b8z89,Uid:1c50a3df-997c-4e82-b06b-cd945cbcc1a7,Namespace:tigera-operator,Attempt:0,}"
Apr 23 23:56:46.608850 containerd[1624]: time="2026-04-23T23:56:46.608758055Z" level=info msg="connecting to shim 1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94" address="unix:///run/containerd/s/388ee0235232fa18cd35f0b24e5e4dac77cbf34055406f6c2e15c2b4e7f27ebf" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:56:46.631017 systemd[1]: Started cri-containerd-1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94.scope - libcontainer container 1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94.
Apr 23 23:56:46.669042 containerd[1624]: time="2026-04-23T23:56:46.669012990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-b8z89,Uid:1c50a3df-997c-4e82-b06b-cd945cbcc1a7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94\""
Apr 23 23:56:46.670608 containerd[1624]: time="2026-04-23T23:56:46.670581821Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\""
Apr 23 23:56:47.259621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807497000.mount: Deactivated successfully.
Apr 23 23:56:48.357965 kubelet[2771]: I0423 23:56:48.357654 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9t9jj" podStartSLOduration=2.357634144 podStartE2EDuration="2.357634144s" podCreationTimestamp="2026-04-23 23:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:56:47.451610266 +0000 UTC m=+7.151190319" watchObservedRunningTime="2026-04-23 23:56:48.357634144 +0000 UTC m=+8.057214147"
Apr 23 23:56:48.427574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693904035.mount: Deactivated successfully.
Apr 23 23:56:49.062022 containerd[1624]: time="2026-04-23T23:56:49.061967317Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:56:49.063419 containerd[1624]: time="2026-04-23T23:56:49.063114927Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543"
Apr 23 23:56:49.064241 containerd[1624]: time="2026-04-23T23:56:49.064221228Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:56:49.066067 containerd[1624]: time="2026-04-23T23:56:49.066047199Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:56:49.066447 containerd[1624]: time="2026-04-23T23:56:49.066430919Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 2.395824748s"
Apr 23 23:56:49.066502 containerd[1624]: time="2026-04-23T23:56:49.066492699Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\""
Apr 23 23:56:49.069751 containerd[1624]: time="2026-04-23T23:56:49.069716050Z" level=info msg="CreateContainer within sandbox \"1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 23 23:56:49.077994 containerd[1624]: time="2026-04-23T23:56:49.077972054Z" level=info msg="Container 992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:56:49.081511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406123491.mount: Deactivated successfully.
Apr 23 23:56:49.085463 containerd[1624]: time="2026-04-23T23:56:49.085436917Z" level=info msg="CreateContainer within sandbox \"1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\""
Apr 23 23:56:49.085902 containerd[1624]: time="2026-04-23T23:56:49.085844947Z" level=info msg="StartContainer for \"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\""
Apr 23 23:56:49.086681 containerd[1624]: time="2026-04-23T23:56:49.086666267Z" level=info msg="connecting to shim 992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26" address="unix:///run/containerd/s/388ee0235232fa18cd35f0b24e5e4dac77cbf34055406f6c2e15c2b4e7f27ebf" protocol=ttrpc version=3
Apr 23 23:56:49.104995 systemd[1]: Started cri-containerd-992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26.scope - libcontainer container 992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26.
Apr 23 23:56:49.128338 containerd[1624]: time="2026-04-23T23:56:49.128288985Z" level=info msg="StartContainer for \"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\" returns successfully"
Apr 23 23:56:53.496858 systemd-timesyncd[1501]: Contacted time server 193.203.3.170:123 (2.flatcar.pool.ntp.org).
Apr 23 23:56:53.497018 systemd-timesyncd[1501]: Initial clock synchronization to Thu 2026-04-23 23:56:53.602905 UTC.
Apr 23 23:56:53.500158 kubelet[2771]: I0423 23:56:53.500073 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6fb8d665dd-b8z89" podStartSLOduration=5.103166827 podStartE2EDuration="7.500052905s" podCreationTimestamp="2026-04-23 23:56:46 +0000 UTC" firstStartedPulling="2026-04-23 23:56:46.670343801 +0000 UTC m=+6.369923794" lastFinishedPulling="2026-04-23 23:56:49.067229879 +0000 UTC m=+8.766809872" observedRunningTime="2026-04-23 23:56:49.461390433 +0000 UTC m=+9.160970456" watchObservedRunningTime="2026-04-23 23:56:53.500052905 +0000 UTC m=+13.199632928"
Apr 23 23:56:54.074110 sudo[1831]: pam_unix(sudo:session): session closed for user root
Apr 23 23:56:54.107922 sshd[1830]: Connection closed by 20.229.252.112 port 58510
Apr 23 23:56:54.108081 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Apr 23 23:56:54.112793 systemd[1]: sshd@6-135.181.100.135:22-20.229.252.112:58510.service: Deactivated successfully.
Apr 23 23:56:54.116771 systemd[1]: session-7.scope: Deactivated successfully.
Apr 23 23:56:54.117078 systemd[1]: session-7.scope: Consumed 4.260s CPU time, 231.3M memory peak.
Apr 23 23:56:54.118946 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit.
Apr 23 23:56:54.121117 systemd-logind[1590]: Removed session 7.
Apr 23 23:56:55.994472 systemd[1]: Created slice kubepods-besteffort-pod8d772777_19e3_4c2a_af05_ced9170a2fec.slice - libcontainer container kubepods-besteffort-pod8d772777_19e3_4c2a_af05_ced9170a2fec.slice.
Apr 23 23:56:56.058218 systemd[1]: Created slice kubepods-besteffort-podfd99df0b_6803_40ef_b967_b769a217ee10.slice - libcontainer container kubepods-besteffort-podfd99df0b_6803_40ef_b967_b769a217ee10.slice.
Apr 23 23:56:56.097808 kubelet[2771]: I0423 23:56:56.097745 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-var-lib-calico\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.097808 kubelet[2771]: I0423 23:56:56.097794 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-cni-log-dir\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.097808 kubelet[2771]: I0423 23:56:56.097806 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lj5v\" (UniqueName: \"kubernetes.io/projected/fd99df0b-6803-40ef-b967-b769a217ee10-kube-api-access-9lj5v\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.097808 kubelet[2771]: I0423 23:56:56.097820 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-cni-net-dir\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098426 kubelet[2771]: I0423 23:56:56.097831 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-policysync\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098426 kubelet[2771]: I0423 23:56:56.097842 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-var-run-calico\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098426 kubelet[2771]: I0423 23:56:56.097854 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8d772777-19e3-4c2a-af05-ced9170a2fec-typha-certs\") pod \"calico-typha-75d4d66676-nlk25\" (UID: \"8d772777-19e3-4c2a-af05-ced9170a2fec\") " pod="calico-system/calico-typha-75d4d66676-nlk25"
Apr 23 23:56:56.098426 kubelet[2771]: I0423 23:56:56.097864 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-bpffs\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098426 kubelet[2771]: I0423 23:56:56.097874 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-xtables-lock\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098517 kubelet[2771]: I0423 23:56:56.097948 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-flexvol-driver-host\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098517 kubelet[2771]: I0423 23:56:56.097959 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-lib-modules\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098517 kubelet[2771]: I0423 23:56:56.098040 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd99df0b-6803-40ef-b967-b769a217ee10-tigera-ca-bundle\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098517 kubelet[2771]: I0423 23:56:56.098056 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-cni-bin-dir\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098517 kubelet[2771]: I0423 23:56:56.098068 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-sys-fs\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098612 kubelet[2771]: I0423 23:56:56.098080 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smpj\" (UniqueName: \"kubernetes.io/projected/8d772777-19e3-4c2a-af05-ced9170a2fec-kube-api-access-2smpj\") pod \"calico-typha-75d4d66676-nlk25\" (UID: \"8d772777-19e3-4c2a-af05-ced9170a2fec\") " pod="calico-system/calico-typha-75d4d66676-nlk25"
Apr 23 23:56:56.098612 kubelet[2771]: I0423 23:56:56.098090 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fd99df0b-6803-40ef-b967-b769a217ee10-node-certs\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098612 kubelet[2771]: I0423 23:56:56.098102 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fd99df0b-6803-40ef-b967-b769a217ee10-nodeproc\") pod \"calico-node-k2bgz\" (UID: \"fd99df0b-6803-40ef-b967-b769a217ee10\") " pod="calico-system/calico-node-k2bgz"
Apr 23 23:56:56.098612 kubelet[2771]: I0423 23:56:56.098112 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d772777-19e3-4c2a-af05-ced9170a2fec-tigera-ca-bundle\") pod \"calico-typha-75d4d66676-nlk25\" (UID: \"8d772777-19e3-4c2a-af05-ced9170a2fec\") " pod="calico-system/calico-typha-75d4d66676-nlk25"
Apr 23 23:56:56.167757 kubelet[2771]: E0423 23:56:56.167485 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1"
Apr 23 23:56:56.199075 kubelet[2771]: I0423 23:56:56.199023 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3315e263-4733-4c99-bf02-e78b8d559ae1-registration-dir\") pod \"csi-node-driver-6zj6q\" (UID: \"3315e263-4733-4c99-bf02-e78b8d559ae1\") " pod="calico-system/csi-node-driver-6zj6q"
Apr 23 23:56:56.199222 kubelet[2771]: I0423 23:56:56.199110 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3315e263-4733-4c99-bf02-e78b8d559ae1-kubelet-dir\") pod \"csi-node-driver-6zj6q\" (UID: \"3315e263-4733-4c99-bf02-e78b8d559ae1\") " pod="calico-system/csi-node-driver-6zj6q"
Apr 23 23:56:56.199222 kubelet[2771]: I0423 23:56:56.199136 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3315e263-4733-4c99-bf02-e78b8d559ae1-varrun\") pod \"csi-node-driver-6zj6q\" (UID: \"3315e263-4733-4c99-bf02-e78b8d559ae1\") " pod="calico-system/csi-node-driver-6zj6q"
Apr 23 23:56:56.199222 kubelet[2771]: I0423 23:56:56.199183 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3315e263-4733-4c99-bf02-e78b8d559ae1-socket-dir\") pod \"csi-node-driver-6zj6q\" (UID: \"3315e263-4733-4c99-bf02-e78b8d559ae1\") " pod="calico-system/csi-node-driver-6zj6q"
Apr 23 23:56:56.199222 kubelet[2771]: I0423 23:56:56.199196 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dj8\" (UniqueName: \"kubernetes.io/projected/3315e263-4733-4c99-bf02-e78b8d559ae1-kube-api-access-x6dj8\") pod \"csi-node-driver-6zj6q\" (UID: \"3315e263-4733-4c99-bf02-e78b8d559ae1\") " pod="calico-system/csi-node-driver-6zj6q"
Apr 23 23:56:56.209892 kubelet[2771]: E0423 23:56:56.209747 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.211103 kubelet[2771]: W0423 23:56:56.211033 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.211531 kubelet[2771]: E0423 23:56:56.211514 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.218497 kubelet[2771]: E0423 23:56:56.218479 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.218575 kubelet[2771]: W0423 23:56:56.218565 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.218627 kubelet[2771]: E0423 23:56:56.218619 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.218901 kubelet[2771]: E0423 23:56:56.218891 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.218958 kubelet[2771]: W0423 23:56:56.218950 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.219001 kubelet[2771]: E0423 23:56:56.218985 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.220016 kubelet[2771]: E0423 23:56:56.219978 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.220016 kubelet[2771]: W0423 23:56:56.220013 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.220077 kubelet[2771]: E0423 23:56:56.220038 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.221339 kubelet[2771]: E0423 23:56:56.221241 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.221339 kubelet[2771]: W0423 23:56:56.221259 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.221339 kubelet[2771]: E0423 23:56:56.221273 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.225467 kubelet[2771]: E0423 23:56:56.225448 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.225866 kubelet[2771]: W0423 23:56:56.225540 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.225866 kubelet[2771]: E0423 23:56:56.225557 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.226087 kubelet[2771]: E0423 23:56:56.226077 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.226128 kubelet[2771]: W0423 23:56:56.226120 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.226164 kubelet[2771]: E0423 23:56:56.226157 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.229794 kubelet[2771]: E0423 23:56:56.229770 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.229794 kubelet[2771]: W0423 23:56:56.229790 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.229859 kubelet[2771]: E0423 23:56:56.229807 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.300330 kubelet[2771]: E0423 23:56:56.300142 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.300330 kubelet[2771]: W0423 23:56:56.300171 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.300330 kubelet[2771]: E0423 23:56:56.300194 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.301037 kubelet[2771]: E0423 23:56:56.300845 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.301037 kubelet[2771]: W0423 23:56:56.300872 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.301037 kubelet[2771]: E0423 23:56:56.300916 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.301499 kubelet[2771]: E0423 23:56:56.301382 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.301499 kubelet[2771]: W0423 23:56:56.301400 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.301499 kubelet[2771]: E0423 23:56:56.301415 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.302002 kubelet[2771]: E0423 23:56:56.301707 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.302002 kubelet[2771]: W0423 23:56:56.301721 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.302002 kubelet[2771]: E0423 23:56:56.301732 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.302076 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.305157 kubelet[2771]: W0423 23:56:56.302086 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.302128 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.302520 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.305157 kubelet[2771]: W0423 23:56:56.302530 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.302570 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.303099 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.305157 kubelet[2771]: W0423 23:56:56.303139 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.303151 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.305157 kubelet[2771]: E0423 23:56:56.303523 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.306206 kubelet[2771]: W0423 23:56:56.303534 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.306206 kubelet[2771]: E0423 23:56:56.303571 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.306206 kubelet[2771]: E0423 23:56:56.304058 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.306206 kubelet[2771]: W0423 23:56:56.304070 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.306206 kubelet[2771]: E0423 23:56:56.304083 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.306206 kubelet[2771]: E0423 23:56:56.304443 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.306206 kubelet[2771]: W0423 23:56:56.304672 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.306206 kubelet[2771]: E0423 23:56:56.304685 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.306206 kubelet[2771]: E0423 23:56:56.305277 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.306206 kubelet[2771]: W0423 23:56:56.305293 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.306583 kubelet[2771]: E0423 23:56:56.305305 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.306583 kubelet[2771]: E0423 23:56:56.305798 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.306583 kubelet[2771]: W0423 23:56:56.305812 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.306583 kubelet[2771]: E0423 23:56:56.305827 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.307786 containerd[1624]: time="2026-04-23T23:56:56.307294689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d4d66676-nlk25,Uid:8d772777-19e3-4c2a-af05-ced9170a2fec,Namespace:calico-system,Attempt:0,}"
Apr 23 23:56:56.309831 kubelet[2771]: E0423 23:56:56.307702 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.309831 kubelet[2771]: W0423 23:56:56.307720 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.309831 kubelet[2771]: E0423 23:56:56.307735 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.309831 kubelet[2771]: E0423 23:56:56.308396 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.309831 kubelet[2771]: W0423 23:56:56.308412 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.309831 kubelet[2771]: E0423 23:56:56.308429 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.309831 kubelet[2771]: E0423 23:56:56.309129 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.309831 kubelet[2771]: W0423 23:56:56.309143 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.309831 kubelet[2771]: E0423 23:56:56.309156 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.310364 kubelet[2771]: E0423 23:56:56.310234 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.310364 kubelet[2771]: W0423 23:56:56.310252 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.310364 kubelet[2771]: E0423 23:56:56.310271 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.310667 kubelet[2771]: E0423 23:56:56.310586 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.310667 kubelet[2771]: W0423 23:56:56.310620 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.310721 kubelet[2771]: E0423 23:56:56.310669 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.311301 kubelet[2771]: E0423 23:56:56.311279 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.311301 kubelet[2771]: W0423 23:56:56.311291 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.311301 kubelet[2771]: E0423 23:56:56.311299 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 23 23:56:56.311594 kubelet[2771]: E0423 23:56:56.311576 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 23 23:56:56.311594 kubelet[2771]: W0423 23:56:56.311589 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 23 23:56:56.311744 kubelet[2771]: E0423 23:56:56.311600 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:56.311977 kubelet[2771]: E0423 23:56:56.311948 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.311977 kubelet[2771]: W0423 23:56:56.311961 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.311977 kubelet[2771]: E0423 23:56:56.311967 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:56.312236 kubelet[2771]: E0423 23:56:56.312220 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.312236 kubelet[2771]: W0423 23:56:56.312232 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.312368 kubelet[2771]: E0423 23:56:56.312256 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:56.312497 kubelet[2771]: E0423 23:56:56.312476 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.312497 kubelet[2771]: W0423 23:56:56.312490 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.312533 kubelet[2771]: E0423 23:56:56.312496 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:56.312774 kubelet[2771]: E0423 23:56:56.312757 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.312774 kubelet[2771]: W0423 23:56:56.312769 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.312899 kubelet[2771]: E0423 23:56:56.312777 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:56.313151 kubelet[2771]: E0423 23:56:56.313101 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.313151 kubelet[2771]: W0423 23:56:56.313114 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.313151 kubelet[2771]: E0423 23:56:56.313177 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:56.314064 kubelet[2771]: E0423 23:56:56.314023 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.314064 kubelet[2771]: W0423 23:56:56.314037 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.314064 kubelet[2771]: E0423 23:56:56.314044 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:56.318738 kubelet[2771]: E0423 23:56:56.318702 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:56.318738 kubelet[2771]: W0423 23:56:56.318712 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:56.318738 kubelet[2771]: E0423 23:56:56.318720 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:56.330743 containerd[1624]: time="2026-04-23T23:56:56.330426575Z" level=info msg="connecting to shim 9b7a2dde8dd3a8a79366722a5c17a9af7e80a669826f3161110c0da40743f867" address="unix:///run/containerd/s/cfb421162b19a1885fedef5fe09a69c20dd3b5b851b0f725c2049391bb82ecaf" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:56:56.358091 systemd[1]: Started cri-containerd-9b7a2dde8dd3a8a79366722a5c17a9af7e80a669826f3161110c0da40743f867.scope - libcontainer container 9b7a2dde8dd3a8a79366722a5c17a9af7e80a669826f3161110c0da40743f867. 
Apr 23 23:56:56.367796 containerd[1624]: time="2026-04-23T23:56:56.367709150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k2bgz,Uid:fd99df0b-6803-40ef-b967-b769a217ee10,Namespace:calico-system,Attempt:0,}" Apr 23 23:56:56.402612 containerd[1624]: time="2026-04-23T23:56:56.402572636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75d4d66676-nlk25,Uid:8d772777-19e3-4c2a-af05-ced9170a2fec,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b7a2dde8dd3a8a79366722a5c17a9af7e80a669826f3161110c0da40743f867\"" Apr 23 23:56:56.404930 containerd[1624]: time="2026-04-23T23:56:56.404835020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 23 23:56:56.406720 containerd[1624]: time="2026-04-23T23:56:56.406688303Z" level=info msg="connecting to shim 8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74" address="unix:///run/containerd/s/f24e6b433e7e7605d2eaa60d4b32d703dfac644a4b87286525d29d96e271f27e" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:56:56.430013 systemd[1]: Started cri-containerd-8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74.scope - libcontainer container 8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74. Apr 23 23:56:56.459642 containerd[1624]: time="2026-04-23T23:56:56.459602610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k2bgz,Uid:fd99df0b-6803-40ef-b967-b769a217ee10,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\"" Apr 23 23:56:58.130300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230358764.mount: Deactivated successfully. 
Apr 23 23:56:58.391552 kubelet[2771]: E0423 23:56:58.391435 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:56:58.514595 containerd[1624]: time="2026-04-23T23:56:58.514544830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:58.515610 containerd[1624]: time="2026-04-23T23:56:58.515467674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=35813139" Apr 23 23:56:58.516470 containerd[1624]: time="2026-04-23T23:56:58.516452413Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:58.518024 containerd[1624]: time="2026-04-23T23:56:58.518008440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:56:58.518377 containerd[1624]: time="2026-04-23T23:56:58.518357727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 2.113493379s" Apr 23 23:56:58.518418 containerd[1624]: time="2026-04-23T23:56:58.518380919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference 
\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 23 23:56:58.519873 containerd[1624]: time="2026-04-23T23:56:58.519385683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 23 23:56:58.533441 containerd[1624]: time="2026-04-23T23:56:58.533409133Z" level=info msg="CreateContainer within sandbox \"9b7a2dde8dd3a8a79366722a5c17a9af7e80a669826f3161110c0da40743f867\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 23 23:56:58.542914 containerd[1624]: time="2026-04-23T23:56:58.540909166Z" level=info msg="Container 12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:56:58.549405 containerd[1624]: time="2026-04-23T23:56:58.549370412Z" level=info msg="CreateContainer within sandbox \"9b7a2dde8dd3a8a79366722a5c17a9af7e80a669826f3161110c0da40743f867\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9\"" Apr 23 23:56:58.550181 containerd[1624]: time="2026-04-23T23:56:58.550156762Z" level=info msg="StartContainer for \"12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9\"" Apr 23 23:56:58.551552 containerd[1624]: time="2026-04-23T23:56:58.551526223Z" level=info msg="connecting to shim 12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9" address="unix:///run/containerd/s/cfb421162b19a1885fedef5fe09a69c20dd3b5b851b0f725c2049391bb82ecaf" protocol=ttrpc version=3 Apr 23 23:56:58.576637 systemd[1]: Started cri-containerd-12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9.scope - libcontainer container 12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9. 
Apr 23 23:56:58.632163 containerd[1624]: time="2026-04-23T23:56:58.632133079Z" level=info msg="StartContainer for \"12170ff70ab9530ff1a0c5d06eba5b2496ffd2ced5902ec4b4f4d67658baffe9\" returns successfully" Apr 23 23:56:59.502858 kubelet[2771]: E0423 23:56:59.502732 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.502858 kubelet[2771]: W0423 23:56:59.502768 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.506202 kubelet[2771]: E0423 23:56:59.502956 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.506202 kubelet[2771]: E0423 23:56:59.504730 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.506202 kubelet[2771]: W0423 23:56:59.504789 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.506202 kubelet[2771]: E0423 23:56:59.505287 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.507200 kubelet[2771]: E0423 23:56:59.507147 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.507200 kubelet[2771]: W0423 23:56:59.507187 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.507344 kubelet[2771]: E0423 23:56:59.507218 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.507862 kubelet[2771]: E0423 23:56:59.507824 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.507862 kubelet[2771]: W0423 23:56:59.507850 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.508041 kubelet[2771]: E0423 23:56:59.507870 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.508401 kubelet[2771]: E0423 23:56:59.508372 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.508401 kubelet[2771]: W0423 23:56:59.508395 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.509364 kubelet[2771]: E0423 23:56:59.509214 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.509880 kubelet[2771]: E0423 23:56:59.509830 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.509880 kubelet[2771]: W0423 23:56:59.509859 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.509880 kubelet[2771]: E0423 23:56:59.509881 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.510629 kubelet[2771]: E0423 23:56:59.510602 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.510814 kubelet[2771]: W0423 23:56:59.510717 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.510814 kubelet[2771]: E0423 23:56:59.510752 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.511775 kubelet[2771]: E0423 23:56:59.511743 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.511775 kubelet[2771]: W0423 23:56:59.511768 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.512294 kubelet[2771]: E0423 23:56:59.511788 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.512567 kubelet[2771]: E0423 23:56:59.512530 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.512567 kubelet[2771]: W0423 23:56:59.512555 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.512663 kubelet[2771]: E0423 23:56:59.512573 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.513125 kubelet[2771]: E0423 23:56:59.513017 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.513125 kubelet[2771]: W0423 23:56:59.513088 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.513125 kubelet[2771]: E0423 23:56:59.513106 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.513572 kubelet[2771]: E0423 23:56:59.513498 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.513572 kubelet[2771]: W0423 23:56:59.513511 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.513572 kubelet[2771]: E0423 23:56:59.513525 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.514255 kubelet[2771]: E0423 23:56:59.514217 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.514325 kubelet[2771]: W0423 23:56:59.514297 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.514325 kubelet[2771]: E0423 23:56:59.514319 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.514748 kubelet[2771]: E0423 23:56:59.514712 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.514748 kubelet[2771]: W0423 23:56:59.514732 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.514748 kubelet[2771]: E0423 23:56:59.514747 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.515247 kubelet[2771]: E0423 23:56:59.515214 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.515247 kubelet[2771]: W0423 23:56:59.515234 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.515365 kubelet[2771]: E0423 23:56:59.515249 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.515657 kubelet[2771]: E0423 23:56:59.515627 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.515657 kubelet[2771]: W0423 23:56:59.515646 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.515749 kubelet[2771]: E0423 23:56:59.515660 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.526067 kubelet[2771]: E0423 23:56:59.525985 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.526067 kubelet[2771]: W0423 23:56:59.526048 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.526067 kubelet[2771]: E0423 23:56:59.526071 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.526796 kubelet[2771]: E0423 23:56:59.526458 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.526796 kubelet[2771]: W0423 23:56:59.526478 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.526796 kubelet[2771]: E0423 23:56:59.526493 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.527243 kubelet[2771]: E0423 23:56:59.527207 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.527243 kubelet[2771]: W0423 23:56:59.527233 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.527394 kubelet[2771]: E0423 23:56:59.527247 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.527696 kubelet[2771]: E0423 23:56:59.527649 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.527696 kubelet[2771]: W0423 23:56:59.527673 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.527696 kubelet[2771]: E0423 23:56:59.527688 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.528189 kubelet[2771]: E0423 23:56:59.528157 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.528189 kubelet[2771]: W0423 23:56:59.528180 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.528333 kubelet[2771]: E0423 23:56:59.528197 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.528552 kubelet[2771]: E0423 23:56:59.528522 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.528552 kubelet[2771]: W0423 23:56:59.528543 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.528645 kubelet[2771]: E0423 23:56:59.528556 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.529247 kubelet[2771]: E0423 23:56:59.529134 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.529247 kubelet[2771]: W0423 23:56:59.529153 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.529247 kubelet[2771]: E0423 23:56:59.529169 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.530401 kubelet[2771]: E0423 23:56:59.530372 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.530563 kubelet[2771]: W0423 23:56:59.530512 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.530563 kubelet[2771]: E0423 23:56:59.530553 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.531179 kubelet[2771]: E0423 23:56:59.531142 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.531179 kubelet[2771]: W0423 23:56:59.531165 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.531179 kubelet[2771]: E0423 23:56:59.531180 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.531854 kubelet[2771]: E0423 23:56:59.531800 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.531854 kubelet[2771]: W0423 23:56:59.531829 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.531854 kubelet[2771]: E0423 23:56:59.531848 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.532466 kubelet[2771]: E0423 23:56:59.532429 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.532466 kubelet[2771]: W0423 23:56:59.532454 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.532580 kubelet[2771]: E0423 23:56:59.532470 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.533003 kubelet[2771]: E0423 23:56:59.532968 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.533003 kubelet[2771]: W0423 23:56:59.532990 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.533097 kubelet[2771]: E0423 23:56:59.533005 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.533544 kubelet[2771]: E0423 23:56:59.533502 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.533602 kubelet[2771]: W0423 23:56:59.533553 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.533602 kubelet[2771]: E0423 23:56:59.533569 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.534093 kubelet[2771]: E0423 23:56:59.534057 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.534093 kubelet[2771]: W0423 23:56:59.534079 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.534196 kubelet[2771]: E0423 23:56:59.534095 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.534572 kubelet[2771]: E0423 23:56:59.534514 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.534572 kubelet[2771]: W0423 23:56:59.534533 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.534572 kubelet[2771]: E0423 23:56:59.534546 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.535763 kubelet[2771]: E0423 23:56:59.535451 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.535763 kubelet[2771]: W0423 23:56:59.535480 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.535763 kubelet[2771]: E0423 23:56:59.535509 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:56:59.536419 kubelet[2771]: E0423 23:56:59.536350 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.536419 kubelet[2771]: W0423 23:56:59.536376 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.536419 kubelet[2771]: E0423 23:56:59.536392 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:56:59.536821 kubelet[2771]: E0423 23:56:59.536796 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:56:59.536821 kubelet[2771]: W0423 23:56:59.536809 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:56:59.536821 kubelet[2771]: E0423 23:56:59.536823 2771 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:57:00.135391 containerd[1624]: time="2026-04-23T23:57:00.135302195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:00.136374 containerd[1624]: time="2026-04-23T23:57:00.136312233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4601981" Apr 23 23:57:00.137370 containerd[1624]: time="2026-04-23T23:57:00.137050957Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:00.138530 containerd[1624]: time="2026-04-23T23:57:00.138499310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:00.139206 containerd[1624]: time="2026-04-23T23:57:00.138918209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 1.619448379s" Apr 23 23:57:00.139206 containerd[1624]: time="2026-04-23T23:57:00.138943128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 23 23:57:00.142564 containerd[1624]: time="2026-04-23T23:57:00.142539644Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 23 23:57:00.151821 containerd[1624]: time="2026-04-23T23:57:00.151185720Z" level=info msg="Container e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:00.169537 containerd[1624]: time="2026-04-23T23:57:00.169478048Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11\"" Apr 23 23:57:00.170033 containerd[1624]: time="2026-04-23T23:57:00.169983053Z" level=info msg="StartContainer for \"e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11\"" Apr 23 23:57:00.170965 containerd[1624]: time="2026-04-23T23:57:00.170944661Z" level=info msg="connecting to shim e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11" address="unix:///run/containerd/s/f24e6b433e7e7605d2eaa60d4b32d703dfac644a4b87286525d29d96e271f27e" protocol=ttrpc version=3 Apr 23 23:57:00.187997 systemd[1]: Started cri-containerd-e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11.scope - libcontainer container 
e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11. Apr 23 23:57:00.237714 containerd[1624]: time="2026-04-23T23:57:00.237637184Z" level=info msg="StartContainer for \"e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11\" returns successfully" Apr 23 23:57:00.248349 systemd[1]: cri-containerd-e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11.scope: Deactivated successfully. Apr 23 23:57:00.251311 containerd[1624]: time="2026-04-23T23:57:00.251279357Z" level=info msg="received container exit event container_id:\"e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11\" id:\"e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11\" pid:3405 exited_at:{seconds:1776988620 nanos:251017982}" Apr 23 23:57:00.269105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e22fb1197c4fec4056a7fcf9d1a72542981bd0b6e75aeb6c8f001593a51b9b11-rootfs.mount: Deactivated successfully. Apr 23 23:57:00.391800 kubelet[2771]: E0423 23:57:00.391378 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:00.483415 kubelet[2771]: I0423 23:57:00.483374 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:57:00.489572 containerd[1624]: time="2026-04-23T23:57:00.489511877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 23 23:57:00.505713 kubelet[2771]: I0423 23:57:00.505633 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75d4d66676-nlk25" podStartSLOduration=3.390301226 podStartE2EDuration="5.505619392s" podCreationTimestamp="2026-04-23 23:56:55 +0000 UTC" firstStartedPulling="2026-04-23 23:56:56.403760369 +0000 UTC m=+16.103340362" 
lastFinishedPulling="2026-04-23 23:56:58.519078535 +0000 UTC m=+18.218658528" observedRunningTime="2026-04-23 23:56:59.501691148 +0000 UTC m=+19.201271172" watchObservedRunningTime="2026-04-23 23:57:00.505619392 +0000 UTC m=+20.205199386" Apr 23 23:57:02.034954 kubelet[2771]: I0423 23:57:02.034451 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:57:02.392253 kubelet[2771]: E0423 23:57:02.391516 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:03.644085 update_engine[1597]: I20260423 23:57:03.643999 1597 update_attempter.cc:509] Updating boot flags... Apr 23 23:57:04.391780 kubelet[2771]: E0423 23:57:04.391675 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:06.392184 kubelet[2771]: E0423 23:57:06.392103 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:07.999248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95395958.mount: Deactivated successfully. 
Apr 23 23:57:08.030139 containerd[1624]: time="2026-04-23T23:57:08.030088899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:08.030965 containerd[1624]: time="2026-04-23T23:57:08.030950911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404" Apr 23 23:57:08.031624 containerd[1624]: time="2026-04-23T23:57:08.031607557Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:08.033268 containerd[1624]: time="2026-04-23T23:57:08.033239659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:08.033604 containerd[1624]: time="2026-04-23T23:57:08.033510940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 7.54380468s" Apr 23 23:57:08.033604 containerd[1624]: time="2026-04-23T23:57:08.033534448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\"" Apr 23 23:57:08.037776 containerd[1624]: time="2026-04-23T23:57:08.037639670Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 23 23:57:08.048502 containerd[1624]: time="2026-04-23T23:57:08.047530404Z" level=info msg="Container 
2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:08.051499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283503462.mount: Deactivated successfully. Apr 23 23:57:08.058516 containerd[1624]: time="2026-04-23T23:57:08.056933231Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0\"" Apr 23 23:57:08.058516 containerd[1624]: time="2026-04-23T23:57:08.057428754Z" level=info msg="StartContainer for \"2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0\"" Apr 23 23:57:08.058837 containerd[1624]: time="2026-04-23T23:57:08.058795598Z" level=info msg="connecting to shim 2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0" address="unix:///run/containerd/s/f24e6b433e7e7605d2eaa60d4b32d703dfac644a4b87286525d29d96e271f27e" protocol=ttrpc version=3 Apr 23 23:57:08.076986 systemd[1]: Started cri-containerd-2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0.scope - libcontainer container 2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0. Apr 23 23:57:08.135785 containerd[1624]: time="2026-04-23T23:57:08.135747602Z" level=info msg="StartContainer for \"2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0\" returns successfully" Apr 23 23:57:08.170033 systemd[1]: cri-containerd-2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0.scope: Deactivated successfully. 
Apr 23 23:57:08.171154 containerd[1624]: time="2026-04-23T23:57:08.171125443Z" level=info msg="received container exit event container_id:\"2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0\" id:\"2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0\" pid:3482 exited_at:{seconds:1776988628 nanos:170852499}" Apr 23 23:57:08.392836 kubelet[2771]: E0423 23:57:08.392211 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:08.520314 containerd[1624]: time="2026-04-23T23:57:08.519781842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 23 23:57:09.003157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6316e9400659302b46abc5932a9a5970187c7a738ac86e17677b29b84271b0-rootfs.mount: Deactivated successfully. 
Apr 23 23:57:10.394671 kubelet[2771]: E0423 23:57:10.393798 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:11.187440 containerd[1624]: time="2026-04-23T23:57:11.187394769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:11.188328 containerd[1624]: time="2026-04-23T23:57:11.188302536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351" Apr 23 23:57:11.189198 containerd[1624]: time="2026-04-23T23:57:11.189172662Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:11.190676 containerd[1624]: time="2026-04-23T23:57:11.190644570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:11.191320 containerd[1624]: time="2026-04-23T23:57:11.191020712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 2.671153527s" Apr 23 23:57:11.191320 containerd[1624]: time="2026-04-23T23:57:11.191040760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference 
\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\"" Apr 23 23:57:11.193969 containerd[1624]: time="2026-04-23T23:57:11.193938894Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 23 23:57:11.204922 containerd[1624]: time="2026-04-23T23:57:11.203731548Z" level=info msg="Container a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:11.206526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410652278.mount: Deactivated successfully. Apr 23 23:57:11.218343 containerd[1624]: time="2026-04-23T23:57:11.218310542Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83\"" Apr 23 23:57:11.219794 containerd[1624]: time="2026-04-23T23:57:11.218870267Z" level=info msg="StartContainer for \"a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83\"" Apr 23 23:57:11.220223 containerd[1624]: time="2026-04-23T23:57:11.220202733Z" level=info msg="connecting to shim a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83" address="unix:///run/containerd/s/f24e6b433e7e7605d2eaa60d4b32d703dfac644a4b87286525d29d96e271f27e" protocol=ttrpc version=3 Apr 23 23:57:11.238982 systemd[1]: Started cri-containerd-a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83.scope - libcontainer container a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83. 
Apr 23 23:57:11.302545 containerd[1624]: time="2026-04-23T23:57:11.302502090Z" level=info msg="StartContainer for \"a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83\" returns successfully" Apr 23 23:57:11.703152 containerd[1624]: time="2026-04-23T23:57:11.702904637Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 23 23:57:11.705817 systemd[1]: cri-containerd-a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83.scope: Deactivated successfully. Apr 23 23:57:11.706315 systemd[1]: cri-containerd-a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83.scope: Consumed 366ms CPU time, 183M memory peak, 1M read from disk, 173.7M written to disk. Apr 23 23:57:11.708379 containerd[1624]: time="2026-04-23T23:57:11.708346896Z" level=info msg="received container exit event container_id:\"a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83\" id:\"a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83\" pid:3540 exited_at:{seconds:1776988631 nanos:708146238}" Apr 23 23:57:11.738915 kubelet[2771]: I0423 23:57:11.738504 2771 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 23 23:57:11.750146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a08678cc068ae2f53a7e336ebe434ffa6d4d50c645374aa63e2ef9a69062ed83-rootfs.mount: Deactivated successfully. Apr 23 23:57:11.794969 systemd[1]: Created slice kubepods-besteffort-pod71726c76_cd2d_413c_909a_161577715a4a.slice - libcontainer container kubepods-besteffort-pod71726c76_cd2d_413c_909a_161577715a4a.slice. Apr 23 23:57:11.803441 systemd[1]: Created slice kubepods-besteffort-pode650d354_ca28_40b7_8761_4ac249d5c96d.slice - libcontainer container kubepods-besteffort-pode650d354_ca28_40b7_8761_4ac249d5c96d.slice. 
Apr 23 23:57:11.809713 systemd[1]: Created slice kubepods-besteffort-poda66e730c_9aaf_4f39_b236_39ac8befa8f6.slice - libcontainer container kubepods-besteffort-poda66e730c_9aaf_4f39_b236_39ac8befa8f6.slice. Apr 23 23:57:11.817133 kubelet[2771]: I0423 23:57:11.817034 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34c340b2-ffd3-4daf-b4fb-7bf28657b5c1-calico-apiserver-certs\") pod \"calico-apiserver-66bff7cfcc-6q8w2\" (UID: \"34c340b2-ffd3-4daf-b4fb-7bf28657b5c1\") " pod="calico-system/calico-apiserver-66bff7cfcc-6q8w2" Apr 23 23:57:11.817133 kubelet[2771]: I0423 23:57:11.817066 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e650d354-ca28-40b7-8761-4ac249d5c96d-tigera-ca-bundle\") pod \"calico-kube-controllers-6546f8485-tbb65\" (UID: \"e650d354-ca28-40b7-8761-4ac249d5c96d\") " pod="calico-system/calico-kube-controllers-6546f8485-tbb65" Apr 23 23:57:11.817133 kubelet[2771]: I0423 23:57:11.817077 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnv6\" (UniqueName: \"kubernetes.io/projected/e650d354-ca28-40b7-8761-4ac249d5c96d-kube-api-access-gnnv6\") pod \"calico-kube-controllers-6546f8485-tbb65\" (UID: \"e650d354-ca28-40b7-8761-4ac249d5c96d\") " pod="calico-system/calico-kube-controllers-6546f8485-tbb65" Apr 23 23:57:11.817133 kubelet[2771]: I0423 23:57:11.817090 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a661017b-2fbc-42b5-a3f9-0f24590987d7-goldmane-ca-bundle\") pod \"goldmane-6b4b7f4496-d5q98\" (UID: \"a661017b-2fbc-42b5-a3f9-0f24590987d7\") " pod="calico-system/goldmane-6b4b7f4496-d5q98" Apr 23 23:57:11.817133 kubelet[2771]: I0423 23:57:11.817101 2771 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drddc\" (UniqueName: \"kubernetes.io/projected/a661017b-2fbc-42b5-a3f9-0f24590987d7-kube-api-access-drddc\") pod \"goldmane-6b4b7f4496-d5q98\" (UID: \"a661017b-2fbc-42b5-a3f9-0f24590987d7\") " pod="calico-system/goldmane-6b4b7f4496-d5q98" Apr 23 23:57:11.817396 kubelet[2771]: I0423 23:57:11.817114 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-nginx-config\") pod \"whisker-995b49b44-92ll8\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " pod="calico-system/whisker-995b49b44-92ll8" Apr 23 23:57:11.817396 kubelet[2771]: I0423 23:57:11.817141 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71726c76-cd2d-413c-909a-161577715a4a-whisker-backend-key-pair\") pod \"whisker-995b49b44-92ll8\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " pod="calico-system/whisker-995b49b44-92ll8" Apr 23 23:57:11.817396 kubelet[2771]: I0423 23:57:11.817151 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a66e730c-9aaf-4f39-b236-39ac8befa8f6-calico-apiserver-certs\") pod \"calico-apiserver-66bff7cfcc-pt9gl\" (UID: \"a66e730c-9aaf-4f39-b236-39ac8befa8f6\") " pod="calico-system/calico-apiserver-66bff7cfcc-pt9gl" Apr 23 23:57:11.817396 kubelet[2771]: I0423 23:57:11.817163 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-whisker-ca-bundle\") pod \"whisker-995b49b44-92ll8\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " pod="calico-system/whisker-995b49b44-92ll8" 
Apr 23 23:57:11.817396 kubelet[2771]: I0423 23:57:11.817174 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k9d4\" (UniqueName: \"kubernetes.io/projected/34c340b2-ffd3-4daf-b4fb-7bf28657b5c1-kube-api-access-2k9d4\") pod \"calico-apiserver-66bff7cfcc-6q8w2\" (UID: \"34c340b2-ffd3-4daf-b4fb-7bf28657b5c1\") " pod="calico-system/calico-apiserver-66bff7cfcc-6q8w2" Apr 23 23:57:11.817521 kubelet[2771]: I0423 23:57:11.817184 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db96913-915b-48c7-b627-ed66e3472900-config-volume\") pod \"coredns-66bc5c9577-67d6p\" (UID: \"8db96913-915b-48c7-b627-ed66e3472900\") " pod="kube-system/coredns-66bc5c9577-67d6p" Apr 23 23:57:11.817521 kubelet[2771]: I0423 23:57:11.817196 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9cr7\" (UniqueName: \"kubernetes.io/projected/71726c76-cd2d-413c-909a-161577715a4a-kube-api-access-l9cr7\") pod \"whisker-995b49b44-92ll8\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " pod="calico-system/whisker-995b49b44-92ll8" Apr 23 23:57:11.817521 kubelet[2771]: I0423 23:57:11.817210 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrt6\" (UniqueName: \"kubernetes.io/projected/8db96913-915b-48c7-b627-ed66e3472900-kube-api-access-jqrt6\") pod \"coredns-66bc5c9577-67d6p\" (UID: \"8db96913-915b-48c7-b627-ed66e3472900\") " pod="kube-system/coredns-66bc5c9577-67d6p" Apr 23 23:57:11.817521 kubelet[2771]: I0423 23:57:11.817224 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a661017b-2fbc-42b5-a3f9-0f24590987d7-goldmane-key-pair\") pod \"goldmane-6b4b7f4496-d5q98\" (UID: 
\"a661017b-2fbc-42b5-a3f9-0f24590987d7\") " pod="calico-system/goldmane-6b4b7f4496-d5q98" Apr 23 23:57:11.817521 kubelet[2771]: I0423 23:57:11.817235 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwv7t\" (UniqueName: \"kubernetes.io/projected/a66e730c-9aaf-4f39-b236-39ac8befa8f6-kube-api-access-nwv7t\") pod \"calico-apiserver-66bff7cfcc-pt9gl\" (UID: \"a66e730c-9aaf-4f39-b236-39ac8befa8f6\") " pod="calico-system/calico-apiserver-66bff7cfcc-pt9gl" Apr 23 23:57:11.817649 kubelet[2771]: I0423 23:57:11.817245 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpk5r\" (UniqueName: \"kubernetes.io/projected/d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e-kube-api-access-xpk5r\") pod \"coredns-66bc5c9577-ljdsw\" (UID: \"d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e\") " pod="kube-system/coredns-66bc5c9577-ljdsw" Apr 23 23:57:11.817649 kubelet[2771]: I0423 23:57:11.817273 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a661017b-2fbc-42b5-a3f9-0f24590987d7-config\") pod \"goldmane-6b4b7f4496-d5q98\" (UID: \"a661017b-2fbc-42b5-a3f9-0f24590987d7\") " pod="calico-system/goldmane-6b4b7f4496-d5q98" Apr 23 23:57:11.817649 kubelet[2771]: I0423 23:57:11.817283 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e-config-volume\") pod \"coredns-66bc5c9577-ljdsw\" (UID: \"d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e\") " pod="kube-system/coredns-66bc5c9577-ljdsw" Apr 23 23:57:11.818640 systemd[1]: Created slice kubepods-burstable-podd7b6dbcf_3ca9_4ad6_909c_215d1ee1581e.slice - libcontainer container kubepods-burstable-podd7b6dbcf_3ca9_4ad6_909c_215d1ee1581e.slice. 
Apr 23 23:57:11.824837 systemd[1]: Created slice kubepods-besteffort-poda661017b_2fbc_42b5_a3f9_0f24590987d7.slice - libcontainer container kubepods-besteffort-poda661017b_2fbc_42b5_a3f9_0f24590987d7.slice. Apr 23 23:57:11.830364 systemd[1]: Created slice kubepods-besteffort-pod34c340b2_ffd3_4daf_b4fb_7bf28657b5c1.slice - libcontainer container kubepods-besteffort-pod34c340b2_ffd3_4daf_b4fb_7bf28657b5c1.slice. Apr 23 23:57:11.836004 systemd[1]: Created slice kubepods-burstable-pod8db96913_915b_48c7_b627_ed66e3472900.slice - libcontainer container kubepods-burstable-pod8db96913_915b_48c7_b627_ed66e3472900.slice. Apr 23 23:57:12.104320 containerd[1624]: time="2026-04-23T23:57:12.104229169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-995b49b44-92ll8,Uid:71726c76-cd2d-413c-909a-161577715a4a,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:12.110126 containerd[1624]: time="2026-04-23T23:57:12.109827927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6546f8485-tbb65,Uid:e650d354-ca28-40b7-8761-4ac249d5c96d,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:12.115309 containerd[1624]: time="2026-04-23T23:57:12.115189977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-pt9gl,Uid:a66e730c-9aaf-4f39-b236-39ac8befa8f6,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:12.128496 containerd[1624]: time="2026-04-23T23:57:12.128254294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ljdsw,Uid:d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e,Namespace:kube-system,Attempt:0,}" Apr 23 23:57:12.131474 containerd[1624]: time="2026-04-23T23:57:12.131376800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-d5q98,Uid:a661017b-2fbc-42b5-a3f9-0f24590987d7,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:12.139613 containerd[1624]: time="2026-04-23T23:57:12.139572756Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-6q8w2,Uid:34c340b2-ffd3-4daf-b4fb-7bf28657b5c1,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:12.141393 containerd[1624]: time="2026-04-23T23:57:12.141320134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-67d6p,Uid:8db96913-915b-48c7-b627-ed66e3472900,Namespace:kube-system,Attempt:0,}" Apr 23 23:57:12.277530 containerd[1624]: time="2026-04-23T23:57:12.277488146Z" level=error msg="Failed to destroy network for sandbox \"838b2dd712a0c7d7d3a8edfadf1c32dec9995f4c8ceb440747d3d91588e999f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.281329 systemd[1]: run-netns-cni\x2d8f3e346f\x2d5840\x2d15e0\x2d93da\x2d0f918a7d2308.mount: Deactivated successfully. Apr 23 23:57:12.286127 containerd[1624]: time="2026-04-23T23:57:12.286078300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ljdsw,Uid:d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"838b2dd712a0c7d7d3a8edfadf1c32dec9995f4c8ceb440747d3d91588e999f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.286399 kubelet[2771]: E0423 23:57:12.286368 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838b2dd712a0c7d7d3a8edfadf1c32dec9995f4c8ceb440747d3d91588e999f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.287422 kubelet[2771]: E0423 23:57:12.286526 2771 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838b2dd712a0c7d7d3a8edfadf1c32dec9995f4c8ceb440747d3d91588e999f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ljdsw" Apr 23 23:57:12.287422 kubelet[2771]: E0423 23:57:12.286545 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"838b2dd712a0c7d7d3a8edfadf1c32dec9995f4c8ceb440747d3d91588e999f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ljdsw" Apr 23 23:57:12.288051 kubelet[2771]: E0423 23:57:12.288014 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ljdsw_kube-system(d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ljdsw_kube-system(d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"838b2dd712a0c7d7d3a8edfadf1c32dec9995f4c8ceb440747d3d91588e999f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ljdsw" podUID="d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e" Apr 23 23:57:12.324191 containerd[1624]: time="2026-04-23T23:57:12.324147590Z" level=error msg="Failed to destroy network for sandbox \"33a7a9da422e274cd023adce8f22a5efad762da4cd31b3830f000c4b28063774\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.327085 systemd[1]: run-netns-cni\x2dfe6fb464\x2d3172\x2d84fc\x2deff4\x2d2afbe0451698.mount: Deactivated successfully. Apr 23 23:57:12.330844 containerd[1624]: time="2026-04-23T23:57:12.330607531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6546f8485-tbb65,Uid:e650d354-ca28-40b7-8761-4ac249d5c96d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33a7a9da422e274cd023adce8f22a5efad762da4cd31b3830f000c4b28063774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.331094 kubelet[2771]: E0423 23:57:12.331017 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33a7a9da422e274cd023adce8f22a5efad762da4cd31b3830f000c4b28063774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.331094 kubelet[2771]: E0423 23:57:12.331077 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33a7a9da422e274cd023adce8f22a5efad762da4cd31b3830f000c4b28063774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6546f8485-tbb65" Apr 23 23:57:12.331313 kubelet[2771]: E0423 23:57:12.331096 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"33a7a9da422e274cd023adce8f22a5efad762da4cd31b3830f000c4b28063774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6546f8485-tbb65" Apr 23 23:57:12.331313 kubelet[2771]: E0423 23:57:12.331147 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6546f8485-tbb65_calico-system(e650d354-ca28-40b7-8761-4ac249d5c96d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6546f8485-tbb65_calico-system(e650d354-ca28-40b7-8761-4ac249d5c96d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33a7a9da422e274cd023adce8f22a5efad762da4cd31b3830f000c4b28063774\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6546f8485-tbb65" podUID="e650d354-ca28-40b7-8761-4ac249d5c96d" Apr 23 23:57:12.339696 containerd[1624]: time="2026-04-23T23:57:12.337808881Z" level=error msg="Failed to destroy network for sandbox \"4172f8f51c5596d4eb566a6b1ced7d0c894560869aa8d9d4803ce44873cf229e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.340505 systemd[1]: run-netns-cni\x2d6cbbedda\x2da5d0\x2dbe27\x2d02b4\x2d656888737f89.mount: Deactivated successfully. 
Apr 23 23:57:12.341275 containerd[1624]: time="2026-04-23T23:57:12.341174841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-pt9gl,Uid:a66e730c-9aaf-4f39-b236-39ac8befa8f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4172f8f51c5596d4eb566a6b1ced7d0c894560869aa8d9d4803ce44873cf229e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.341906 kubelet[2771]: E0423 23:57:12.341853 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4172f8f51c5596d4eb566a6b1ced7d0c894560869aa8d9d4803ce44873cf229e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.341958 kubelet[2771]: E0423 23:57:12.341920 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4172f8f51c5596d4eb566a6b1ced7d0c894560869aa8d9d4803ce44873cf229e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66bff7cfcc-pt9gl" Apr 23 23:57:12.341958 kubelet[2771]: E0423 23:57:12.341935 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4172f8f51c5596d4eb566a6b1ced7d0c894560869aa8d9d4803ce44873cf229e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-66bff7cfcc-pt9gl" Apr 23 23:57:12.342004 kubelet[2771]: E0423 23:57:12.341973 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66bff7cfcc-pt9gl_calico-system(a66e730c-9aaf-4f39-b236-39ac8befa8f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66bff7cfcc-pt9gl_calico-system(a66e730c-9aaf-4f39-b236-39ac8befa8f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4172f8f51c5596d4eb566a6b1ced7d0c894560869aa8d9d4803ce44873cf229e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66bff7cfcc-pt9gl" podUID="a66e730c-9aaf-4f39-b236-39ac8befa8f6" Apr 23 23:57:12.351759 containerd[1624]: time="2026-04-23T23:57:12.351462523Z" level=error msg="Failed to destroy network for sandbox \"0c06d09c86a2f3fa4d7ec8df176e615f31919d0f33b28352a2fe759ed8eb5880\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.354043 systemd[1]: run-netns-cni\x2df8c10b33\x2df647\x2dbba2\x2d8e04\x2de2467ea4bc1e.mount: Deactivated successfully. 
Apr 23 23:57:12.356863 containerd[1624]: time="2026-04-23T23:57:12.356827786Z" level=error msg="Failed to destroy network for sandbox \"cc96b29c7e04ac09303a2bc5964d410294ac4c8315c298ef0f18e1df67b0fa59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.357097 containerd[1624]: time="2026-04-23T23:57:12.356877347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-67d6p,Uid:8db96913-915b-48c7-b627-ed66e3472900,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c06d09c86a2f3fa4d7ec8df176e615f31919d0f33b28352a2fe759ed8eb5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.357268 kubelet[2771]: E0423 23:57:12.357173 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c06d09c86a2f3fa4d7ec8df176e615f31919d0f33b28352a2fe759ed8eb5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.357268 kubelet[2771]: E0423 23:57:12.357223 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c06d09c86a2f3fa4d7ec8df176e615f31919d0f33b28352a2fe759ed8eb5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-67d6p" Apr 23 23:57:12.357268 kubelet[2771]: E0423 23:57:12.357238 2771 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c06d09c86a2f3fa4d7ec8df176e615f31919d0f33b28352a2fe759ed8eb5880\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-67d6p" Apr 23 23:57:12.357364 kubelet[2771]: E0423 23:57:12.357277 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-67d6p_kube-system(8db96913-915b-48c7-b627-ed66e3472900)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-67d6p_kube-system(8db96913-915b-48c7-b627-ed66e3472900)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c06d09c86a2f3fa4d7ec8df176e615f31919d0f33b28352a2fe759ed8eb5880\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-67d6p" podUID="8db96913-915b-48c7-b627-ed66e3472900" Apr 23 23:57:12.358521 containerd[1624]: time="2026-04-23T23:57:12.358441083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-d5q98,Uid:a661017b-2fbc-42b5-a3f9-0f24590987d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc96b29c7e04ac09303a2bc5964d410294ac4c8315c298ef0f18e1df67b0fa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.358726 kubelet[2771]: E0423 23:57:12.358698 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cc96b29c7e04ac09303a2bc5964d410294ac4c8315c298ef0f18e1df67b0fa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.358954 kubelet[2771]: E0423 23:57:12.358733 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc96b29c7e04ac09303a2bc5964d410294ac4c8315c298ef0f18e1df67b0fa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-d5q98" Apr 23 23:57:12.358954 kubelet[2771]: E0423 23:57:12.358930 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc96b29c7e04ac09303a2bc5964d410294ac4c8315c298ef0f18e1df67b0fa59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-d5q98" Apr 23 23:57:12.358954 kubelet[2771]: E0423 23:57:12.359071 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-6b4b7f4496-d5q98_calico-system(a661017b-2fbc-42b5-a3f9-0f24590987d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-6b4b7f4496-d5q98_calico-system(a661017b-2fbc-42b5-a3f9-0f24590987d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc96b29c7e04ac09303a2bc5964d410294ac4c8315c298ef0f18e1df67b0fa59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-6b4b7f4496-d5q98" 
podUID="a661017b-2fbc-42b5-a3f9-0f24590987d7" Apr 23 23:57:12.360367 containerd[1624]: time="2026-04-23T23:57:12.360340005Z" level=error msg="Failed to destroy network for sandbox \"bbcf395e87fa4f4457c3add45b9905d16492945ef320a3d7a363b7fd4dbbd460\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.361631 containerd[1624]: time="2026-04-23T23:57:12.361581599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-6q8w2,Uid:34c340b2-ffd3-4daf-b4fb-7bf28657b5c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcf395e87fa4f4457c3add45b9905d16492945ef320a3d7a363b7fd4dbbd460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.361740 kubelet[2771]: E0423 23:57:12.361718 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcf395e87fa4f4457c3add45b9905d16492945ef320a3d7a363b7fd4dbbd460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.361780 kubelet[2771]: E0423 23:57:12.361764 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcf395e87fa4f4457c3add45b9905d16492945ef320a3d7a363b7fd4dbbd460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66bff7cfcc-6q8w2" Apr 23 23:57:12.361799 
kubelet[2771]: E0423 23:57:12.361778 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcf395e87fa4f4457c3add45b9905d16492945ef320a3d7a363b7fd4dbbd460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-66bff7cfcc-6q8w2" Apr 23 23:57:12.361845 kubelet[2771]: E0423 23:57:12.361829 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66bff7cfcc-6q8w2_calico-system(34c340b2-ffd3-4daf-b4fb-7bf28657b5c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66bff7cfcc-6q8w2_calico-system(34c340b2-ffd3-4daf-b4fb-7bf28657b5c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbcf395e87fa4f4457c3add45b9905d16492945ef320a3d7a363b7fd4dbbd460\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-66bff7cfcc-6q8w2" podUID="34c340b2-ffd3-4daf-b4fb-7bf28657b5c1" Apr 23 23:57:12.362726 containerd[1624]: time="2026-04-23T23:57:12.362709347Z" level=error msg="Failed to destroy network for sandbox \"d06523c40a4d0363517c6844e467bfb513053b2fb8ec6b70f3ae703893b206f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.364323 containerd[1624]: time="2026-04-23T23:57:12.364303400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-995b49b44-92ll8,Uid:71726c76-cd2d-413c-909a-161577715a4a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"d06523c40a4d0363517c6844e467bfb513053b2fb8ec6b70f3ae703893b206f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.364607 kubelet[2771]: E0423 23:57:12.364585 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06523c40a4d0363517c6844e467bfb513053b2fb8ec6b70f3ae703893b206f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.364644 kubelet[2771]: E0423 23:57:12.364615 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06523c40a4d0363517c6844e467bfb513053b2fb8ec6b70f3ae703893b206f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-995b49b44-92ll8" Apr 23 23:57:12.364644 kubelet[2771]: E0423 23:57:12.364628 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06523c40a4d0363517c6844e467bfb513053b2fb8ec6b70f3ae703893b206f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-995b49b44-92ll8" Apr 23 23:57:12.364692 kubelet[2771]: E0423 23:57:12.364669 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-995b49b44-92ll8_calico-system(71726c76-cd2d-413c-909a-161577715a4a)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"whisker-995b49b44-92ll8_calico-system(71726c76-cd2d-413c-909a-161577715a4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d06523c40a4d0363517c6844e467bfb513053b2fb8ec6b70f3ae703893b206f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-995b49b44-92ll8" podUID="71726c76-cd2d-413c-909a-161577715a4a" Apr 23 23:57:12.398380 systemd[1]: Created slice kubepods-besteffort-pod3315e263_4733_4c99_bf02_e78b8d559ae1.slice - libcontainer container kubepods-besteffort-pod3315e263_4733_4c99_bf02_e78b8d559ae1.slice. Apr 23 23:57:12.401952 containerd[1624]: time="2026-04-23T23:57:12.401918038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zj6q,Uid:3315e263-4733-4c99-bf02-e78b8d559ae1,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:12.450227 containerd[1624]: time="2026-04-23T23:57:12.450171304Z" level=error msg="Failed to destroy network for sandbox \"4d0e240540d931d88c1f3ba325a157000a3fbea04e4e310abfea08e2e990b3ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.451472 containerd[1624]: time="2026-04-23T23:57:12.451422982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zj6q,Uid:3315e263-4733-4c99-bf02-e78b8d559ae1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0e240540d931d88c1f3ba325a157000a3fbea04e4e310abfea08e2e990b3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.451762 kubelet[2771]: E0423 23:57:12.451722 2771 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0e240540d931d88c1f3ba325a157000a3fbea04e4e310abfea08e2e990b3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:57:12.451839 kubelet[2771]: E0423 23:57:12.451776 2771 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0e240540d931d88c1f3ba325a157000a3fbea04e4e310abfea08e2e990b3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zj6q" Apr 23 23:57:12.451839 kubelet[2771]: E0423 23:57:12.451801 2771 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0e240540d931d88c1f3ba325a157000a3fbea04e4e310abfea08e2e990b3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zj6q" Apr 23 23:57:12.451918 kubelet[2771]: E0423 23:57:12.451848 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zj6q_calico-system(3315e263-4733-4c99-bf02-e78b8d559ae1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zj6q_calico-system(3315e263-4733-4c99-bf02-e78b8d559ae1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d0e240540d931d88c1f3ba325a157000a3fbea04e4e310abfea08e2e990b3ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zj6q" podUID="3315e263-4733-4c99-bf02-e78b8d559ae1" Apr 23 23:57:12.561500 containerd[1624]: time="2026-04-23T23:57:12.561458184Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 23 23:57:12.568822 containerd[1624]: time="2026-04-23T23:57:12.568779400Z" level=info msg="Container e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:12.575035 containerd[1624]: time="2026-04-23T23:57:12.574999880Z" level=info msg="CreateContainer within sandbox \"8c002395c8ea1f503b0d25529f456d63818a8a7052272749f56dbf5201144d74\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86\"" Apr 23 23:57:12.575492 containerd[1624]: time="2026-04-23T23:57:12.575451858Z" level=info msg="StartContainer for \"e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86\"" Apr 23 23:57:12.576903 containerd[1624]: time="2026-04-23T23:57:12.576863459Z" level=info msg="connecting to shim e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86" address="unix:///run/containerd/s/f24e6b433e7e7605d2eaa60d4b32d703dfac644a4b87286525d29d96e271f27e" protocol=ttrpc version=3 Apr 23 23:57:12.595015 systemd[1]: Started cri-containerd-e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86.scope - libcontainer container e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86. 
Apr 23 23:57:12.656275 containerd[1624]: time="2026-04-23T23:57:12.656168768Z" level=info msg="StartContainer for \"e743aa37b9418511f60db7e05e7c45b3b74515566deb72876b207083b60c2d86\" returns successfully" Apr 23 23:57:12.825956 kubelet[2771]: I0423 23:57:12.825916 2771 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71726c76-cd2d-413c-909a-161577715a4a-whisker-backend-key-pair\") pod \"71726c76-cd2d-413c-909a-161577715a4a\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " Apr 23 23:57:12.825956 kubelet[2771]: I0423 23:57:12.825966 2771 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9cr7\" (UniqueName: \"kubernetes.io/projected/71726c76-cd2d-413c-909a-161577715a4a-kube-api-access-l9cr7\") pod \"71726c76-cd2d-413c-909a-161577715a4a\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " Apr 23 23:57:12.826710 kubelet[2771]: I0423 23:57:12.825985 2771 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-nginx-config\") pod \"71726c76-cd2d-413c-909a-161577715a4a\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " Apr 23 23:57:12.826710 kubelet[2771]: I0423 23:57:12.825997 2771 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-whisker-ca-bundle\") pod \"71726c76-cd2d-413c-909a-161577715a4a\" (UID: \"71726c76-cd2d-413c-909a-161577715a4a\") " Apr 23 23:57:12.826710 kubelet[2771]: I0423 23:57:12.826420 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "71726c76-cd2d-413c-909a-161577715a4a" (UID: "71726c76-cd2d-413c-909a-161577715a4a"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:57:12.827461 kubelet[2771]: I0423 23:57:12.827433 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "71726c76-cd2d-413c-909a-161577715a4a" (UID: "71726c76-cd2d-413c-909a-161577715a4a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:57:12.832844 kubelet[2771]: I0423 23:57:12.832814 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71726c76-cd2d-413c-909a-161577715a4a-kube-api-access-l9cr7" (OuterVolumeSpecName: "kube-api-access-l9cr7") pod "71726c76-cd2d-413c-909a-161577715a4a" (UID: "71726c76-cd2d-413c-909a-161577715a4a"). InnerVolumeSpecName "kube-api-access-l9cr7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 23:57:12.833039 kubelet[2771]: I0423 23:57:12.833004 2771 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71726c76-cd2d-413c-909a-161577715a4a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "71726c76-cd2d-413c-909a-161577715a4a" (UID: "71726c76-cd2d-413c-909a-161577715a4a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 23:57:12.926800 kubelet[2771]: I0423 23:57:12.926609 2771 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9cr7\" (UniqueName: \"kubernetes.io/projected/71726c76-cd2d-413c-909a-161577715a4a-kube-api-access-l9cr7\") on node \"ci-4459-2-4-n-19674e2966\" DevicePath \"\"" Apr 23 23:57:12.926800 kubelet[2771]: I0423 23:57:12.926653 2771 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-nginx-config\") on node \"ci-4459-2-4-n-19674e2966\" DevicePath \"\"" Apr 23 23:57:12.926800 kubelet[2771]: I0423 23:57:12.926665 2771 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71726c76-cd2d-413c-909a-161577715a4a-whisker-ca-bundle\") on node \"ci-4459-2-4-n-19674e2966\" DevicePath \"\"" Apr 23 23:57:12.926800 kubelet[2771]: I0423 23:57:12.926676 2771 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71726c76-cd2d-413c-909a-161577715a4a-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-19674e2966\" DevicePath \"\"" Apr 23 23:57:13.209940 systemd[1]: run-netns-cni\x2d6a421039\x2d7c0b\x2d8609\x2dd8a6\x2d67306181e9f2.mount: Deactivated successfully. Apr 23 23:57:13.210128 systemd[1]: run-netns-cni\x2ddf6d7aec\x2d8cec\x2d7ac0\x2d8c4b\x2d0eb9f6918067.mount: Deactivated successfully. Apr 23 23:57:13.210252 systemd[1]: run-netns-cni\x2d5c2b3d44\x2da164\x2d3c4c\x2d309a\x2d99058691324f.mount: Deactivated successfully. Apr 23 23:57:13.210390 systemd[1]: var-lib-kubelet-pods-71726c76\x2dcd2d\x2d413c\x2d909a\x2d161577715a4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9cr7.mount: Deactivated successfully. 
Apr 23 23:57:13.210528 systemd[1]: var-lib-kubelet-pods-71726c76\x2dcd2d\x2d413c\x2d909a\x2d161577715a4a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 23 23:57:13.558423 systemd[1]: Removed slice kubepods-besteffort-pod71726c76_cd2d_413c_909a_161577715a4a.slice - libcontainer container kubepods-besteffort-pod71726c76_cd2d_413c_909a_161577715a4a.slice. Apr 23 23:57:13.575449 kubelet[2771]: I0423 23:57:13.574833 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k2bgz" podStartSLOduration=2.844506043 podStartE2EDuration="17.574811299s" podCreationTimestamp="2026-04-23 23:56:56 +0000 UTC" firstStartedPulling="2026-04-23 23:56:56.461208838 +0000 UTC m=+16.160788832" lastFinishedPulling="2026-04-23 23:57:11.191514105 +0000 UTC m=+30.891094088" observedRunningTime="2026-04-23 23:57:13.572368538 +0000 UTC m=+33.271948561" watchObservedRunningTime="2026-04-23 23:57:13.574811299 +0000 UTC m=+33.274391313" Apr 23 23:57:13.639319 systemd[1]: Created slice kubepods-besteffort-podc111fce3_e457_41c5_b0f1_f06c5cd4b093.slice - libcontainer container kubepods-besteffort-podc111fce3_e457_41c5_b0f1_f06c5cd4b093.slice. 
Apr 23 23:57:13.732740 kubelet[2771]: I0423 23:57:13.732606 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c111fce3-e457-41c5-b0f1-f06c5cd4b093-whisker-ca-bundle\") pod \"whisker-6986c987f8-6d97h\" (UID: \"c111fce3-e457-41c5-b0f1-f06c5cd4b093\") " pod="calico-system/whisker-6986c987f8-6d97h" Apr 23 23:57:13.732740 kubelet[2771]: I0423 23:57:13.732726 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c111fce3-e457-41c5-b0f1-f06c5cd4b093-nginx-config\") pod \"whisker-6986c987f8-6d97h\" (UID: \"c111fce3-e457-41c5-b0f1-f06c5cd4b093\") " pod="calico-system/whisker-6986c987f8-6d97h" Apr 23 23:57:13.732740 kubelet[2771]: I0423 23:57:13.732754 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbn94\" (UniqueName: \"kubernetes.io/projected/c111fce3-e457-41c5-b0f1-f06c5cd4b093-kube-api-access-tbn94\") pod \"whisker-6986c987f8-6d97h\" (UID: \"c111fce3-e457-41c5-b0f1-f06c5cd4b093\") " pod="calico-system/whisker-6986c987f8-6d97h" Apr 23 23:57:13.733125 kubelet[2771]: I0423 23:57:13.732780 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c111fce3-e457-41c5-b0f1-f06c5cd4b093-whisker-backend-key-pair\") pod \"whisker-6986c987f8-6d97h\" (UID: \"c111fce3-e457-41c5-b0f1-f06c5cd4b093\") " pod="calico-system/whisker-6986c987f8-6d97h" Apr 23 23:57:13.948872 containerd[1624]: time="2026-04-23T23:57:13.948736074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6986c987f8-6d97h,Uid:c111fce3-e457-41c5-b0f1-f06c5cd4b093,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:14.129168 systemd-networkd[1490]: cali8d534b3330b: Link UP Apr 23 23:57:14.129854 systemd-networkd[1490]: 
cali8d534b3330b: Gained carrier Apr 23 23:57:14.163760 containerd[1624]: 2026-04-23 23:57:14.004 [ERROR][3878] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:57:14.163760 containerd[1624]: 2026-04-23 23:57:14.020 [INFO][3878] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0 whisker-6986c987f8- calico-system c111fce3-e457-41c5-b0f1-f06c5cd4b093 906 0 2026-04-23 23:57:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6986c987f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 whisker-6986c987f8-6d97h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8d534b3330b [] [] }} ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-" Apr 23 23:57:14.163760 containerd[1624]: 2026-04-23 23:57:14.020 [INFO][3878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 23:57:14.163760 containerd[1624]: 2026-04-23 23:57:14.070 [INFO][3934] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" HandleID="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Workload="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 
23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.075 [INFO][3934] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" HandleID="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Workload="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000301500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"whisker-6986c987f8-6d97h", "timestamp":"2026-04-23 23:57:14.070356386 +0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000220f20)} Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.075 [INFO][3934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.075 [INFO][3934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.075 [INFO][3934] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.078 [INFO][3934] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.082 [INFO][3934] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.087 [INFO][3934] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.090 [INFO][3934] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164016 containerd[1624]: 2026-04-23 23:57:14.095 [INFO][3934] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.095 [INFO][3934] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.097 [INFO][3934] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.100 [INFO][3934] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.104 [INFO][3934] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.32.1/26] block=192.168.32.0/26 handle="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.104 [INFO][3934] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.1/26] handle="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.104 [INFO][3934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:57:14.164210 containerd[1624]: 2026-04-23 23:57:14.104 [INFO][3934] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.1/26] IPv6=[] ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" HandleID="k8s-pod-network.ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Workload="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 23:57:14.164327 containerd[1624]: 2026-04-23 23:57:14.112 [INFO][3878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0", GenerateName:"whisker-6986c987f8-", Namespace:"calico-system", SelfLink:"", UID:"c111fce3-e457-41c5-b0f1-f06c5cd4b093", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6986c987f8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"whisker-6986c987f8-6d97h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d534b3330b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:14.164327 containerd[1624]: 2026-04-23 23:57:14.112 [INFO][3878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.1/32] ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 23:57:14.164424 containerd[1624]: 2026-04-23 23:57:14.112 [INFO][3878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d534b3330b ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 23:57:14.164424 containerd[1624]: 2026-04-23 23:57:14.133 [INFO][3878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 23:57:14.164458 containerd[1624]: 2026-04-23 23:57:14.135 [INFO][3878] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0", GenerateName:"whisker-6986c987f8-", Namespace:"calico-system", SelfLink:"", UID:"c111fce3-e457-41c5-b0f1-f06c5cd4b093", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6986c987f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af", Pod:"whisker-6986c987f8-6d97h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d534b3330b", MAC:"62:cd:37:a4:50:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:14.164498 containerd[1624]: 2026-04-23 23:57:14.150 [INFO][3878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" Namespace="calico-system" Pod="whisker-6986c987f8-6d97h" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-whisker--6986c987f8--6d97h-eth0" Apr 23 23:57:14.231210 containerd[1624]: time="2026-04-23T23:57:14.231111998Z" level=info msg="connecting to shim ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af" address="unix:///run/containerd/s/b192277052370e40528ac4b48961e2c5998cb5113406f788cfd310fedf080d45" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:14.277364 systemd[1]: Started cri-containerd-ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af.scope - libcontainer container ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af. Apr 23 23:57:14.342070 containerd[1624]: time="2026-04-23T23:57:14.342012850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6986c987f8-6d97h,Uid:c111fce3-e457-41c5-b0f1-f06c5cd4b093,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af\"" Apr 23 23:57:14.343985 containerd[1624]: time="2026-04-23T23:57:14.343957566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 23 23:57:14.393808 kubelet[2771]: I0423 23:57:14.393753 2771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71726c76-cd2d-413c-909a-161577715a4a" path="/var/lib/kubelet/pods/71726c76-cd2d-413c-909a-161577715a4a/volumes" Apr 23 23:57:14.550378 kubelet[2771]: I0423 23:57:14.550156 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:57:14.840452 systemd-networkd[1490]: vxlan.calico: Link UP Apr 23 23:57:14.840461 systemd-networkd[1490]: vxlan.calico: Gained carrier Apr 23 23:57:15.956097 systemd-networkd[1490]: cali8d534b3330b: Gained IPv6LL Apr 23 23:57:16.048698 containerd[1624]: time="2026-04-23T23:57:16.048638392Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:16.049699 containerd[1624]: time="2026-04-23T23:57:16.049590139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 23 23:57:16.050446 containerd[1624]: time="2026-04-23T23:57:16.050416026Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:16.052024 containerd[1624]: time="2026-04-23T23:57:16.052002104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:16.052450 containerd[1624]: time="2026-04-23T23:57:16.052434571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 1.708334205s" Apr 23 23:57:16.052507 containerd[1624]: time="2026-04-23T23:57:16.052498056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 23 23:57:16.060083 containerd[1624]: time="2026-04-23T23:57:16.060052999Z" level=info msg="CreateContainer within sandbox \"ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 23 23:57:16.068937 containerd[1624]: time="2026-04-23T23:57:16.068902032Z" level=info msg="Container 5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534: CDI devices from CRI 
Config.CDIDevices: []" Apr 23 23:57:16.074872 containerd[1624]: time="2026-04-23T23:57:16.074841185Z" level=info msg="CreateContainer within sandbox \"ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534\"" Apr 23 23:57:16.075401 containerd[1624]: time="2026-04-23T23:57:16.075387525Z" level=info msg="StartContainer for \"5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534\"" Apr 23 23:57:16.076348 containerd[1624]: time="2026-04-23T23:57:16.076304316Z" level=info msg="connecting to shim 5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534" address="unix:///run/containerd/s/b192277052370e40528ac4b48961e2c5998cb5113406f788cfd310fedf080d45" protocol=ttrpc version=3 Apr 23 23:57:16.098022 systemd[1]: Started cri-containerd-5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534.scope - libcontainer container 5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534. Apr 23 23:57:16.141028 containerd[1624]: time="2026-04-23T23:57:16.140675103Z" level=info msg="StartContainer for \"5e5caa6ebca950b3d0b0bafbcf19e75af63fdd0267752e162cbda50806b79534\" returns successfully" Apr 23 23:57:16.143555 containerd[1624]: time="2026-04-23T23:57:16.143526610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 23 23:57:16.340225 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL Apr 23 23:57:17.920351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095511048.mount: Deactivated successfully. 
Apr 23 23:57:17.943637 containerd[1624]: time="2026-04-23T23:57:17.943583042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:17.944811 containerd[1624]: time="2026-04-23T23:57:17.944788422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 23 23:57:17.946233 containerd[1624]: time="2026-04-23T23:57:17.946204243Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:17.949430 containerd[1624]: time="2026-04-23T23:57:17.948923385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:17.949430 containerd[1624]: time="2026-04-23T23:57:17.949340445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 1.805654793s" Apr 23 23:57:17.949430 containerd[1624]: time="2026-04-23T23:57:17.949360568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 23 23:57:17.953368 containerd[1624]: time="2026-04-23T23:57:17.953318110Z" level=info msg="CreateContainer within sandbox \"ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 23 23:57:17.961906 
containerd[1624]: time="2026-04-23T23:57:17.959638304Z" level=info msg="Container 59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:17.971718 containerd[1624]: time="2026-04-23T23:57:17.971679498Z" level=info msg="CreateContainer within sandbox \"ae51e2872a1f65bb2f60252e0fddba42094e612f0ae8e233d30aa5c3c48638af\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154\"" Apr 23 23:57:17.972318 containerd[1624]: time="2026-04-23T23:57:17.972281512Z" level=info msg="StartContainer for \"59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154\"" Apr 23 23:57:17.973262 containerd[1624]: time="2026-04-23T23:57:17.973237658Z" level=info msg="connecting to shim 59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154" address="unix:///run/containerd/s/b192277052370e40528ac4b48961e2c5998cb5113406f788cfd310fedf080d45" protocol=ttrpc version=3 Apr 23 23:57:17.993058 systemd[1]: Started cri-containerd-59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154.scope - libcontainer container 59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154. 
Apr 23 23:57:18.042790 containerd[1624]: time="2026-04-23T23:57:18.042758070Z" level=info msg="StartContainer for \"59c177694481b059d355e846ce0e46e65ad672d406808601092040eeac68a154\" returns successfully" Apr 23 23:57:23.395570 containerd[1624]: time="2026-04-23T23:57:23.395486126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-pt9gl,Uid:a66e730c-9aaf-4f39-b236-39ac8befa8f6,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:23.397610 containerd[1624]: time="2026-04-23T23:57:23.397467380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6546f8485-tbb65,Uid:e650d354-ca28-40b7-8761-4ac249d5c96d,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:23.530052 systemd-networkd[1490]: cali14638752bd1: Link UP Apr 23 23:57:23.531098 systemd-networkd[1490]: cali14638752bd1: Gained carrier Apr 23 23:57:23.541517 kubelet[2771]: I0423 23:57:23.541416 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6986c987f8-6d97h" podStartSLOduration=6.9348959279999995 podStartE2EDuration="10.541401169s" podCreationTimestamp="2026-04-23 23:57:13 +0000 UTC" firstStartedPulling="2026-04-23 23:57:14.34348836 +0000 UTC m=+34.043068353" lastFinishedPulling="2026-04-23 23:57:17.949993601 +0000 UTC m=+37.649573594" observedRunningTime="2026-04-23 23:57:18.590732726 +0000 UTC m=+38.290312759" watchObservedRunningTime="2026-04-23 23:57:23.541401169 +0000 UTC m=+43.240981152" Apr 23 23:57:23.548641 containerd[1624]: 2026-04-23 23:57:23.464 [INFO][4303] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0 calico-apiserver-66bff7cfcc- calico-system a66e730c-9aaf-4f39-b236-39ac8befa8f6 853 0 2026-04-23 23:56:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66bff7cfcc 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 calico-apiserver-66bff7cfcc-pt9gl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali14638752bd1 [] [] }} ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-" Apr 23 23:57:23.548641 containerd[1624]: 2026-04-23 23:57:23.464 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.548641 containerd[1624]: 2026-04-23 23:57:23.490 [INFO][4325] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" HandleID="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.497 [INFO][4325] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" HandleID="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285a70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"calico-apiserver-66bff7cfcc-pt9gl", "timestamp":"2026-04-23 23:57:23.490639524 +0000 UTC"}, 
Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000220dc0)} Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.497 [INFO][4325] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.497 [INFO][4325] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.497 [INFO][4325] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.499 [INFO][4325] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.506 [INFO][4325] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.510 [INFO][4325] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.512 [INFO][4325] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549130 containerd[1624]: 2026-04-23 23:57:23.514 [INFO][4325] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549308 containerd[1624]: 2026-04-23 23:57:23.514 [INFO][4325] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549308 
containerd[1624]: 2026-04-23 23:57:23.515 [INFO][4325] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a Apr 23 23:57:23.549308 containerd[1624]: 2026-04-23 23:57:23.520 [INFO][4325] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549308 containerd[1624]: 2026-04-23 23:57:23.523 [INFO][4325] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.32.2/26] block=192.168.32.0/26 handle="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549308 containerd[1624]: 2026-04-23 23:57:23.523 [INFO][4325] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.2/26] handle="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.549308 containerd[1624]: 2026-04-23 23:57:23.523 [INFO][4325] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 23 23:57:23.549308 containerd[1624]: 2026-04-23 23:57:23.523 [INFO][4325] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.2/26] IPv6=[] ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" HandleID="k8s-pod-network.4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.549411 containerd[1624]: 2026-04-23 23:57:23.527 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0", GenerateName:"calico-apiserver-66bff7cfcc-", Namespace:"calico-system", SelfLink:"", UID:"a66e730c-9aaf-4f39-b236-39ac8befa8f6", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bff7cfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"calico-apiserver-66bff7cfcc-pt9gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali14638752bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:23.549448 containerd[1624]: 2026-04-23 23:57:23.527 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.2/32] ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.549448 containerd[1624]: 2026-04-23 23:57:23.527 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14638752bd1 ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.549448 containerd[1624]: 2026-04-23 23:57:23.531 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.549507 containerd[1624]: 2026-04-23 23:57:23.531 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0", GenerateName:"calico-apiserver-66bff7cfcc-", Namespace:"calico-system", SelfLink:"", UID:"a66e730c-9aaf-4f39-b236-39ac8befa8f6", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bff7cfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a", Pod:"calico-apiserver-66bff7cfcc-pt9gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali14638752bd1", MAC:"92:75:b8:48:b7:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:23.549544 containerd[1624]: 2026-04-23 23:57:23.543 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-pt9gl" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--pt9gl-eth0" Apr 23 23:57:23.574688 containerd[1624]: time="2026-04-23T23:57:23.574291405Z" level=info 
msg="connecting to shim 4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a" address="unix:///run/containerd/s/3d2adaaefde9299d1d8321129a9e6bef24111a74f235aa53f380b12dd2de9e45" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:23.596028 systemd[1]: Started cri-containerd-4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a.scope - libcontainer container 4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a. Apr 23 23:57:23.645611 systemd-networkd[1490]: caliea9eedf69c3: Link UP Apr 23 23:57:23.646373 systemd-networkd[1490]: caliea9eedf69c3: Gained carrier Apr 23 23:57:23.663267 containerd[1624]: time="2026-04-23T23:57:23.663104310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-pt9gl,Uid:a66e730c-9aaf-4f39-b236-39ac8befa8f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a\"" Apr 23 23:57:23.666657 containerd[1624]: time="2026-04-23T23:57:23.666625085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 23 23:57:23.672672 containerd[1624]: 2026-04-23 23:57:23.464 [INFO][4301] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0 calico-kube-controllers-6546f8485- calico-system e650d354-ca28-40b7-8761-4ac249d5c96d 852 0 2026-04-23 23:56:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6546f8485 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 calico-kube-controllers-6546f8485-tbb65 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea9eedf69c3 [] [] }} 
ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-" Apr 23 23:57:23.672672 containerd[1624]: 2026-04-23 23:57:23.464 [INFO][4301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.672672 containerd[1624]: 2026-04-23 23:57:23.498 [INFO][4327] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" HandleID="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.505 [INFO][4327] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" HandleID="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"calico-kube-controllers-6546f8485-tbb65", "timestamp":"2026-04-23 23:57:23.498694486 +0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000250dc0)} Apr 23 
23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.506 [INFO][4327] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.523 [INFO][4327] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.524 [INFO][4327] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.601 [INFO][4327] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.607 [INFO][4327] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.613 [INFO][4327] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.615 [INFO][4327] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.672917 containerd[1624]: 2026-04-23 23:57:23.617 [INFO][4327] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.617 [INFO][4327] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.619 [INFO][4327] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075 Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.624 [INFO][4327] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.32.0/26 handle="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.633 [INFO][4327] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.32.3/26] block=192.168.32.0/26 handle="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.635 [INFO][4327] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.3/26] handle="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.635 [INFO][4327] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:57:23.673079 containerd[1624]: 2026-04-23 23:57:23.635 [INFO][4327] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.3/26] IPv6=[] ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" HandleID="k8s-pod-network.336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.673221 containerd[1624]: 2026-04-23 23:57:23.641 [INFO][4301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0", GenerateName:"calico-kube-controllers-6546f8485-", Namespace:"calico-system", SelfLink:"", 
UID:"e650d354-ca28-40b7-8761-4ac249d5c96d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6546f8485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"calico-kube-controllers-6546f8485-tbb65", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea9eedf69c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:23.673264 containerd[1624]: 2026-04-23 23:57:23.641 [INFO][4301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.3/32] ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.673264 containerd[1624]: 2026-04-23 23:57:23.641 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea9eedf69c3 ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" 
WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.673264 containerd[1624]: 2026-04-23 23:57:23.650 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.673319 containerd[1624]: 2026-04-23 23:57:23.653 [INFO][4301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0", GenerateName:"calico-kube-controllers-6546f8485-", Namespace:"calico-system", SelfLink:"", UID:"e650d354-ca28-40b7-8761-4ac249d5c96d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6546f8485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", 
ContainerID:"336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075", Pod:"calico-kube-controllers-6546f8485-tbb65", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea9eedf69c3", MAC:"ea:1c:df:ea:72:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:23.673368 containerd[1624]: 2026-04-23 23:57:23.669 [INFO][4301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" Namespace="calico-system" Pod="calico-kube-controllers-6546f8485-tbb65" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--kube--controllers--6546f8485--tbb65-eth0" Apr 23 23:57:23.698563 containerd[1624]: time="2026-04-23T23:57:23.698510541Z" level=info msg="connecting to shim 336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075" address="unix:///run/containerd/s/d3e1ff9d0e6cc9c6b89760cf9a1fdc7ca77e0e7fa0d5f3da17ca53f6e2aa8151" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:23.720086 systemd[1]: Started cri-containerd-336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075.scope - libcontainer container 336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075. 
Apr 23 23:57:23.765507 containerd[1624]: time="2026-04-23T23:57:23.765451961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6546f8485-tbb65,Uid:e650d354-ca28-40b7-8761-4ac249d5c96d,Namespace:calico-system,Attempt:0,} returns sandbox id \"336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075\"" Apr 23 23:57:24.980161 systemd-networkd[1490]: cali14638752bd1: Gained IPv6LL Apr 23 23:57:25.394699 containerd[1624]: time="2026-04-23T23:57:25.394658646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-6q8w2,Uid:34c340b2-ffd3-4daf-b4fb-7bf28657b5c1,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:25.516540 systemd-networkd[1490]: cali08040351108: Link UP Apr 23 23:57:25.517489 systemd-networkd[1490]: cali08040351108: Gained carrier Apr 23 23:57:25.542425 containerd[1624]: 2026-04-23 23:57:25.432 [INFO][4460] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0 calico-apiserver-66bff7cfcc- calico-system 34c340b2-ffd3-4daf-b4fb-7bf28657b5c1 855 0 2026-04-23 23:56:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66bff7cfcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 calico-apiserver-66bff7cfcc-6q8w2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali08040351108 [] [] }} ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-" Apr 23 23:57:25.542425 containerd[1624]: 2026-04-23 23:57:25.433 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.542425 containerd[1624]: 2026-04-23 23:57:25.464 [INFO][4472] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" HandleID="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.471 [INFO][4472] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" HandleID="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"calico-apiserver-66bff7cfcc-6q8w2", "timestamp":"2026-04-23 23:57:25.46442137 +0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030e420)} Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.471 [INFO][4472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.471 [INFO][4472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.471 [INFO][4472] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.474 [INFO][4472] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.479 [INFO][4472] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.486 [INFO][4472] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.490 [INFO][4472] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542616 containerd[1624]: 2026-04-23 23:57:25.493 [INFO][4472] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.493 [INFO][4472] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.495 [INFO][4472] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046 Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.500 [INFO][4472] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.508 [INFO][4472] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.32.4/26] block=192.168.32.0/26 handle="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.508 [INFO][4472] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.4/26] handle="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.508 [INFO][4472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:57:25.542766 containerd[1624]: 2026-04-23 23:57:25.508 [INFO][4472] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.4/26] IPv6=[] ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" HandleID="k8s-pod-network.af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Workload="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.544049 containerd[1624]: 2026-04-23 23:57:25.512 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0", GenerateName:"calico-apiserver-66bff7cfcc-", Namespace:"calico-system", SelfLink:"", UID:"34c340b2-ffd3-4daf-b4fb-7bf28657b5c1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"66bff7cfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"calico-apiserver-66bff7cfcc-6q8w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali08040351108", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:25.544103 containerd[1624]: 2026-04-23 23:57:25.513 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.4/32] ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.544103 containerd[1624]: 2026-04-23 23:57:25.513 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08040351108 ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.544103 containerd[1624]: 2026-04-23 23:57:25.519 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" 
WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.544152 containerd[1624]: 2026-04-23 23:57:25.519 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0", GenerateName:"calico-apiserver-66bff7cfcc-", Namespace:"calico-system", SelfLink:"", UID:"34c340b2-ffd3-4daf-b4fb-7bf28657b5c1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bff7cfcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046", Pod:"calico-apiserver-66bff7cfcc-6q8w2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali08040351108", MAC:"12:5a:6b:ef:cf:6f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:25.544193 containerd[1624]: 2026-04-23 23:57:25.532 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" Namespace="calico-system" Pod="calico-apiserver-66bff7cfcc-6q8w2" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-calico--apiserver--66bff7cfcc--6q8w2-eth0" Apr 23 23:57:25.580332 containerd[1624]: time="2026-04-23T23:57:25.580278056Z" level=info msg="connecting to shim af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046" address="unix:///run/containerd/s/6082b291f309ceebb04e69b7507e24b4d0876052ea1f8bb9fcb765705ac287f9" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:25.617284 systemd[1]: Started cri-containerd-af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046.scope - libcontainer container af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046. 
Apr 23 23:57:25.620056 systemd-networkd[1490]: caliea9eedf69c3: Gained IPv6LL Apr 23 23:57:25.682782 containerd[1624]: time="2026-04-23T23:57:25.682749103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bff7cfcc-6q8w2,Uid:34c340b2-ffd3-4daf-b4fb-7bf28657b5c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046\"" Apr 23 23:57:25.964436 containerd[1624]: time="2026-04-23T23:57:25.964240981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:25.965346 containerd[1624]: time="2026-04-23T23:57:25.965196433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 23 23:57:25.966370 containerd[1624]: time="2026-04-23T23:57:25.966345439Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:25.968049 containerd[1624]: time="2026-04-23T23:57:25.968020489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:25.968763 containerd[1624]: time="2026-04-23T23:57:25.968413610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 2.301601003s" Apr 23 23:57:25.968763 containerd[1624]: time="2026-04-23T23:57:25.968435615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image 
reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 23 23:57:25.969368 containerd[1624]: time="2026-04-23T23:57:25.969344547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 23 23:57:25.973469 containerd[1624]: time="2026-04-23T23:57:25.973444869Z" level=info msg="CreateContainer within sandbox \"4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 23 23:57:25.987063 containerd[1624]: time="2026-04-23T23:57:25.985643945Z" level=info msg="Container 00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:25.987438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346860299.mount: Deactivated successfully. Apr 23 23:57:25.997018 containerd[1624]: time="2026-04-23T23:57:25.996420780Z" level=info msg="CreateContainer within sandbox \"4f7bb3fe2e5dbf4904bbf5d032ea21c7d8c17f611158d7fb886c5cb0d68cd19a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8\"" Apr 23 23:57:26.005780 containerd[1624]: time="2026-04-23T23:57:25.998573081Z" level=info msg="StartContainer for \"00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8\"" Apr 23 23:57:26.005780 containerd[1624]: time="2026-04-23T23:57:25.999416906Z" level=info msg="connecting to shim 00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8" address="unix:///run/containerd/s/3d2adaaefde9299d1d8321129a9e6bef24111a74f235aa53f380b12dd2de9e45" protocol=ttrpc version=3 Apr 23 23:57:26.025994 systemd[1]: Started cri-containerd-00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8.scope - libcontainer container 00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8. 
Apr 23 23:57:26.075828 containerd[1624]: time="2026-04-23T23:57:26.075777751Z" level=info msg="StartContainer for \"00941751917f76bd6f7628f52569c659ceb6318c47ce5f65e24380518e7f44e8\" returns successfully" Apr 23 23:57:26.396431 containerd[1624]: time="2026-04-23T23:57:26.396307269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-d5q98,Uid:a661017b-2fbc-42b5-a3f9-0f24590987d7,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:26.397697 containerd[1624]: time="2026-04-23T23:57:26.397665793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-67d6p,Uid:8db96913-915b-48c7-b627-ed66e3472900,Namespace:kube-system,Attempt:0,}" Apr 23 23:57:26.541837 systemd-networkd[1490]: calif0816314522: Link UP Apr 23 23:57:26.543557 systemd-networkd[1490]: calif0816314522: Gained carrier Apr 23 23:57:26.571743 containerd[1624]: 2026-04-23 23:57:26.450 [INFO][4571] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0 goldmane-6b4b7f4496- calico-system a661017b-2fbc-42b5-a3f9-0f24590987d7 854 0 2026-04-23 23:56:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:6b4b7f4496 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 goldmane-6b4b7f4496-d5q98 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif0816314522 [] [] }} ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-" Apr 23 23:57:26.571743 containerd[1624]: 2026-04-23 23:57:26.451 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" 
Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.571743 containerd[1624]: 2026-04-23 23:57:26.491 [INFO][4599] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" HandleID="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Workload="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.497 [INFO][4599] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" HandleID="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Workload="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"goldmane-6b4b7f4496-d5q98", "timestamp":"2026-04-23 23:57:26.491088638 +0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00024edc0)} Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.497 [INFO][4599] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.497 [INFO][4599] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.497 [INFO][4599] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.501 [INFO][4599] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.507 [INFO][4599] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.513 [INFO][4599] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.515 [INFO][4599] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.571955 containerd[1624]: 2026-04-23 23:57:26.519 [INFO][4599] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.519 [INFO][4599] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.521 [INFO][4599] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1 Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.526 [INFO][4599] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.533 [INFO][4599] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.32.5/26] block=192.168.32.0/26 handle="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.533 [INFO][4599] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.5/26] handle="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.533 [INFO][4599] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:57:26.572123 containerd[1624]: 2026-04-23 23:57:26.534 [INFO][4599] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.5/26] IPv6=[] ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" HandleID="k8s-pod-network.0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Workload="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.572244 containerd[1624]: 2026-04-23 23:57:26.536 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"a661017b-2fbc-42b5-a3f9-0f24590987d7", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"goldmane-6b4b7f4496-d5q98", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0816314522", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:26.572282 containerd[1624]: 2026-04-23 23:57:26.536 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.5/32] ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.572282 containerd[1624]: 2026-04-23 23:57:26.536 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0816314522 ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.572282 containerd[1624]: 2026-04-23 23:57:26.545 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.572330 containerd[1624]: 2026-04-23 23:57:26.546 [INFO][4571] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"a661017b-2fbc-42b5-a3f9-0f24590987d7", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1", Pod:"goldmane-6b4b7f4496-d5q98", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0816314522", MAC:"da:16:35:3a:69:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:26.572368 containerd[1624]: 2026-04-23 23:57:26.568 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" Namespace="calico-system" Pod="goldmane-6b4b7f4496-d5q98" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-goldmane--6b4b7f4496--d5q98-eth0" Apr 23 23:57:26.603858 containerd[1624]: time="2026-04-23T23:57:26.603789495Z" level=info msg="connecting to shim 0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1" address="unix:///run/containerd/s/fc4a28dc90372deaa99534877321a188986d2e6c960e5ff5879457e6bbb1f368" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:26.645114 systemd-networkd[1490]: cali08040351108: Gained IPv6LL Apr 23 23:57:26.653135 systemd[1]: Started cri-containerd-0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1.scope - libcontainer container 0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1. Apr 23 23:57:26.704157 systemd-networkd[1490]: calif6bd980115c: Link UP Apr 23 23:57:26.705419 systemd-networkd[1490]: calif6bd980115c: Gained carrier Apr 23 23:57:26.721808 kubelet[2771]: I0423 23:57:26.721749 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-66bff7cfcc-pt9gl" podStartSLOduration=29.417404892 podStartE2EDuration="31.721734783s" podCreationTimestamp="2026-04-23 23:56:55 +0000 UTC" firstStartedPulling="2026-04-23 23:57:23.664840896 +0000 UTC m=+43.364420889" lastFinishedPulling="2026-04-23 23:57:25.969170797 +0000 UTC m=+45.668750780" observedRunningTime="2026-04-23 23:57:26.616823178 +0000 UTC m=+46.316403171" watchObservedRunningTime="2026-04-23 23:57:26.721734783 +0000 UTC m=+46.421314766" Apr 23 23:57:26.729169 containerd[1624]: 2026-04-23 23:57:26.461 [INFO][4574] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0 coredns-66bc5c9577- kube-system 8db96913-915b-48c7-b627-ed66e3472900 851 0 2026-04-23 23:56:46 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 coredns-66bc5c9577-67d6p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif6bd980115c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-" Apr 23 23:57:26.729169 containerd[1624]: 2026-04-23 23:57:26.461 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 23:57:26.729169 containerd[1624]: 2026-04-23 23:57:26.509 [INFO][4604] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" HandleID="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Workload="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.515 [INFO][4604] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" HandleID="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Workload="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f33a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"coredns-66bc5c9577-67d6p", "timestamp":"2026-04-23 23:57:26.509018877 
+0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000262f20)} Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.516 [INFO][4604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.534 [INFO][4604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.534 [INFO][4604] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.602 [INFO][4604] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.623 [INFO][4604] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.642 [INFO][4604] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.650 [INFO][4604] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729567 containerd[1624]: 2026-04-23 23:57:26.659 [INFO][4604] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729725 containerd[1624]: 2026-04-23 23:57:26.660 [INFO][4604] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729725 
containerd[1624]: 2026-04-23 23:57:26.674 [INFO][4604] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b Apr 23 23:57:26.729725 containerd[1624]: 2026-04-23 23:57:26.687 [INFO][4604] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729725 containerd[1624]: 2026-04-23 23:57:26.697 [INFO][4604] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.32.6/26] block=192.168.32.0/26 handle="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729725 containerd[1624]: 2026-04-23 23:57:26.697 [INFO][4604] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.6/26] handle="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:26.729725 containerd[1624]: 2026-04-23 23:57:26.697 [INFO][4604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 23 23:57:26.729725 containerd[1624]: 2026-04-23 23:57:26.697 [INFO][4604] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.6/26] IPv6=[] ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" HandleID="k8s-pod-network.2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Workload="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 23:57:26.729839 containerd[1624]: 2026-04-23 23:57:26.700 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8db96913-915b-48c7-b627-ed66e3472900", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"coredns-66bc5c9577-67d6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calif6bd980115c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:26.729839 containerd[1624]: 2026-04-23 23:57:26.700 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.6/32] ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 23:57:26.729839 containerd[1624]: 2026-04-23 23:57:26.701 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6bd980115c ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 23:57:26.729839 containerd[1624]: 2026-04-23 23:57:26.708 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 
23:57:26.729839 containerd[1624]: 2026-04-23 23:57:26.709 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8db96913-915b-48c7-b627-ed66e3472900", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b", Pod:"coredns-66bc5c9577-67d6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6bd980115c", MAC:"32:08:ba:32:63:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:26.730047 containerd[1624]: 2026-04-23 23:57:26.722 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" Namespace="kube-system" Pod="coredns-66bc5c9577-67d6p" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--67d6p-eth0" Apr 23 23:57:26.766137 containerd[1624]: time="2026-04-23T23:57:26.766034549Z" level=info msg="connecting to shim 2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b" address="unix:///run/containerd/s/57edf25f8dcf1b53cf7895e231742e6996f473949afc0cfbfe34faf57cda274c" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:26.792078 containerd[1624]: time="2026-04-23T23:57:26.792015062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-d5q98,Uid:a661017b-2fbc-42b5-a3f9-0f24590987d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1\"" Apr 23 23:57:26.818112 systemd[1]: Started cri-containerd-2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b.scope - libcontainer container 2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b. 
Apr 23 23:57:26.868305 containerd[1624]: time="2026-04-23T23:57:26.868206284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-67d6p,Uid:8db96913-915b-48c7-b627-ed66e3472900,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b\"" Apr 23 23:57:26.875545 containerd[1624]: time="2026-04-23T23:57:26.875490172Z" level=info msg="CreateContainer within sandbox \"2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 23 23:57:26.887260 containerd[1624]: time="2026-04-23T23:57:26.887076528Z" level=info msg="Container 2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:26.892060 containerd[1624]: time="2026-04-23T23:57:26.892016310Z" level=info msg="CreateContainer within sandbox \"2d386259f4756a938a12180ada0592715f01ce2045abb380cff08100440a586b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269\"" Apr 23 23:57:26.892588 containerd[1624]: time="2026-04-23T23:57:26.892565720Z" level=info msg="StartContainer for \"2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269\"" Apr 23 23:57:26.893497 containerd[1624]: time="2026-04-23T23:57:26.893432113Z" level=info msg="connecting to shim 2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269" address="unix:///run/containerd/s/57edf25f8dcf1b53cf7895e231742e6996f473949afc0cfbfe34faf57cda274c" protocol=ttrpc version=3 Apr 23 23:57:26.912047 systemd[1]: Started cri-containerd-2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269.scope - libcontainer container 2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269. 
Apr 23 23:57:26.944433 containerd[1624]: time="2026-04-23T23:57:26.944395696Z" level=info msg="StartContainer for \"2c2f427cf4bb8ab035179803b7ae843fb09ecbfeaf5205147837c7b714dee269\" returns successfully" Apr 23 23:57:27.394591 containerd[1624]: time="2026-04-23T23:57:27.394410274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ljdsw,Uid:d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e,Namespace:kube-system,Attempt:0,}" Apr 23 23:57:27.396743 containerd[1624]: time="2026-04-23T23:57:27.396508150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zj6q,Uid:3315e263-4733-4c99-bf02-e78b8d559ae1,Namespace:calico-system,Attempt:0,}" Apr 23 23:57:27.550966 systemd-networkd[1490]: calia9a34224edb: Link UP Apr 23 23:57:27.551231 systemd-networkd[1490]: calia9a34224edb: Gained carrier Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.450 [INFO][4770] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0 csi-node-driver- calico-system 3315e263-4733-4c99-bf02-e78b8d559ae1 713 0 2026-04-23 23:56:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:95f96f7df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 csi-node-driver-6zj6q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia9a34224edb [] [] }} ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.452 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.503 [INFO][4786] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" HandleID="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Workload="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.511 [INFO][4786] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" HandleID="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Workload="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"csi-node-driver-6zj6q", "timestamp":"2026-04-23 23:57:27.503084838 +0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000434dc0)} Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.511 [INFO][4786] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.511 [INFO][4786] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.511 [INFO][4786] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.514 [INFO][4786] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.522 [INFO][4786] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.526 [INFO][4786] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.529 [INFO][4786] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.532 [INFO][4786] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.532 [INFO][4786] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.535 [INFO][4786] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.539 [INFO][4786] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.546 [INFO][4786] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.32.7/26] block=192.168.32.0/26 handle="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.546 [INFO][4786] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.7/26] handle="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.546 [INFO][4786] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:57:27.565830 containerd[1624]: 2026-04-23 23:57:27.546 [INFO][4786] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.7/26] IPv6=[] ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" HandleID="k8s-pod-network.e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Workload="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.566719 containerd[1624]: 2026-04-23 23:57:27.548 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3315e263-4733-4c99-bf02-e78b8d559ae1", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"csi-node-driver-6zj6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9a34224edb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:27.566719 containerd[1624]: 2026-04-23 23:57:27.548 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.7/32] ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.566719 containerd[1624]: 2026-04-23 23:57:27.548 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9a34224edb ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.566719 containerd[1624]: 2026-04-23 23:57:27.550 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.566719 containerd[1624]: 2026-04-23 
23:57:27.551 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3315e263-4733-4c99-bf02-e78b8d559ae1", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f", Pod:"csi-node-driver-6zj6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia9a34224edb", MAC:"f2:d9:d2:d4:8d:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:27.566719 containerd[1624]: 2026-04-23 23:57:27.561 
[INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" Namespace="calico-system" Pod="csi-node-driver-6zj6q" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-csi--node--driver--6zj6q-eth0" Apr 23 23:57:27.604153 containerd[1624]: time="2026-04-23T23:57:27.604103032Z" level=info msg="connecting to shim e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f" address="unix:///run/containerd/s/bfe419a3a87751cbd4115013cdd32d08d15f0be3ab365b35025a9ce5688df7cd" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:27.606863 kubelet[2771]: I0423 23:57:27.606268 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:57:27.631608 kubelet[2771]: I0423 23:57:27.631531 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-67d6p" podStartSLOduration=41.631512571 podStartE2EDuration="41.631512571s" podCreationTimestamp="2026-04-23 23:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:57:27.630070547 +0000 UTC m=+47.329650530" watchObservedRunningTime="2026-04-23 23:57:27.631512571 +0000 UTC m=+47.331092554" Apr 23 23:57:27.646121 systemd[1]: Started cri-containerd-e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f.scope - libcontainer container e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f. 
Apr 23 23:57:27.689441 systemd-networkd[1490]: cali237a710739d: Link UP Apr 23 23:57:27.692271 systemd-networkd[1490]: cali237a710739d: Gained carrier Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.466 [INFO][4762] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0 coredns-66bc5c9577- kube-system d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e 850 0 2026-04-23 23:56:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-19674e2966 coredns-66bc5c9577-ljdsw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali237a710739d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.466 [INFO][4762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.515 [INFO][4793] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" HandleID="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Workload="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.521 [INFO][4793] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" HandleID="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Workload="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-19674e2966", "pod":"coredns-66bc5c9577-ljdsw", "timestamp":"2026-04-23 23:57:27.515358095 +0000 UTC"}, Hostname:"ci-4459-2-4-n-19674e2966", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d91e0)} Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.521 [INFO][4793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.546 [INFO][4793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.546 [INFO][4793] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-19674e2966' Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.619 [INFO][4793] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.639 [INFO][4793] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.659 [INFO][4793] ipam/ipam.go 526: Trying affinity for 192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.663 [INFO][4793] ipam/ipam.go 160: Attempting to load block cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.667 [INFO][4793] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.667 [INFO][4793] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.671 [INFO][4793] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48 Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.675 [INFO][4793] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.683 [INFO][4793] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.32.8/26] block=192.168.32.0/26 handle="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.684 [INFO][4793] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.32.8/26] handle="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" host="ci-4459-2-4-n-19674e2966" Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.684 [INFO][4793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:57:27.718288 containerd[1624]: 2026-04-23 23:57:27.684 [INFO][4793] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.32.8/26] IPv6=[] ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" HandleID="k8s-pod-network.9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Workload="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.718800 containerd[1624]: 2026-04-23 23:57:27.686 [INFO][4762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", ContainerID:"", Pod:"coredns-66bc5c9577-ljdsw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali237a710739d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:27.718800 containerd[1624]: 2026-04-23 23:57:27.686 [INFO][4762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.8/32] ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.718800 containerd[1624]: 2026-04-23 23:57:27.686 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali237a710739d 
ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.718800 containerd[1624]: 2026-04-23 23:57:27.689 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.718800 containerd[1624]: 2026-04-23 23:57:27.689 [INFO][4762] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-19674e2966", 
ContainerID:"9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48", Pod:"coredns-66bc5c9577-ljdsw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali237a710739d", MAC:"ce:99:d9:ee:35:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:57:27.718963 containerd[1624]: 2026-04-23 23:57:27.708 [INFO][4762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" Namespace="kube-system" Pod="coredns-66bc5c9577-ljdsw" WorkloadEndpoint="ci--4459--2--4--n--19674e2966-k8s-coredns--66bc5c9577--ljdsw-eth0" Apr 23 23:57:27.734055 containerd[1624]: time="2026-04-23T23:57:27.734008385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zj6q,Uid:3315e263-4733-4c99-bf02-e78b8d559ae1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f\"" Apr 23 23:57:27.753907 containerd[1624]: time="2026-04-23T23:57:27.753407339Z" level=info 
msg="connecting to shim 9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48" address="unix:///run/containerd/s/ea28dc0fe6989b5818a808a6193e7526ca373502f9f77591eb86cc849fac2c12" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:57:27.782042 systemd[1]: Started cri-containerd-9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48.scope - libcontainer container 9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48. Apr 23 23:57:27.824668 containerd[1624]: time="2026-04-23T23:57:27.824594538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ljdsw,Uid:d7b6dbcf-3ca9-4ad6-909c-215d1ee1581e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48\"" Apr 23 23:57:27.829174 containerd[1624]: time="2026-04-23T23:57:27.829134923Z" level=info msg="CreateContainer within sandbox \"9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 23 23:57:27.837390 containerd[1624]: time="2026-04-23T23:57:27.837353181Z" level=info msg="Container 73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:27.842076 containerd[1624]: time="2026-04-23T23:57:27.842047920Z" level=info msg="CreateContainer within sandbox \"9e9eb177923365f89483f81c89fe4f927b87b1f303cfd0285c0e17bf2dd10b48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41\"" Apr 23 23:57:27.842519 containerd[1624]: time="2026-04-23T23:57:27.842482828Z" level=info msg="StartContainer for \"73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41\"" Apr 23 23:57:27.843224 containerd[1624]: time="2026-04-23T23:57:27.843183356Z" level=info msg="connecting to shim 73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41" 
address="unix:///run/containerd/s/ea28dc0fe6989b5818a808a6193e7526ca373502f9f77591eb86cc849fac2c12" protocol=ttrpc version=3 Apr 23 23:57:27.861024 systemd[1]: Started cri-containerd-73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41.scope - libcontainer container 73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41. Apr 23 23:57:27.890682 containerd[1624]: time="2026-04-23T23:57:27.890621366Z" level=info msg="StartContainer for \"73c60641f8925cdfcadef53da66491e48377a8cc5569968a749d046b6858cb41\" returns successfully" Apr 23 23:57:28.308512 systemd-networkd[1490]: calif0816314522: Gained IPv6LL Apr 23 23:57:28.309715 systemd-networkd[1490]: calif6bd980115c: Gained IPv6LL Apr 23 23:57:28.645059 kubelet[2771]: I0423 23:57:28.644507 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ljdsw" podStartSLOduration=42.644468507 podStartE2EDuration="42.644468507s" podCreationTimestamp="2026-04-23 23:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:57:28.643826372 +0000 UTC m=+48.343406425" watchObservedRunningTime="2026-04-23 23:57:28.644468507 +0000 UTC m=+48.344048530" Apr 23 23:57:29.396034 systemd-networkd[1490]: calia9a34224edb: Gained IPv6LL Apr 23 23:57:29.524138 systemd-networkd[1490]: cali237a710739d: Gained IPv6LL Apr 23 23:57:30.103333 containerd[1624]: time="2026-04-23T23:57:30.103279106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:30.104509 containerd[1624]: time="2026-04-23T23:57:30.104423016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 23 23:57:30.105598 containerd[1624]: time="2026-04-23T23:57:30.105577968Z" level=info msg="ImageCreate event 
name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:30.107453 containerd[1624]: time="2026-04-23T23:57:30.107430761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:30.108896 containerd[1624]: time="2026-04-23T23:57:30.108462876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 4.139092063s" Apr 23 23:57:30.108983 containerd[1624]: time="2026-04-23T23:57:30.108958142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 23 23:57:30.109814 containerd[1624]: time="2026-04-23T23:57:30.109776709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 23 23:57:30.124118 containerd[1624]: time="2026-04-23T23:57:30.124079847Z" level=info msg="CreateContainer within sandbox \"336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 23 23:57:30.133147 containerd[1624]: time="2026-04-23T23:57:30.133110913Z" level=info msg="Container 932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:30.135679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970458154.mount: Deactivated successfully. 
Apr 23 23:57:30.141134 containerd[1624]: time="2026-04-23T23:57:30.141104612Z" level=info msg="CreateContainer within sandbox \"336b2d23399cb2be9bd356952e8caa379e4aeec93fbcc7a79a410eb2e6052075\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae\"" Apr 23 23:57:30.141498 containerd[1624]: time="2026-04-23T23:57:30.141484462Z" level=info msg="StartContainer for \"932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae\"" Apr 23 23:57:30.142362 containerd[1624]: time="2026-04-23T23:57:30.142278956Z" level=info msg="connecting to shim 932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae" address="unix:///run/containerd/s/d3e1ff9d0e6cc9c6b89760cf9a1fdc7ca77e0e7fa0d5f3da17ca53f6e2aa8151" protocol=ttrpc version=3 Apr 23 23:57:30.159006 systemd[1]: Started cri-containerd-932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae.scope - libcontainer container 932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae. 
Apr 23 23:57:30.208379 containerd[1624]: time="2026-04-23T23:57:30.208302205Z" level=info msg="StartContainer for \"932824121f859e6a0f99a56b4bd31b1f2cad12a0220ff1e7998a2c99868cc6ae\" returns successfully" Apr 23 23:57:30.606898 containerd[1624]: time="2026-04-23T23:57:30.606251116Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:57:30.611672 containerd[1624]: time="2026-04-23T23:57:30.611591507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 23 23:57:30.617261 containerd[1624]: time="2026-04-23T23:57:30.617200633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 507.400511ms" Apr 23 23:57:30.617261 containerd[1624]: time="2026-04-23T23:57:30.617263581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 23 23:57:30.618455 containerd[1624]: time="2026-04-23T23:57:30.618391430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 23 23:57:30.621904 containerd[1624]: time="2026-04-23T23:57:30.621717426Z" level=info msg="CreateContainer within sandbox \"af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 23 23:57:30.630790 containerd[1624]: time="2026-04-23T23:57:30.630721088Z" level=info msg="Container 86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:57:30.641453 containerd[1624]: 
time="2026-04-23T23:57:30.641348403Z" level=info msg="CreateContainer within sandbox \"af0ff449747c3ce6a77b7830bde87a14170a2189c0e48c96d281d6b9d65cd046\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391\"" Apr 23 23:57:30.642808 containerd[1624]: time="2026-04-23T23:57:30.642746377Z" level=info msg="StartContainer for \"86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391\"" Apr 23 23:57:30.647242 containerd[1624]: time="2026-04-23T23:57:30.647186230Z" level=info msg="connecting to shim 86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391" address="unix:///run/containerd/s/6082b291f309ceebb04e69b7507e24b4d0876052ea1f8bb9fcb765705ac287f9" protocol=ttrpc version=3 Apr 23 23:57:30.658159 kubelet[2771]: I0423 23:57:30.656737 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6546f8485-tbb65" podStartSLOduration=28.314035107 podStartE2EDuration="34.656723123s" podCreationTimestamp="2026-04-23 23:56:56 +0000 UTC" firstStartedPulling="2026-04-23 23:57:23.766845351 +0000 UTC m=+43.466425334" lastFinishedPulling="2026-04-23 23:57:30.109533357 +0000 UTC m=+49.809113350" observedRunningTime="2026-04-23 23:57:30.652257836 +0000 UTC m=+50.351837829" watchObservedRunningTime="2026-04-23 23:57:30.656723123 +0000 UTC m=+50.356303106" Apr 23 23:57:30.676069 systemd[1]: Started cri-containerd-86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391.scope - libcontainer container 86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391. 
Apr 23 23:57:30.732512 containerd[1624]: time="2026-04-23T23:57:30.732412200Z" level=info msg="StartContainer for \"86223771f18b0fb8baae1e1b8a44cda77f5cbeacd29c665a3559fa462ed2a391\" returns successfully"
Apr 23 23:57:31.697223 kubelet[2771]: I0423 23:57:31.697143 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-66bff7cfcc-6q8w2" podStartSLOduration=31.763076679 podStartE2EDuration="36.697122734s" podCreationTimestamp="2026-04-23 23:56:55 +0000 UTC" firstStartedPulling="2026-04-23 23:57:25.684145448 +0000 UTC m=+45.383725431" lastFinishedPulling="2026-04-23 23:57:30.618191483 +0000 UTC m=+50.317771486" observedRunningTime="2026-04-23 23:57:31.695635008 +0000 UTC m=+51.395215041" watchObservedRunningTime="2026-04-23 23:57:31.697122734 +0000 UTC m=+51.396702757"
Apr 23 23:57:32.574152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460822686.mount: Deactivated successfully.
Apr 23 23:57:32.919290 containerd[1624]: time="2026-04-23T23:57:32.919163419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:32.920644 containerd[1624]: time="2026-04-23T23:57:32.920477443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083"
Apr 23 23:57:32.921605 containerd[1624]: time="2026-04-23T23:57:32.921574472Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:32.923539 containerd[1624]: time="2026-04-23T23:57:32.923508740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:32.924007 containerd[1624]: time="2026-04-23T23:57:32.923977865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 2.305543159s"
Apr 23 23:57:32.924092 containerd[1624]: time="2026-04-23T23:57:32.924082727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\""
Apr 23 23:57:32.925600 containerd[1624]: time="2026-04-23T23:57:32.925463189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\""
Apr 23 23:57:32.928863 containerd[1624]: time="2026-04-23T23:57:32.928461213Z" level=info msg="CreateContainer within sandbox \"0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 23 23:57:32.939097 containerd[1624]: time="2026-04-23T23:57:32.939043237Z" level=info msg="Container a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:57:32.952114 containerd[1624]: time="2026-04-23T23:57:32.952058788Z" level=info msg="CreateContainer within sandbox \"0c55d4b25c0261600206792ea050ebbcec73aeca793365bfb85acbbee6caf7f1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b\""
Apr 23 23:57:32.952758 containerd[1624]: time="2026-04-23T23:57:32.952671371Z" level=info msg="StartContainer for \"a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b\""
Apr 23 23:57:32.953472 containerd[1624]: time="2026-04-23T23:57:32.953444402Z" level=info msg="connecting to shim a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b" address="unix:///run/containerd/s/fc4a28dc90372deaa99534877321a188986d2e6c960e5ff5879457e6bbb1f368" protocol=ttrpc version=3
Apr 23 23:57:32.981060 systemd[1]: Started cri-containerd-a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b.scope - libcontainer container a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b.
Apr 23 23:57:33.053804 containerd[1624]: time="2026-04-23T23:57:33.053765628Z" level=info msg="StartContainer for \"a1246e33cab4f887ceccd621832b958b503350715b273d5e2c895ebf0c48c27b\" returns successfully"
Apr 23 23:57:33.691317 kubelet[2771]: I0423 23:57:33.690580 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-6b4b7f4496-d5q98" podStartSLOduration=32.55966889 podStartE2EDuration="38.690559972s" podCreationTimestamp="2026-04-23 23:56:55 +0000 UTC" firstStartedPulling="2026-04-23 23:57:26.794222476 +0000 UTC m=+46.493802459" lastFinishedPulling="2026-04-23 23:57:32.925113548 +0000 UTC m=+52.624693541" observedRunningTime="2026-04-23 23:57:33.690429408 +0000 UTC m=+53.390009431" watchObservedRunningTime="2026-04-23 23:57:33.690559972 +0000 UTC m=+53.390139995"
Apr 23 23:57:34.579316 containerd[1624]: time="2026-04-23T23:57:34.579267682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:34.580242 containerd[1624]: time="2026-04-23T23:57:34.580098979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421"
Apr 23 23:57:34.580806 containerd[1624]: time="2026-04-23T23:57:34.580787403Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:34.582350 containerd[1624]: time="2026-04-23T23:57:34.582329875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:34.582760 containerd[1624]: time="2026-04-23T23:57:34.582745310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 1.657249867s"
Apr 23 23:57:34.582811 containerd[1624]: time="2026-04-23T23:57:34.582802506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\""
Apr 23 23:57:34.586232 containerd[1624]: time="2026-04-23T23:57:34.586202915Z" level=info msg="CreateContainer within sandbox \"e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 23 23:57:34.595900 containerd[1624]: time="2026-04-23T23:57:34.594288839Z" level=info msg="Container d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:57:34.602234 containerd[1624]: time="2026-04-23T23:57:34.602204656Z" level=info msg="CreateContainer within sandbox \"e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab\""
Apr 23 23:57:34.602695 containerd[1624]: time="2026-04-23T23:57:34.602587406Z" level=info msg="StartContainer for \"d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab\""
Apr 23 23:57:34.604340 containerd[1624]: time="2026-04-23T23:57:34.604320850Z" level=info msg="connecting to shim d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab" address="unix:///run/containerd/s/bfe419a3a87751cbd4115013cdd32d08d15f0be3ab365b35025a9ce5688df7cd" protocol=ttrpc version=3
Apr 23 23:57:34.625023 systemd[1]: Started cri-containerd-d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab.scope - libcontainer container d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab.
Apr 23 23:57:34.700813 containerd[1624]: time="2026-04-23T23:57:34.700746159Z" level=info msg="StartContainer for \"d7956e998290b5daf8ff4287d4c7a481c5b59ab231f24a4a920820a663448cab\" returns successfully"
Apr 23 23:57:34.702469 containerd[1624]: time="2026-04-23T23:57:34.702314994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\""
Apr 23 23:57:36.759184 containerd[1624]: time="2026-04-23T23:57:36.759130519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:36.759808 containerd[1624]: time="2026-04-23T23:57:36.759701093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053"
Apr 23 23:57:36.760469 containerd[1624]: time="2026-04-23T23:57:36.760432923Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:36.762112 containerd[1624]: time="2026-04-23T23:57:36.762089411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:57:36.762596 containerd[1624]: time="2026-04-23T23:57:36.762578597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 2.060243921s"
Apr 23 23:57:36.762648 containerd[1624]: time="2026-04-23T23:57:36.762639613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\""
Apr 23 23:57:36.766327 containerd[1624]: time="2026-04-23T23:57:36.766304902Z" level=info msg="CreateContainer within sandbox \"e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 23 23:57:36.774508 containerd[1624]: time="2026-04-23T23:57:36.773540100Z" level=info msg="Container 58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:57:36.783251 containerd[1624]: time="2026-04-23T23:57:36.783225192Z" level=info msg="CreateContainer within sandbox \"e5b03a8a94c17c8acd1b3cc1f297e7d864deb018542b9425fd4ebb322ed0025f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26\""
Apr 23 23:57:36.784909 containerd[1624]: time="2026-04-23T23:57:36.783857922Z" level=info msg="StartContainer for \"58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26\""
Apr 23 23:57:36.785549 containerd[1624]: time="2026-04-23T23:57:36.785508279Z" level=info msg="connecting to shim 58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26" address="unix:///run/containerd/s/bfe419a3a87751cbd4115013cdd32d08d15f0be3ab365b35025a9ce5688df7cd" protocol=ttrpc version=3
Apr 23 23:57:36.801988 systemd[1]: Started cri-containerd-58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26.scope - libcontainer container 58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26.
Apr 23 23:57:36.865725 containerd[1624]: time="2026-04-23T23:57:36.865677445Z" level=info msg="StartContainer for \"58e30d905acd8cab136959cafefd18ee1e03f519bbbb116c3f6115e28be8ac26\" returns successfully"
Apr 23 23:57:37.478816 kubelet[2771]: I0423 23:57:37.478762 2771 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 23 23:57:37.479830 kubelet[2771]: I0423 23:57:37.479795 2771 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 23 23:57:37.704297 kubelet[2771]: I0423 23:57:37.704209 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6zj6q" podStartSLOduration=32.676364902 podStartE2EDuration="41.70415515s" podCreationTimestamp="2026-04-23 23:56:56 +0000 UTC" firstStartedPulling="2026-04-23 23:57:27.735510288 +0000 UTC m=+47.435090271" lastFinishedPulling="2026-04-23 23:57:36.763300536 +0000 UTC m=+56.462880519" observedRunningTime="2026-04-23 23:57:37.701080443 +0000 UTC m=+57.400660466" watchObservedRunningTime="2026-04-23 23:57:37.70415515 +0000 UTC m=+57.403735173"
Apr 23 23:57:54.166185 kubelet[2771]: I0423 23:57:54.165642 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 23 23:58:19.515766 systemd[1]: Started sshd@7-135.181.100.135:22-20.229.252.112:43014.service - OpenSSH per-connection server daemon (20.229.252.112:43014).
Apr 23 23:58:19.718650 sshd[5389]: Accepted publickey for core from 20.229.252.112 port 43014 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:19.722114 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:19.731665 systemd-logind[1590]: New session 8 of user core.
Apr 23 23:58:19.738116 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 23 23:58:19.924640 sshd[5392]: Connection closed by 20.229.252.112 port 43014
Apr 23 23:58:19.926184 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:19.933246 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit.
Apr 23 23:58:19.934761 systemd[1]: sshd@7-135.181.100.135:22-20.229.252.112:43014.service: Deactivated successfully.
Apr 23 23:58:19.939499 systemd[1]: session-8.scope: Deactivated successfully.
Apr 23 23:58:19.942867 systemd-logind[1590]: Removed session 8.
Apr 23 23:58:24.965751 systemd[1]: Started sshd@8-135.181.100.135:22-20.229.252.112:43024.service - OpenSSH per-connection server daemon (20.229.252.112:43024).
Apr 23 23:58:25.168945 sshd[5405]: Accepted publickey for core from 20.229.252.112 port 43024 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:25.170748 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:25.178974 systemd-logind[1590]: New session 9 of user core.
Apr 23 23:58:25.182061 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 23 23:58:25.346209 sshd[5408]: Connection closed by 20.229.252.112 port 43024
Apr 23 23:58:25.347392 sshd-session[5405]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:25.351556 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit.
Apr 23 23:58:25.352160 systemd[1]: sshd@8-135.181.100.135:22-20.229.252.112:43024.service: Deactivated successfully.
Apr 23 23:58:25.354773 systemd[1]: session-9.scope: Deactivated successfully.
Apr 23 23:58:25.356439 systemd-logind[1590]: Removed session 9.
Apr 23 23:58:30.389106 systemd[1]: Started sshd@9-135.181.100.135:22-20.229.252.112:41752.service - OpenSSH per-connection server daemon (20.229.252.112:41752).
Apr 23 23:58:30.598252 sshd[5421]: Accepted publickey for core from 20.229.252.112 port 41752 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:30.600473 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:30.606984 systemd-logind[1590]: New session 10 of user core.
Apr 23 23:58:30.617100 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 23 23:58:30.754300 sshd[5424]: Connection closed by 20.229.252.112 port 41752
Apr 23 23:58:30.754833 sshd-session[5421]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:30.758833 systemd[1]: sshd@9-135.181.100.135:22-20.229.252.112:41752.service: Deactivated successfully.
Apr 23 23:58:30.761051 systemd[1]: session-10.scope: Deactivated successfully.
Apr 23 23:58:30.762128 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit.
Apr 23 23:58:30.764303 systemd-logind[1590]: Removed session 10.
Apr 23 23:58:35.800221 systemd[1]: Started sshd@10-135.181.100.135:22-20.229.252.112:41764.service - OpenSSH per-connection server daemon (20.229.252.112:41764).
Apr 23 23:58:36.013100 sshd[5493]: Accepted publickey for core from 20.229.252.112 port 41764 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:36.015172 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:36.022768 systemd-logind[1590]: New session 11 of user core.
Apr 23 23:58:36.028066 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 23 23:58:36.185997 sshd[5496]: Connection closed by 20.229.252.112 port 41764
Apr 23 23:58:36.186563 sshd-session[5493]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:36.192398 systemd[1]: sshd@10-135.181.100.135:22-20.229.252.112:41764.service: Deactivated successfully.
Apr 23 23:58:36.195860 systemd[1]: session-11.scope: Deactivated successfully.
Apr 23 23:58:36.197415 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit.
Apr 23 23:58:36.199812 systemd-logind[1590]: Removed session 11.
Apr 23 23:58:36.227184 systemd[1]: Started sshd@11-135.181.100.135:22-20.229.252.112:36432.service - OpenSSH per-connection server daemon (20.229.252.112:36432).
Apr 23 23:58:36.431142 sshd[5509]: Accepted publickey for core from 20.229.252.112 port 36432 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:36.433602 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:36.443107 systemd-logind[1590]: New session 12 of user core.
Apr 23 23:58:36.446234 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 23 23:58:36.624012 sshd[5512]: Connection closed by 20.229.252.112 port 36432
Apr 23 23:58:36.627081 sshd-session[5509]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:36.632693 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit.
Apr 23 23:58:36.633616 systemd[1]: sshd@11-135.181.100.135:22-20.229.252.112:36432.service: Deactivated successfully.
Apr 23 23:58:36.638746 systemd[1]: session-12.scope: Deactivated successfully.
Apr 23 23:58:36.645296 systemd-logind[1590]: Removed session 12.
Apr 23 23:58:36.664587 systemd[1]: Started sshd@12-135.181.100.135:22-20.229.252.112:36440.service - OpenSSH per-connection server daemon (20.229.252.112:36440).
Apr 23 23:58:36.857949 sshd[5522]: Accepted publickey for core from 20.229.252.112 port 36440 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:36.860500 sshd-session[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:36.870982 systemd-logind[1590]: New session 13 of user core.
Apr 23 23:58:36.878152 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 23 23:58:37.031482 sshd[5525]: Connection closed by 20.229.252.112 port 36440
Apr 23 23:58:37.033166 sshd-session[5522]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:37.039672 systemd[1]: sshd@12-135.181.100.135:22-20.229.252.112:36440.service: Deactivated successfully.
Apr 23 23:58:37.043503 systemd[1]: session-13.scope: Deactivated successfully.
Apr 23 23:58:37.045589 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit.
Apr 23 23:58:37.049200 systemd-logind[1590]: Removed session 13.
Apr 23 23:58:42.070673 systemd[1]: Started sshd@13-135.181.100.135:22-20.229.252.112:36450.service - OpenSSH per-connection server daemon (20.229.252.112:36450).
Apr 23 23:58:42.280620 sshd[5540]: Accepted publickey for core from 20.229.252.112 port 36450 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:42.283457 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:42.290706 systemd-logind[1590]: New session 14 of user core.
Apr 23 23:58:42.298990 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 23 23:58:42.464976 sshd[5543]: Connection closed by 20.229.252.112 port 36450
Apr 23 23:58:42.465502 sshd-session[5540]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:42.471528 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit.
Apr 23 23:58:42.473376 systemd[1]: sshd@13-135.181.100.135:22-20.229.252.112:36450.service: Deactivated successfully.
Apr 23 23:58:42.475629 systemd[1]: session-14.scope: Deactivated successfully.
Apr 23 23:58:42.479242 systemd-logind[1590]: Removed session 14.
Apr 23 23:58:42.504067 systemd[1]: Started sshd@14-135.181.100.135:22-20.229.252.112:36452.service - OpenSSH per-connection server daemon (20.229.252.112:36452).
Apr 23 23:58:42.690772 sshd[5554]: Accepted publickey for core from 20.229.252.112 port 36452 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:42.692062 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:42.695696 systemd-logind[1590]: New session 15 of user core.
Apr 23 23:58:42.702008 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 23 23:58:43.095218 sshd[5557]: Connection closed by 20.229.252.112 port 36452
Apr 23 23:58:43.097113 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:43.106619 systemd[1]: sshd@14-135.181.100.135:22-20.229.252.112:36452.service: Deactivated successfully.
Apr 23 23:58:43.113075 systemd[1]: session-15.scope: Deactivated successfully.
Apr 23 23:58:43.120257 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit.
Apr 23 23:58:43.121523 systemd-logind[1590]: Removed session 15.
Apr 23 23:58:43.132196 systemd[1]: Started sshd@15-135.181.100.135:22-20.229.252.112:36460.service - OpenSSH per-connection server daemon (20.229.252.112:36460).
Apr 23 23:58:43.329958 sshd[5567]: Accepted publickey for core from 20.229.252.112 port 36460 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:43.332558 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:43.343336 systemd-logind[1590]: New session 16 of user core.
Apr 23 23:58:43.349118 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 23 23:58:43.906509 sshd[5570]: Connection closed by 20.229.252.112 port 36460
Apr 23 23:58:43.907764 sshd-session[5567]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:43.912062 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit.
Apr 23 23:58:43.912426 systemd[1]: sshd@15-135.181.100.135:22-20.229.252.112:36460.service: Deactivated successfully.
Apr 23 23:58:43.914677 systemd[1]: session-16.scope: Deactivated successfully.
Apr 23 23:58:43.916626 systemd-logind[1590]: Removed session 16.
Apr 23 23:58:43.947149 systemd[1]: Started sshd@16-135.181.100.135:22-20.229.252.112:36466.service - OpenSSH per-connection server daemon (20.229.252.112:36466).
Apr 23 23:58:44.137329 sshd[5585]: Accepted publickey for core from 20.229.252.112 port 36466 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:44.139733 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:44.149837 systemd-logind[1590]: New session 17 of user core.
Apr 23 23:58:44.158147 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 23 23:58:44.412526 sshd[5588]: Connection closed by 20.229.252.112 port 36466
Apr 23 23:58:44.413181 sshd-session[5585]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:44.417635 systemd[1]: sshd@16-135.181.100.135:22-20.229.252.112:36466.service: Deactivated successfully.
Apr 23 23:58:44.419024 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit.
Apr 23 23:58:44.419562 systemd[1]: session-17.scope: Deactivated successfully.
Apr 23 23:58:44.421277 systemd-logind[1590]: Removed session 17.
Apr 23 23:58:44.451058 systemd[1]: Started sshd@17-135.181.100.135:22-20.229.252.112:36478.service - OpenSSH per-connection server daemon (20.229.252.112:36478).
Apr 23 23:58:44.635925 sshd[5598]: Accepted publickey for core from 20.229.252.112 port 36478 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:44.637489 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:44.644079 systemd-logind[1590]: New session 18 of user core.
Apr 23 23:58:44.649992 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 23 23:58:44.771520 sshd[5601]: Connection closed by 20.229.252.112 port 36478
Apr 23 23:58:44.771674 sshd-session[5598]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:44.776015 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit.
Apr 23 23:58:44.776350 systemd[1]: sshd@17-135.181.100.135:22-20.229.252.112:36478.service: Deactivated successfully.
Apr 23 23:58:44.778367 systemd[1]: session-18.scope: Deactivated successfully.
Apr 23 23:58:44.780279 systemd-logind[1590]: Removed session 18.
Apr 23 23:58:49.818846 systemd[1]: Started sshd@18-135.181.100.135:22-20.229.252.112:44512.service - OpenSSH per-connection server daemon (20.229.252.112:44512).
Apr 23 23:58:50.033986 sshd[5647]: Accepted publickey for core from 20.229.252.112 port 44512 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:50.038486 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:50.047873 systemd-logind[1590]: New session 19 of user core.
Apr 23 23:58:50.054062 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 23 23:58:50.226726 sshd[5650]: Connection closed by 20.229.252.112 port 44512
Apr 23 23:58:50.229225 sshd-session[5647]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:50.236029 systemd[1]: sshd@18-135.181.100.135:22-20.229.252.112:44512.service: Deactivated successfully.
Apr 23 23:58:50.242016 systemd[1]: session-19.scope: Deactivated successfully.
Apr 23 23:58:50.247597 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit.
Apr 23 23:58:50.249764 systemd-logind[1590]: Removed session 19.
Apr 23 23:58:55.281254 systemd[1]: Started sshd@19-135.181.100.135:22-20.229.252.112:44528.service - OpenSSH per-connection server daemon (20.229.252.112:44528).
Apr 23 23:58:55.492076 sshd[5702]: Accepted publickey for core from 20.229.252.112 port 44528 ssh2: RSA SHA256:CJukKVYD4j/mY5InY6hVIBK/YO021iJD6DQAJnGodf4
Apr 23 23:58:55.495455 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:58:55.504814 systemd-logind[1590]: New session 20 of user core.
Apr 23 23:58:55.512148 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 23 23:58:55.676993 sshd[5708]: Connection closed by 20.229.252.112 port 44528
Apr 23 23:58:55.677910 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Apr 23 23:58:55.683213 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit.
Apr 23 23:58:55.684050 systemd[1]: sshd@19-135.181.100.135:22-20.229.252.112:44528.service: Deactivated successfully.
Apr 23 23:58:55.686571 systemd[1]: session-20.scope: Deactivated successfully.
Apr 23 23:58:55.688695 systemd-logind[1590]: Removed session 20.
Apr 23 23:59:43.929516 kubelet[2771]: E0423 23:59:43.929381 2771 controller.go:195] "Failed to update lease" err="Put \"https://135.181.100.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-19674e2966?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 23 23:59:44.374696 kubelet[2771]: E0423 23:59:44.374608 2771 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53308->10.0.0.2:2379: read: connection timed out"
Apr 23 23:59:44.470659 systemd[1]: cri-containerd-7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08.scope: Deactivated successfully.
Apr 23 23:59:44.472475 systemd[1]: cri-containerd-7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08.scope: Consumed 3.087s CPU time, 61.4M memory peak.
Apr 23 23:59:44.488336 containerd[1624]: time="2026-04-23T23:59:44.488267277Z" level=info msg="received container exit event container_id:\"7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08\" id:\"7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08\" pid:2625 exit_status:1 exited_at:{seconds:1776988784 nanos:487818242}"
Apr 23 23:59:44.521873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08-rootfs.mount: Deactivated successfully.
Apr 23 23:59:45.051751 kubelet[2771]: I0423 23:59:45.051695 2771 scope.go:117] "RemoveContainer" containerID="7d1ecc9232c24cbbfe8b5da4a6925d5f599e09021b8ccfe4dfad02956a75df08"
Apr 23 23:59:45.056743 containerd[1624]: time="2026-04-23T23:59:45.056685564Z" level=info msg="CreateContainer within sandbox \"11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 23 23:59:45.060223 systemd[1]: cri-containerd-992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26.scope: Deactivated successfully.
Apr 23 23:59:45.061858 systemd[1]: cri-containerd-992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26.scope: Consumed 7.131s CPU time, 134.1M memory peak, 384K read from disk.
Apr 23 23:59:45.066698 containerd[1624]: time="2026-04-23T23:59:45.066584447Z" level=info msg="received container exit event container_id:\"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\" id:\"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\" pid:3100 exit_status:1 exited_at:{seconds:1776988785 nanos:66213754}"
Apr 23 23:59:45.088919 containerd[1624]: time="2026-04-23T23:59:45.087357415Z" level=info msg="Container b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:59:45.099125 containerd[1624]: time="2026-04-23T23:59:45.099103358Z" level=info msg="CreateContainer within sandbox \"11b5de9f6da4ec944a0a4d541ea0e80b133aa6df6b3afdd0338113ecc6437787\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba\""
Apr 23 23:59:45.099835 containerd[1624]: time="2026-04-23T23:59:45.099794585Z" level=info msg="StartContainer for \"b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba\""
Apr 23 23:59:45.101393 containerd[1624]: time="2026-04-23T23:59:45.101360032Z" level=info msg="connecting to shim b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba" address="unix:///run/containerd/s/fcbc11bd2c0111f7394e7ff627dd451dbae91a9c01cab6d9fafc4c55f40c8ed1" protocol=ttrpc version=3
Apr 23 23:59:45.116240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26-rootfs.mount: Deactivated successfully.
Apr 23 23:59:45.124978 systemd[1]: Started cri-containerd-b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba.scope - libcontainer container b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba.
Apr 23 23:59:45.167567 containerd[1624]: time="2026-04-23T23:59:45.167523536Z" level=info msg="StartContainer for \"b162316352acfe5085c4575d6f3b4959af8af00e68f0f95de120ebdf353f9cba\" returns successfully"
Apr 23 23:59:46.053646 kubelet[2771]: I0423 23:59:46.053587 2771 scope.go:117] "RemoveContainer" containerID="992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26"
Apr 23 23:59:46.055047 containerd[1624]: time="2026-04-23T23:59:46.055007248Z" level=info msg="CreateContainer within sandbox \"1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 23 23:59:46.063917 containerd[1624]: time="2026-04-23T23:59:46.062384476Z" level=info msg="Container fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:59:46.069490 containerd[1624]: time="2026-04-23T23:59:46.069446171Z" level=info msg="CreateContainer within sandbox \"1d6f9145dfa06e883892d76859736f05230b529403047c46d62c0a76c5a80b94\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed\""
Apr 23 23:59:46.071503 containerd[1624]: time="2026-04-23T23:59:46.071474653Z" level=info msg="StartContainer for \"fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed\""
Apr 23 23:59:46.072937 containerd[1624]: time="2026-04-23T23:59:46.072872848Z" level=info msg="connecting to shim fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed" address="unix:///run/containerd/s/388ee0235232fa18cd35f0b24e5e4dac77cbf34055406f6c2e15c2b4e7f27ebf" protocol=ttrpc version=3
Apr 23 23:59:46.097045 systemd[1]: Started cri-containerd-fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed.scope - libcontainer container fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed.
Apr 23 23:59:46.134279 containerd[1624]: time="2026-04-23T23:59:46.134233157Z" level=info msg="StartContainer for \"fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed\" returns successfully"
Apr 23 23:59:49.421355 systemd[1]: cri-containerd-1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe.scope: Deactivated successfully.
Apr 23 23:59:49.423064 systemd[1]: cri-containerd-1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe.scope: Consumed 1.648s CPU time, 24.2M memory peak, 164K read from disk.
Apr 23 23:59:49.423852 containerd[1624]: time="2026-04-23T23:59:49.423732306Z" level=info msg="received container exit event container_id:\"1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe\" id:\"1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe\" pid:2611 exit_status:1 exited_at:{seconds:1776988789 nanos:422869637}"
Apr 23 23:59:49.465411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe-rootfs.mount: Deactivated successfully.
Apr 23 23:59:50.075933 kubelet[2771]: I0423 23:59:50.075863 2771 scope.go:117] "RemoveContainer" containerID="1b3494c509f246dd30d88a256465997b3a59bc258e99ba444f7652c632426fbe" Apr 23 23:59:50.078738 containerd[1624]: time="2026-04-23T23:59:50.078663997Z" level=info msg="CreateContainer within sandbox \"09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 23 23:59:50.093113 containerd[1624]: time="2026-04-23T23:59:50.093051695Z" level=info msg="Container 263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:59:50.107771 containerd[1624]: time="2026-04-23T23:59:50.107709166Z" level=info msg="CreateContainer within sandbox \"09ef4bab4cb8bcd0d6a5dd224b325d0c277e5ba6366a376a56c66b61d3d1191d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe\"" Apr 23 23:59:50.108936 containerd[1624]: time="2026-04-23T23:59:50.108559965Z" level=info msg="StartContainer for \"263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe\"" Apr 23 23:59:50.110337 containerd[1624]: time="2026-04-23T23:59:50.110291424Z" level=info msg="connecting to shim 263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe" address="unix:///run/containerd/s/9215d84004655873ab6fedd0dc7c0bcbdf056c24fce3055afef9272324e95604" protocol=ttrpc version=3 Apr 23 23:59:50.146119 systemd[1]: Started cri-containerd-263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe.scope - libcontainer container 263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe. 
Apr 23 23:59:50.210932 containerd[1624]: time="2026-04-23T23:59:50.210840959Z" level=info msg="StartContainer for \"263ff53956b9b020e589889b2f7909f5ec7145d1434e357dfe47ea6f165c0bbe\" returns successfully" Apr 23 23:59:53.353782 systemd[1]: cri-containerd-fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed.scope: Deactivated successfully. Apr 23 23:59:53.355021 containerd[1624]: time="2026-04-23T23:59:53.353874848Z" level=info msg="received container exit event container_id:\"fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed\" id:\"fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed\" pid:5960 exit_status:1 exited_at:{seconds:1776988793 nanos:353326482}" Apr 23 23:59:53.354354 systemd[1]: cri-containerd-fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed.scope: Consumed 201ms CPU time, 40.8M memory peak, 1.3M read from disk. Apr 23 23:59:53.406965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed-rootfs.mount: Deactivated successfully. 
Apr 23 23:59:54.101178 kubelet[2771]: I0423 23:59:54.101129 2771 scope.go:117] "RemoveContainer" containerID="992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26" Apr 23 23:59:54.102187 kubelet[2771]: I0423 23:59:54.101745 2771 scope.go:117] "RemoveContainer" containerID="fd36c6debcc17d093e0626fedbc47526d3101eaa97a37bb3c8e577c83a2855ed" Apr 23 23:59:54.102187 kubelet[2771]: E0423 23:59:54.102042 2771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6fb8d665dd-b8z89_tigera-operator(1c50a3df-997c-4e82-b06b-cd945cbcc1a7)\"" pod="tigera-operator/tigera-operator-6fb8d665dd-b8z89" podUID="1c50a3df-997c-4e82-b06b-cd945cbcc1a7" Apr 23 23:59:54.110657 containerd[1624]: time="2026-04-23T23:59:54.110598264Z" level=info msg="RemoveContainer for \"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\"" Apr 23 23:59:54.121235 containerd[1624]: time="2026-04-23T23:59:54.121135874Z" level=info msg="RemoveContainer for \"992bff5d52bc4b17106bae89e65e8cd6513e8ae7c20a59825bc42b9f73543b26\" returns successfully" Apr 23 23:59:54.375170 kubelet[2771]: E0423 23:59:54.374981 2771 controller.go:195] "Failed to update lease" err="Put \"https://135.181.100.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-19674e2966?timeout=10s\": context deadline exceeded" Apr 24 00:00:04.376139 kubelet[2771]: E0424 00:00:04.376023 2771 controller.go:195] "Failed to update lease" err="Put \"https://135.181.100.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-19674e2966?timeout=10s\": context deadline exceeded"