Apr 16 23:55:19.869622 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 16 23:55:19.869642 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:55:19.869649 kernel: BIOS-provided physical RAM map:
Apr 16 23:55:19.869655 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 23:55:19.869662 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 16 23:55:19.869666 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 16 23:55:19.869672 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 16 23:55:19.869676 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 16 23:55:19.869681 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 16 23:55:19.869686 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 16 23:55:19.869691 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 16 23:55:19.869696 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 16 23:55:19.869700 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 23:55:19.869708 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 23:55:19.869713 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 23:55:19.869718 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 16 23:55:19.869723 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 16 23:55:19.869728 kernel: NX (Execute Disable) protection: active
Apr 16 23:55:19.869735 kernel: APIC: Static calls initialized
Apr 16 23:55:19.869740 kernel: e820: update [mem 0x7dfab018-0x7dfb4a57] usable ==> usable
Apr 16 23:55:19.869746 kernel: e820: update [mem 0x7df6f018-0x7dfaa657] usable ==> usable
Apr 16 23:55:19.869750 kernel: e820: update [mem 0x7dc01018-0x7dc3c657] usable ==> usable
Apr 16 23:55:19.869755 kernel: extended physical RAM map:
Apr 16 23:55:19.869760 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 23:55:19.869765 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007dc01017] usable
Apr 16 23:55:19.869770 kernel: reserve setup_data: [mem 0x000000007dc01018-0x000000007dc3c657] usable
Apr 16 23:55:19.869775 kernel: reserve setup_data: [mem 0x000000007dc3c658-0x000000007df6f017] usable
Apr 16 23:55:19.869780 kernel: reserve setup_data: [mem 0x000000007df6f018-0x000000007dfaa657] usable
Apr 16 23:55:19.869785 kernel: reserve setup_data: [mem 0x000000007dfaa658-0x000000007dfab017] usable
Apr 16 23:55:19.869792 kernel: reserve setup_data: [mem 0x000000007dfab018-0x000000007dfb4a57] usable
Apr 16 23:55:19.869797 kernel: reserve setup_data: [mem 0x000000007dfb4a58-0x000000007ed3efff] usable
Apr 16 23:55:19.869802 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 16 23:55:19.869807 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 16 23:55:19.869811 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Apr 16 23:55:19.869816 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 16 23:55:19.869821 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 16 23:55:19.869826 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 16 23:55:19.869831 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 16 23:55:19.869836 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 23:55:19.869841 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 23:55:19.869851 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 23:55:19.869856 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 16 23:55:19.869861 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 16 23:55:19.869866 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 16 23:55:19.869871 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e01b198 RNG=0x7fb73018
Apr 16 23:55:19.869879 kernel: random: crng init done
Apr 16 23:55:19.869884 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 16 23:55:19.869889 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 16 23:55:19.869894 kernel: secureboot: Secure boot disabled
Apr 16 23:55:19.869899 kernel: SMBIOS 3.0.0 present.
Apr 16 23:55:19.869904 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 16 23:55:19.869910 kernel: DMI: Memory slots populated: 1/1
Apr 16 23:55:19.869915 kernel: Hypervisor detected: KVM
Apr 16 23:55:19.869920 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 16 23:55:19.869925 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 23:55:19.869930 kernel: kvm-clock: using sched offset of 13854961406 cycles
Apr 16 23:55:19.869937 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 23:55:19.869943 kernel: tsc: Detected 2396.398 MHz processor
Apr 16 23:55:19.869948 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 23:55:19.869954 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 23:55:19.869959 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 16 23:55:19.869964 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 23:55:19.869970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 23:55:19.869975 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 16 23:55:19.869980 kernel: Using GB pages for direct mapping
Apr 16 23:55:19.869988 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:55:19.869993 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 16 23:55:19.869998 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 23:55:19.870003 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:19.870008 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:19.870014 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 16 23:55:19.870019 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:19.870024 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:19.870029 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:19.870037 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 23:55:19.870042 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 23:55:19.870047 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 16 23:55:19.870052 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 16 23:55:19.870058 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 16 23:55:19.870063 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 16 23:55:19.870068 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 16 23:55:19.870073 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 16 23:55:19.870078 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 16 23:55:19.870086 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 16 23:55:19.870091 kernel: No NUMA configuration found
Apr 16 23:55:19.870096 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 16 23:55:19.870102 kernel: NODE_DATA(0) allocated [mem 0x179ff8dc0-0x179ffffff]
Apr 16 23:55:19.870107 kernel: Zone ranges:
Apr 16 23:55:19.870112 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 23:55:19.870117 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 16 23:55:19.870123 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 16 23:55:19.870144 kernel: Device empty
Apr 16 23:55:19.870150 kernel: Movable zone start for each node
Apr 16 23:55:19.870165 kernel: Early memory node ranges
Apr 16 23:55:19.870171 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 23:55:19.870176 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 16 23:55:19.870181 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 16 23:55:19.870186 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 16 23:55:19.870191 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 16 23:55:19.870197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 16 23:55:19.870202 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 23:55:19.870207 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 23:55:19.870215 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 16 23:55:19.870220 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 16 23:55:19.870225 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 16 23:55:19.870241 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 16 23:55:19.870246 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 23:55:19.870251 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 23:55:19.870257 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 23:55:19.870262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 23:55:19.870267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 23:55:19.870275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 23:55:19.870280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 23:55:19.870285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 23:55:19.870291 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 23:55:19.870296 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 23:55:19.870301 kernel: CPU topo: Max. logical packages: 1
Apr 16 23:55:19.870307 kernel: CPU topo: Max. logical dies: 1
Apr 16 23:55:19.870320 kernel: CPU topo: Max. dies per package: 1
Apr 16 23:55:19.870325 kernel: CPU topo: Max. threads per core: 1
Apr 16 23:55:19.870331 kernel: CPU topo: Num. cores per package: 2
Apr 16 23:55:19.870336 kernel: CPU topo: Num. threads per package: 2
Apr 16 23:55:19.870342 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 16 23:55:19.870349 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 23:55:19.870354 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 16 23:55:19.870360 kernel: Booting paravirtualized kernel on KVM
Apr 16 23:55:19.870365 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 23:55:19.870371 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 16 23:55:19.870379 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 16 23:55:19.870384 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 16 23:55:19.870389 kernel: pcpu-alloc: [0] 0 1
Apr 16 23:55:19.870395 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 16 23:55:19.870401 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:55:19.870406 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:55:19.870412 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:55:19.870417 kernel: Fallback order for Node 0: 0
Apr 16 23:55:19.870425 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Apr 16 23:55:19.870430 kernel: Policy zone: Normal
Apr 16 23:55:19.870436 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:55:19.870442 kernel: software IO TLB: area num 2.
Apr 16 23:55:19.870447 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:55:19.870452 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 23:55:19.870463 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 23:55:19.870468 kernel: Dynamic Preempt: voluntary
Apr 16 23:55:19.870474 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:55:19.870482 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:55:19.870488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:55:19.870493 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:55:19.870499 kernel: Rude variant of Tasks RCU enabled.
Apr 16 23:55:19.870504 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:55:19.870510 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:55:19.870515 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:55:19.870521 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:55:19.870526 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:55:19.870532 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:55:19.870540 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 16 23:55:19.870545 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:55:19.870551 kernel: Console: colour dummy device 80x25
Apr 16 23:55:19.870557 kernel: printk: legacy console [tty0] enabled
Apr 16 23:55:19.870562 kernel: printk: legacy console [ttyS0] enabled
Apr 16 23:55:19.870568 kernel: ACPI: Core revision 20240827
Apr 16 23:55:19.870573 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 23:55:19.870579 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 23:55:19.870584 kernel: x2apic enabled
Apr 16 23:55:19.870592 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 23:55:19.870597 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 23:55:19.870603 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x228aecd6e18, max_idle_ns: 440795270957 ns
Apr 16 23:55:19.870608 kernel: Calibrating delay loop (skipped) preset value.. 4792.79 BogoMIPS (lpj=2396398)
Apr 16 23:55:19.870614 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 23:55:19.870619 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 16 23:55:19.870625 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 16 23:55:19.870631 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 23:55:19.870639 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 16 23:55:19.870644 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 16 23:55:19.870650 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 16 23:55:19.870655 kernel: active return thunk: srso_alias_return_thunk
Apr 16 23:55:19.870661 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 16 23:55:19.870666 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 16 23:55:19.870672 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 23:55:19.870677 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 23:55:19.870683 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 23:55:19.870690 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 23:55:19.870696 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 23:55:19.870701 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 23:55:19.870707 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 23:55:19.870712 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 16 23:55:19.870718 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 23:55:19.870723 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 23:55:19.870729 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 23:55:19.870734 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 23:55:19.870741 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 16 23:55:19.870747 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 16 23:55:19.870752 kernel: Freeing SMP alternatives memory: 32K
Apr 16 23:55:19.870758 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:55:19.870763 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:55:19.870769 kernel: landlock: Up and running.
Apr 16 23:55:19.870774 kernel: SELinux: Initializing.
Apr 16 23:55:19.870780 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:55:19.870785 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:55:19.870793 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 16 23:55:19.870798 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 16 23:55:19.870804 kernel: ... version: 0
Apr 16 23:55:19.870809 kernel: ... bit width: 48
Apr 16 23:55:19.870815 kernel: ... generic registers: 6
Apr 16 23:55:19.870820 kernel: ... value mask: 0000ffffffffffff
Apr 16 23:55:19.870826 kernel: ... max period: 00007fffffffffff
Apr 16 23:55:19.870831 kernel: ... fixed-purpose events: 0
Apr 16 23:55:19.870836 kernel: ... event mask: 000000000000003f
Apr 16 23:55:19.870844 kernel: signal: max sigframe size: 3376
Apr 16 23:55:19.870849 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:55:19.870855 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:55:19.870860 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:55:19.870866 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:55:19.870871 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 23:55:19.870877 kernel: .... node #0, CPUs: #1
Apr 16 23:55:19.870882 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:55:19.870887 kernel: smpboot: Total of 2 processors activated (9585.59 BogoMIPS)
Apr 16 23:55:19.870895 kernel: Memory: 3813632K/4091168K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 271900K reserved, 0K cma-reserved)
Apr 16 23:55:19.870901 kernel: devtmpfs: initialized
Apr 16 23:55:19.870906 kernel: x86/mm: Memory block size: 128MB
Apr 16 23:55:19.870912 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 16 23:55:19.870917 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:55:19.870923 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:55:19.870928 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:55:19.870934 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:55:19.870939 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:55:19.870947 kernel: audit: type=2000 audit(1776383716.399:1): state=initialized audit_enabled=0 res=1
Apr 16 23:55:19.870952 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:55:19.870958 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 23:55:19.870963 kernel: cpuidle: using governor menu
Apr 16 23:55:19.870968 kernel: efi: Freeing EFI boot services memory: 34884K
Apr 16 23:55:19.870974 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:55:19.870979 kernel: dca service started, version 1.12.1
Apr 16 23:55:19.870985 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 16 23:55:19.870991 kernel: PCI: Using configuration type 1 for base access
Apr 16 23:55:19.870998 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 23:55:19.871004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:55:19.871009 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:55:19.871015 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:55:19.871020 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:55:19.871026 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:55:19.871031 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:55:19.871037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:55:19.871042 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:55:19.871050 kernel: ACPI: Interpreter enabled
Apr 16 23:55:19.871055 kernel: ACPI: PM: (supports S0 S5)
Apr 16 23:55:19.871060 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 23:55:19.871066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 23:55:19.871071 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 23:55:19.871077 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 23:55:19.871082 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 23:55:19.871439 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:55:19.871550 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 23:55:19.871649 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 23:55:19.871656 kernel: PCI host bridge to bus 0000:00
Apr 16 23:55:19.871756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 23:55:19.871847 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 23:55:19.871936 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 23:55:19.872024 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 16 23:55:19.872114 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 16 23:55:19.872216 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 16 23:55:19.872315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 23:55:19.872426 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:55:19.872535 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Apr 16 23:55:19.872632 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Apr 16 23:55:19.872730 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 16 23:55:19.872836 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Apr 16 23:55:19.872932 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 16 23:55:19.873028 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 23:55:19.873147 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.873256 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Apr 16 23:55:19.873352 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:55:19.873451 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 16 23:55:19.873547 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:55:19.873649 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.873744 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Apr 16 23:55:19.873839 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:55:19.873934 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 16 23:55:19.874035 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.874485 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Apr 16 23:55:19.874598 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:55:19.874697 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 16 23:55:19.874794 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:55:19.874896 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.874992 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Apr 16 23:55:19.875087 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:55:19.875203 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:55:19.875320 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.875417 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Apr 16 23:55:19.875512 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:55:19.875606 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 16 23:55:19.875701 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:55:19.875805 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.875905 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Apr 16 23:55:19.876003 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:55:19.876098 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 16 23:55:19.876245 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:55:19.876350 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.876446 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Apr 16 23:55:19.876542 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:55:19.876640 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 16 23:55:19.876735 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:55:19.876836 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.876943 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Apr 16 23:55:19.877040 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:55:19.877147 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 16 23:55:19.877257 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:55:19.877371 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 16 23:55:19.877468 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Apr 16 23:55:19.877563 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:55:19.877658 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 16 23:55:19.877755 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:55:19.877857 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 23:55:19.877956 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 23:55:19.878057 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 23:55:19.878190 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Apr 16 23:55:19.878297 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Apr 16 23:55:19.878401 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 23:55:19.878497 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Apr 16 23:55:19.878608 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:55:19.878712 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Apr 16 23:55:19.878812 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 16 23:55:19.878911 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:55:19.879006 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:55:19.879111 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 16 23:55:19.879223 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Apr 16 23:55:19.879327 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:55:19.879437 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 16 23:55:19.879537 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Apr 16 23:55:19.879651 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 16 23:55:19.879746 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:55:19.882194 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:55:19.882324 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 16 23:55:19.882429 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:55:19.882538 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 16 23:55:19.882640 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Apr 16 23:55:19.882741 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 16 23:55:19.882838 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:55:19.882949 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 16 23:55:19.883050 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Apr 16 23:55:19.883840 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 16 23:55:19.883949 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:55:19.883958 kernel: acpiphp: Slot [0] registered
Apr 16 23:55:19.884090 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 16 23:55:19.884208 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Apr 16 23:55:19.884321 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 16 23:55:19.884421 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 16 23:55:19.884522 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:55:19.884529 kernel: acpiphp: Slot [0-2] registered
Apr 16 23:55:19.884626 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:55:19.884634 kernel: acpiphp: Slot [0-3] registered
Apr 16 23:55:19.884731 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:55:19.884754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 23:55:19.884760 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 23:55:19.884766 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 23:55:19.884774 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 23:55:19.884780 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 23:55:19.884788 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 23:55:19.884794 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 23:55:19.884799 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 23:55:19.884805 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 23:55:19.884811 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 23:55:19.884817 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 23:55:19.884823 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 23:55:19.884830 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 23:55:19.884836 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 23:55:19.884845 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 23:55:19.884850 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 23:55:19.884856 kernel: iommu: Default domain type: Translated
Apr 16 23:55:19.884862 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 23:55:19.884870 kernel: efivars: Registered efivars operations
Apr 16 23:55:19.884876 kernel: PCI: Using ACPI for IRQ routing
Apr 16 23:55:19.884881 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 23:55:19.884888 kernel: e820: reserve RAM buffer [mem 0x7dc01018-0x7fffffff]
Apr 16 23:55:19.884893 kernel: e820: reserve RAM buffer [mem 0x7df6f018-0x7fffffff]
Apr 16 23:55:19.884899 kernel: e820: reserve RAM buffer [mem 0x7dfab018-0x7fffffff]
Apr 16 23:55:19.884905 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 16 23:55:19.884911 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 16 23:55:19.884916 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 16 23:55:19.884924 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 16 23:55:19.885023 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 23:55:19.885118 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 23:55:19.885226 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 23:55:19.885241 kernel: vgaarb: loaded
Apr 16 23:55:19.885248 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 23:55:19.885254 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 23:55:19.885259 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 23:55:19.885268 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:55:19.885276 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:55:19.885282 kernel: pnp: PnP ACPI init
Apr 16 23:55:19.885391 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 16 23:55:19.885399 kernel: pnp: PnP ACPI: found 5 devices
Apr 16 23:55:19.885405 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 23:55:19.885411 kernel: NET: Registered PF_INET protocol family
Apr 16 23:55:19.885417 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:55:19.885423 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 23:55:19.885432 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:55:19.885438 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:55:19.885444 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 23:55:19.885450 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 23:55:19.885456 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:55:19.885462 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:55:19.885467 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:55:19.885473 kernel: NET: Registered PF_XDP protocol family
Apr 16 23:55:19.885580 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 23:55:19.885703 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 23:55:19.885804 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 23:55:19.887114 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 23:55:19.887251 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 23:55:19.887352 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Apr 16 23:55:19.887448 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Apr 16 23:55:19.887545 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Apr 16 23:55:19.887651 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned
Apr 16 23:55:19.887749 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 16 23:55:19.887860 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 16 23:55:19.887957 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:55:19.888057 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 16 23:55:19.888172 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 16 23:55:19.888281 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 16 23:55:19.888377 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 16 23:55:19.888473 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:55:19.888572 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 16 23:55:19.888668 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:55:19.888763 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 16 23:55:19.888858 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 16 23:55:19.888963 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:55:19.889061 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 16 23:55:19.890691 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 16 23:55:19.890801 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:55:19.890905 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned
Apr 16 23:55:19.891006 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 16 23:55:19.891101 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 16 23:55:19.891212 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 16 23:55:19.891324 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:55:19.891421 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 16 23:55:19.891520 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 16 23:55:19.891616 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 16 23:55:19.891714 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:55:19.891813 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 16 23:55:19.891908 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 16 23:55:19.892003 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 16 23:55:19.892098 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:55:19.893251 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 23:55:19.893359 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 23:55:19.893454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 23:55:19.893545 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Apr 16 23:55:19.893635 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 16 23:55:19.894240 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Apr 16 23:55:19.894352 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Apr 16 23:55:19.894448 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 16 23:55:19.894552 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Apr 16 23:55:19.894655 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Apr 16 23:55:19.894749 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 16 23:55:19.894848 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 16 23:55:19.894950 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Apr 16 23:55:19.895044 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 16 23:55:19.896181 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Apr 16 23:55:19.896307 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 16 23:55:19.896410 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 16 23:55:19.896504 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Apr 16 23:55:19.896597 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 16 23:55:19.896696 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 16 23:55:19.896790 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Apr 16 23:55:19.896882 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 16 23:55:19.896988 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 16 23:55:19.897081 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Apr 16 23:55:19.897286 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 16 23:55:19.897321 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 23:55:19.897344 kernel: PCI: CLS 0 bytes, default 64
Apr 16 23:55:19.897370 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 16 23:55:19.897393 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Apr 16 23:55:19.897426 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x228aecd6e18, max_idle_ns: 440795270957 ns
Apr 16 23:55:19.897432 kernel: Initialise system trusted keyrings
Apr 16 23:55:19.897438 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 23:55:19.897444 kernel: Key type asymmetric registered
Apr 16 23:55:19.897450 kernel: Asymmetric key parser 'x509' registered
Apr 16 23:55:19.897456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 23:55:19.897462 kernel: io scheduler mq-deadline registered
Apr 16 23:55:19.897468 kernel: io scheduler kyber registered
Apr 16 23:55:19.897474 kernel: io scheduler bfq registered
Apr 16 23:55:19.898162 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 16 23:55:19.898286 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 16 23:55:19.898386 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 16 23:55:19.898487 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 16 23:55:19.898584 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 16 23:55:19.898681 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 16 23:55:19.898777 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 16 23:55:19.898873 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 16 23:55:19.898969 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 16 23:55:19.899067 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 16 23:55:19.900205 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 16 23:55:19.900325 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 16 23:55:19.900425 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 16 23:55:19.900525 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 16 23:55:19.900622 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 16 23:55:19.900718 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 16 23:55:19.900730 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 23:55:19.900827 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 16 23:55:19.900923 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 16 23:55:19.900931 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 23:55:19.900937 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 16 23:55:19.900943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:55:19.900949 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 23:55:19.900958 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 23:55:19.900963 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 23:55:19.900969 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 23:55:19.901071 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 16 23:55:19.901079 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 23:55:19.901183 kernel: rtc_cmos 00:03: registered as rtc0
Apr 16 23:55:19.901288 kernel: rtc_cmos 00:03: setting system clock to 2026-04-16T23:55:19 UTC (1776383719)
Apr 16 23:55:19.901383 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 23:55:19.901391 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option.
Apr 16 23:55:19.901398 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 16 23:55:19.901404 kernel: efifb: probing for efifb
Apr 16 23:55:19.901410 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k
Apr 16 23:55:19.901416 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 16 23:55:19.901422 kernel: efifb: scrolling: redraw
Apr 16 23:55:19.901428 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 16 23:55:19.901434 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 23:55:19.901443 kernel: fb0: EFI VGA frame buffer device
Apr 16 23:55:19.901449 kernel: pstore: Using crash dump compression: deflate
Apr 16 23:55:19.901455 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 23:55:19.901460 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:55:19.901466 kernel: Segment Routing with IPv6
Apr 16 23:55:19.901472 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:55:19.901478 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:55:19.901484 kernel: Key type dns_resolver registered
Apr 16 23:55:19.901489 kernel: IPI shorthand broadcast: enabled
Apr 16 23:55:19.901497 kernel: sched_clock: Marking stable (2906009628, 271434662)->(3222882420, -45438130)
Apr 16 23:55:19.901503 kernel: registered taskstats version 1
Apr 16 23:55:19.901509 kernel: Loading compiled-in X.509 certificates
Apr 16 23:55:19.901515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83'
Apr 16 23:55:19.901521 kernel: Demotion targets for Node 0: null
Apr 16 23:55:19.901527 kernel: Key type .fscrypt registered
Apr 16 23:55:19.901533 kernel: Key type fscrypt-provisioning registered
Apr 16 23:55:19.901538 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 23:55:19.901544 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:55:19.901552 kernel: ima: No architecture policies found
Apr 16 23:55:19.901558 kernel: clk: Disabling unused clocks
Apr 16 23:55:19.901564 kernel: Warning: unable to open an initial console.
Apr 16 23:55:19.901570 kernel: Freeing unused kernel image (initmem) memory: 46216K
Apr 16 23:55:19.901576 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 23:55:19.901582 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 23:55:19.901588 kernel: Run /init as init process
Apr 16 23:55:19.901593 kernel: with arguments:
Apr 16 23:55:19.901599 kernel: /init
Apr 16 23:55:19.901608 kernel: with environment:
Apr 16 23:55:19.901613 kernel: HOME=/
Apr 16 23:55:19.901619 kernel: TERM=linux
Apr 16 23:55:19.901626 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:55:19.901635 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:55:19.901641 systemd[1]: Detected virtualization kvm.
Apr 16 23:55:19.901647 systemd[1]: Detected architecture x86-64.
Apr 16 23:55:19.901655 systemd[1]: Running in initrd.
Apr 16 23:55:19.901661 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:55:19.901668 systemd[1]: Hostname set to .
Apr 16 23:55:19.901674 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:55:19.901680 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:55:19.901686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:55:19.901692 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:55:19.901699 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:55:19.901707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:55:19.901713 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:55:19.901720 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:55:19.901727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:55:19.901733 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:55:19.901739 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:55:19.901745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:55:19.901754 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:55:19.901760 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:55:19.901766 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:55:19.901772 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:55:19.901778 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:55:19.901784 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:55:19.901791 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:55:19.901797 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:55:19.901803 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:55:19.901811 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:55:19.901818 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:55:19.901824 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:55:19.901830 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:55:19.901836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:55:19.901842 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:55:19.901849 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:55:19.901855 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:55:19.901863 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:55:19.901870 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:55:19.901876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:19.901901 systemd-journald[197]: Collecting audit messages is disabled.
Apr 16 23:55:19.901920 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:55:19.901927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:55:19.901933 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:55:19.901939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:55:19.901946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:55:19.901955 systemd-journald[197]: Journal started
Apr 16 23:55:19.901969 systemd-journald[197]: Runtime Journal (/run/log/journal/105e646545204e29bed3b0222767b238) is 8M, max 76.1M, 68.1M free.
Apr 16 23:55:19.864089 systemd-modules-load[199]: Inserted module 'overlay'
Apr 16 23:55:19.907416 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:55:19.907438 kernel: Bridge firewalling registered
Apr 16 23:55:19.908555 systemd-modules-load[199]: Inserted module 'br_netfilter'
Apr 16 23:55:19.910278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:55:19.910850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:19.914224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:55:19.916035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:55:19.919237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:55:19.920416 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:55:19.922890 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:55:19.933672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:55:19.934400 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:55:19.936749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:55:19.940673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:55:19.943259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:55:19.945339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:55:19.947741 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:55:19.961900 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 16 23:55:19.978891 systemd-resolved[235]: Positive Trust Anchors:
Apr 16 23:55:19.978905 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:55:19.978925 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:55:19.981402 systemd-resolved[235]: Defaulting to hostname 'linux'.
Apr 16 23:55:19.982970 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:55:19.983850 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:55:20.034163 kernel: SCSI subsystem initialized
Apr 16 23:55:20.042147 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:55:20.050151 kernel: iscsi: registered transport (tcp)
Apr 16 23:55:20.066210 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:55:20.066241 kernel: QLogic iSCSI HBA Driver
Apr 16 23:55:20.082444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:55:20.097064 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:55:20.098680 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:55:20.135388 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:55:20.136825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:55:20.177187 kernel: raid6: avx512x4 gen() 44938 MB/s
Apr 16 23:55:20.195157 kernel: raid6: avx512x2 gen() 45833 MB/s
Apr 16 23:55:20.213148 kernel: raid6: avx512x1 gen() 43249 MB/s
Apr 16 23:55:20.231174 kernel: raid6: avx2x4 gen() 46091 MB/s
Apr 16 23:55:20.249172 kernel: raid6: avx2x2 gen() 48320 MB/s
Apr 16 23:55:20.268246 kernel: raid6: avx2x1 gen() 37787 MB/s
Apr 16 23:55:20.268318 kernel: raid6: using algorithm avx2x2 gen() 48320 MB/s
Apr 16 23:55:20.288282 kernel: raid6: .... xor() 36977 MB/s, rmw enabled
Apr 16 23:55:20.288360 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 23:55:20.304175 kernel: xor: automatically using best checksumming function avx
Apr 16 23:55:20.433155 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:55:20.439345 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:55:20.440914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:55:20.465675 systemd-udevd[446]: Using default interface naming scheme 'v255'.
Apr 16 23:55:20.470766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:55:20.472844 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:55:20.494645 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Apr 16 23:55:20.516511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:55:20.518192 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:55:20.588406 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:55:20.594773 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:55:20.682155 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 23:55:20.686080 kernel: ACPI: bus type USB registered
Apr 16 23:55:20.686104 kernel: usbcore: registered new interface driver usbfs
Apr 16 23:55:20.703155 kernel: usbcore: registered new interface driver hub
Apr 16 23:55:20.705161 kernel: libata version 3.00 loaded.
Apr 16 23:55:20.710255 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 23:55:20.710441 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 23:55:20.717573 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 16 23:55:20.717721 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 16 23:55:20.717842 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 23:55:20.722568 kernel: usbcore: registered new device driver usb
Apr 16 23:55:20.722588 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Apr 16 23:55:20.723907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:55:20.723997 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:20.725240 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:20.726779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:20.731144 kernel: AES CTR mode by8 optimization enabled
Apr 16 23:55:20.743153 kernel: scsi host0: Virtio SCSI HBA
Apr 16 23:55:20.745148 kernel: scsi host1: ahci
Apr 16 23:55:20.757708 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 16 23:55:20.757736 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 16 23:55:20.770047 kernel: scsi host2: ahci
Apr 16 23:55:20.770268 kernel: scsi host3: ahci
Apr 16 23:55:20.774957 kernel: scsi host4: ahci
Apr 16 23:55:20.775161 kernel: scsi host5: ahci
Apr 16 23:55:20.784992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:55:20.786256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:20.791245 kernel: scsi host6: ahci
Apr 16 23:55:20.794267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:20.812155 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 43 lpm-pol 1
Apr 16 23:55:20.812170 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 43 lpm-pol 1
Apr 16 23:55:20.812178 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 43 lpm-pol 1
Apr 16 23:55:20.812186 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 43 lpm-pol 1
Apr 16 23:55:20.812194 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 43 lpm-pol 1
Apr 16 23:55:20.812202 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 43 lpm-pol 1
Apr 16 23:55:20.814078 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 16 23:55:20.814283 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Apr 16 23:55:20.817920 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 16 23:55:20.818076 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 16 23:55:20.821300 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 16 23:55:20.835421 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 23:55:20.835445 kernel: GPT:17805311 != 160006143
Apr 16 23:55:20.835456 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 23:55:20.835469 kernel: GPT:17805311 != 160006143
Apr 16 23:55:20.835478 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 23:55:20.837260 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:55:20.840982 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 16 23:55:20.841426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:21.118516 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 23:55:21.118613 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 23:55:21.119185 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 16 23:55:21.128191 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 23:55:21.133204 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 23:55:21.139153 kernel: ata1.00: LPM support broken, forcing max_power
Apr 16 23:55:21.139199 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 23:55:21.142600 kernel: ata1.00: applying bridge limits
Apr 16 23:55:21.150172 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 23:55:21.156974 kernel: ata1.00: LPM support broken, forcing max_power
Apr 16 23:55:21.157010 kernel: ata1.00: configured for UDMA/100
Apr 16 23:55:21.164200 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 23:55:21.200257 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:55:21.209195 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 16 23:55:21.224169 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 16 23:55:21.233899 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 16 23:55:21.234255 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 16 23:55:21.237157 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 23:55:21.237341 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 16 23:55:21.237471 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 23:55:21.253146 kernel: hub 1-0:1.0: USB hub found
Apr 16 23:55:21.256147 kernel: hub 1-0:1.0: 4 ports detected
Apr 16 23:55:21.256313 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 16 23:55:21.261741 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 16 23:55:21.261909 kernel: hub 2-0:1.0: USB hub found
Apr 16 23:55:21.263376 kernel: hub 2-0:1.0: 4 ports detected
Apr 16 23:55:21.273501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 16 23:55:21.281444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 16 23:55:21.286835 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 16 23:55:21.287193 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 16 23:55:21.294475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 16 23:55:21.296115 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:55:21.309189 disk-uuid[657]: Primary Header is updated.
Apr 16 23:55:21.309189 disk-uuid[657]: Secondary Entries is updated.
Apr 16 23:55:21.309189 disk-uuid[657]: Secondary Header is updated.
Apr 16 23:55:21.319170 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:55:21.327192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:55:21.481637 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:55:21.482702 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:55:21.483272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:55:21.484174 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:55:21.485740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:55:21.499146 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 16 23:55:21.505452 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:55:21.646196 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:55:21.657741 kernel: usbcore: registered new interface driver usbhid
Apr 16 23:55:21.657791 kernel: usbhid: USB HID core driver
Apr 16 23:55:21.674701 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Apr 16 23:55:21.674767 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 16 23:55:22.337641 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 16 23:55:22.339924 disk-uuid[658]: The operation has completed successfully.
Apr 16 23:55:22.424305 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:55:22.424406 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 23:55:22.437041 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 23:55:22.447926 sh[690]: Success
Apr 16 23:55:22.463422 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 23:55:22.463455 kernel: device-mapper: uevent: version 1.0.3
Apr 16 23:55:22.467121 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 23:55:22.476151 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 16 23:55:22.510985 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 23:55:22.513096 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 23:55:22.521479 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 23:55:22.530187 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (702)
Apr 16 23:55:22.533434 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503
Apr 16 23:55:22.533456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:55:22.545473 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 16 23:55:22.545500 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 23:55:22.545509 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 23:55:22.549066 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 23:55:22.550634 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:55:22.551755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 23:55:22.552851 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 23:55:22.553915 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 23:55:22.587165 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (737)
Apr 16 23:55:22.592607 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:55:22.592631 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:55:22.601075 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:55:22.601100 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:55:22.601109 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:55:22.607153 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:55:22.607677 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:55:22.609298 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:55:22.680624 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:55:22.684340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:55:22.695101 ignition[802]: Ignition 2.22.0
Apr 16 23:55:22.695110 ignition[802]: Stage: fetch-offline
Apr 16 23:55:22.696234 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:22.696252 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:22.696320 ignition[802]: parsed url from cmdline: ""
Apr 16 23:55:22.696324 ignition[802]: no config URL provided
Apr 16 23:55:22.696328 ignition[802]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:55:22.696335 ignition[802]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:55:22.696340 ignition[802]: failed to fetch config: resource requires networking
Apr 16 23:55:22.699459 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:55:22.696531 ignition[802]: Ignition finished successfully
Apr 16 23:55:22.719924 systemd-networkd[878]: lo: Link UP
Apr 16 23:55:22.719933 systemd-networkd[878]: lo: Gained carrier
Apr 16 23:55:22.722292 systemd-networkd[878]: Enumeration completed
Apr 16 23:55:22.722369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:55:22.722812 systemd[1]: Reached target network.target - Network.
Apr 16 23:55:22.723472 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:22.723477 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:55:22.724869 systemd-networkd[878]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:22.724873 systemd-networkd[878]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:55:22.725384 systemd-networkd[878]: eth0: Link UP
Apr 16 23:55:22.725577 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:55:22.725924 systemd-networkd[878]: eth1: Link UP
Apr 16 23:55:22.726086 systemd-networkd[878]: eth0: Gained carrier
Apr 16 23:55:22.726093 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:22.737887 systemd-networkd[878]: eth1: Gained carrier
Apr 16 23:55:22.737898 systemd-networkd[878]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:22.752447 ignition[882]: Ignition 2.22.0
Apr 16 23:55:22.752456 ignition[882]: Stage: fetch
Apr 16 23:55:22.752550 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:22.752557 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:22.752615 ignition[882]: parsed url from cmdline: ""
Apr 16 23:55:22.752618 ignition[882]: no config URL provided
Apr 16 23:55:22.752623 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:55:22.752630 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:55:22.752653 ignition[882]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 16 23:55:22.752805 ignition[882]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 16 23:55:22.766171 systemd-networkd[878]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 23:55:22.796212 systemd-networkd[878]: eth0: DHCPv4 address 77.42.25.117/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 23:55:22.953123 ignition[882]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 16 23:55:22.966340 ignition[882]: GET result: OK
Apr 16 23:55:22.966459 ignition[882]: parsing config with SHA512: f97fdaf61d2bc136bbec18c4905d40d1e7896ecdc84591a436961b732a9287c73b1de3651977f0ed0ab47f3dc38caa1c5101ae925dfd186d29e8d1e53398adf3
Apr 16 23:55:22.972875 unknown[882]: fetched base config from "system"
Apr 16 23:55:22.972896 unknown[882]: fetched base config from "system"
Apr 16 23:55:22.973553 ignition[882]: fetch: fetch complete
Apr 16 23:55:22.972909 unknown[882]: fetched user config from "hetzner"
Apr 16 23:55:22.973566 ignition[882]: fetch: fetch passed
Apr 16 23:55:22.973652 ignition[882]: Ignition finished successfully
Apr 16 23:55:22.979740 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:55:22.983890 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:55:23.046210 ignition[890]: Ignition 2.22.0
Apr 16 23:55:23.046252 ignition[890]: Stage: kargs
Apr 16 23:55:23.046462 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:23.046482 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:23.047567 ignition[890]: kargs: kargs passed
Apr 16 23:55:23.047643 ignition[890]: Ignition finished successfully
Apr 16 23:55:23.052547 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:55:23.056631 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:55:23.098355 ignition[896]: Ignition 2.22.0
Apr 16 23:55:23.098379 ignition[896]: Stage: disks
Apr 16 23:55:23.098608 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:23.098627 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:23.100051 ignition[896]: disks: disks passed
Apr 16 23:55:23.100161 ignition[896]: Ignition finished successfully
Apr 16 23:55:23.103336 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:55:23.104430 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:55:23.105050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:55:23.105428 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:55:23.106451 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:55:23.107440 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:55:23.110216 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:55:23.139893 systemd-fsck[905]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Apr 16 23:55:23.143092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:55:23.145718 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:55:23.285178 kernel: EXT4-fs (sda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none.
Apr 16 23:55:23.285518 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:55:23.286425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:55:23.288380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:55:23.289422 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:55:23.293428 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 16 23:55:23.293808 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:55:23.293829 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:55:23.303700 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:55:23.306232 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:55:23.321433 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (913)
Apr 16 23:55:23.329035 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:55:23.329057 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:55:23.344584 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:55:23.344612 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:55:23.351102 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:55:23.356411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:55:23.364345 coreos-metadata[915]: Apr 16 23:55:23.364 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 16 23:55:23.365116 coreos-metadata[915]: Apr 16 23:55:23.365 INFO Fetch successful
Apr 16 23:55:23.366745 coreos-metadata[915]: Apr 16 23:55:23.366 INFO wrote hostname ci-4459-2-4-n-391826f4f6 to /sysroot/etc/hostname
Apr 16 23:55:23.368021 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:55:23.372070 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:55:23.376629 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:55:23.380610 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:55:23.384217 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:55:23.475258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:55:23.476879 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:55:23.477756 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:55:23.492155 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:55:23.505379 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:55:23.517082 ignition[1030]: INFO : Ignition 2.22.0
Apr 16 23:55:23.517082 ignition[1030]: INFO : Stage: mount
Apr 16 23:55:23.519213 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:23.519213 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:23.519213 ignition[1030]: INFO : mount: mount passed
Apr 16 23:55:23.519213 ignition[1030]: INFO : Ignition finished successfully
Apr 16 23:55:23.521375 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:55:23.522610 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:55:23.527887 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:55:23.539557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:55:23.563204 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1042)
Apr 16 23:55:23.566181 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 16 23:55:23.566243 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 23:55:23.575114 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 16 23:55:23.575169 kernel: BTRFS info (device sda6): turning on async discard
Apr 16 23:55:23.575179 kernel: BTRFS info (device sda6): enabling free space tree
Apr 16 23:55:23.578916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:55:23.609873 ignition[1058]: INFO : Ignition 2.22.0
Apr 16 23:55:23.609873 ignition[1058]: INFO : Stage: files
Apr 16 23:55:23.611380 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:23.611380 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:23.611380 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:55:23.612786 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:55:23.612786 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:55:23.615634 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:55:23.615953 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:55:23.616265 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:55:23.616119 unknown[1058]: wrote ssh authorized keys file for user: core
Apr 16 23:55:23.618141 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 23:55:23.618725 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 23:55:23.758842 systemd-networkd[878]: eth0: Gained IPv6LL
Apr 16 23:55:23.896382 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:55:24.299069 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 23:55:24.299069 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 23:55:24.302959 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 16 23:55:24.718623 systemd-networkd[878]: eth1: Gained IPv6LL
Apr 16 23:55:24.822025 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:55:27.407455 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 16 23:55:27.408454 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:55:27.408784 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:55:27.409502 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:55:27.409502 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:55:27.409502 ignition[1058]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:55:27.415201 ignition[1058]: INFO : files: files passed
Apr 16 23:55:27.415201 ignition[1058]: INFO : Ignition finished successfully
Apr 16 23:55:27.420234 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:55:27.424364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:55:27.429480 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:55:27.444582 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:55:27.444813 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:55:27.459688 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:55:27.459688 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:55:27.463328 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:55:27.465995 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:55:27.467251 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:55:27.469951 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:55:27.513905 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:55:27.514000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:55:27.515851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:55:27.516976 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:55:27.518457 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:55:27.520232 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:55:27.548460 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:55:27.550393 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:55:27.572813 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:55:27.573374 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:55:27.575725 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:55:27.576970 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:55:27.577104 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:55:27.578267 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:55:27.579080 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:55:27.579863 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:55:27.580660 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:55:27.581438 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:55:27.582197 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:55:27.582961 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:55:27.583787 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:55:27.584584 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:55:27.585364 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:55:27.586070 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:55:27.586847 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:55:27.586977 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:55:27.588058 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:55:27.588870 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:55:27.589577 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:55:27.589682 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:55:27.590396 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:55:27.590519 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:55:27.591617 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:55:27.591756 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:55:27.592462 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:55:27.592558 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:55:27.593307 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 16 23:55:27.593429 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 16 23:55:27.596223 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:55:27.596873 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:55:27.596981 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:55:27.600269 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:55:27.601337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:55:27.601971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:55:27.603281 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:55:27.603784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:55:27.609235 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:55:27.609334 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:55:27.626600 ignition[1113]: INFO : Ignition 2.22.0
Apr 16 23:55:27.626600 ignition[1113]: INFO : Stage: umount
Apr 16 23:55:27.630110 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:55:27.630110 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 16 23:55:27.630110 ignition[1113]: INFO : umount: umount passed
Apr 16 23:55:27.630110 ignition[1113]: INFO : Ignition finished successfully
Apr 16 23:55:27.628743 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:55:27.629670 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:55:27.629758 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:55:27.630826 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:55:27.630892 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:55:27.633219 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:55:27.633258 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:55:27.633697 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:55:27.633733 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:55:27.634060 systemd[1]: Stopped target network.target - Network.
Apr 16 23:55:27.634464 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:55:27.634504 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:55:27.635109 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:55:27.635774 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:55:27.639171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:55:27.639526 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:55:27.640109 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:55:27.640754 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:55:27.640791 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:55:27.641362 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:55:27.641392 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:55:27.641923 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:55:27.641963 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:55:27.642548 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:55:27.642583 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:55:27.643228 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:55:27.643777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:55:27.645818 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:55:27.645902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:55:27.646961 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:55:27.647051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:55:27.650027 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:55:27.650879 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:55:27.650931 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:55:27.651585 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:55:27.651625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:55:27.653504 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:55:27.654402 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:55:27.654504 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:55:27.656031 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:55:27.656345 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:55:27.656777 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:55:27.656806 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:55:27.658244 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 23:55:27.658581 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 23:55:27.658622 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 23:55:27.660215 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 23:55:27.660256 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:55:27.662160 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 23:55:27.662200 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 23:55:27.663242 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:55:27.664328 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 23:55:27.671387 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 23:55:27.671539 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:55:27.673841 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 23:55:27.673899 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 23:55:27.675234 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 23:55:27.675265 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 23:55:27.676566 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 23:55:27.676607 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 23:55:27.677678 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 23:55:27.677714 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 23:55:27.678702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 16 23:55:27.678741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 23:55:27.683294 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 23:55:27.683674 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 16 23:55:27.683716 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 23:55:27.684500 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 23:55:27.684535 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 23:55:27.685181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 23:55:27.685225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:55:27.686952 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 23:55:27.687049 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 23:55:27.699479 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 23:55:27.699576 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 23:55:27.700419 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 23:55:27.701505 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 23:55:27.715332 systemd[1]: Switching root. Apr 16 23:55:27.745750 systemd-journald[197]: Journal stopped Apr 16 23:55:28.801513 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). 
Apr 16 23:55:28.802147 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 23:55:28.802165 kernel: SELinux: policy capability open_perms=1 Apr 16 23:55:28.802173 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 23:55:28.802183 kernel: SELinux: policy capability always_check_network=0 Apr 16 23:55:28.802191 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 23:55:28.802199 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 23:55:28.802216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 23:55:28.802224 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 23:55:28.802232 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 23:55:28.802241 kernel: audit: type=1403 audit(1776383727.881:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 23:55:28.802253 systemd[1]: Successfully loaded SELinux policy in 58.755ms. Apr 16 23:55:28.802277 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.653ms. Apr 16 23:55:28.802286 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 23:55:28.802296 systemd[1]: Detected virtualization kvm. Apr 16 23:55:28.802304 systemd[1]: Detected architecture x86-64. Apr 16 23:55:28.802317 systemd[1]: Detected first boot. Apr 16 23:55:28.802325 systemd[1]: Hostname set to . Apr 16 23:55:28.802336 systemd[1]: Initializing machine ID from VM UUID. Apr 16 23:55:28.802348 zram_generator::config[1157]: No configuration found. 
Apr 16 23:55:28.802357 kernel: Guest personality initialized and is inactive Apr 16 23:55:28.802366 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 16 23:55:28.802374 kernel: Initialized host personality Apr 16 23:55:28.802382 kernel: NET: Registered PF_VSOCK protocol family Apr 16 23:55:28.802391 systemd[1]: Populated /etc with preset unit settings. Apr 16 23:55:28.802400 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 16 23:55:28.802410 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 23:55:28.802418 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 23:55:28.802429 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 23:55:28.802443 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 23:55:28.802452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 23:55:28.802462 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 23:55:28.802471 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 23:55:28.802480 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 23:55:28.802489 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 23:55:28.802500 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 23:55:28.802508 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 23:55:28.802517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 23:55:28.802526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 23:55:28.802535 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Apr 16 23:55:28.802544 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 23:55:28.802555 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 23:55:28.802565 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 23:55:28.802574 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 23:55:28.802583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 23:55:28.802591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 23:55:28.802601 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 23:55:28.802615 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 23:55:28.802624 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 23:55:28.802632 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 23:55:28.802643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 23:55:28.802652 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 23:55:28.802661 systemd[1]: Reached target slices.target - Slice Units. Apr 16 23:55:28.802670 systemd[1]: Reached target swap.target - Swaps. Apr 16 23:55:28.802679 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 23:55:28.802688 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 23:55:28.802696 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 23:55:28.802705 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 23:55:28.802714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 23:55:28.802723 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 16 23:55:28.802734 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 23:55:28.802743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 23:55:28.802752 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 23:55:28.802761 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 23:55:28.802770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:28.802779 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 23:55:28.802788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 23:55:28.802797 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 23:55:28.802809 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 23:55:28.802817 systemd[1]: Reached target machines.target - Containers. Apr 16 23:55:28.802826 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 23:55:28.802835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:55:28.802844 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 23:55:28.802852 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 23:55:28.802861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:55:28.802870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 23:55:28.802879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:55:28.802890 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 16 23:55:28.802899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:55:28.802908 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 23:55:28.802917 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 23:55:28.802926 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 23:55:28.802934 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 23:55:28.802943 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 23:55:28.802952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:55:28.802963 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 23:55:28.802972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 23:55:28.802981 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 23:55:28.802990 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 23:55:28.802999 kernel: ACPI: bus type drm_connector registered Apr 16 23:55:28.803008 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 23:55:28.803019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 23:55:28.803028 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 23:55:28.803037 systemd[1]: Stopped verity-setup.service. Apr 16 23:55:28.803046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 16 23:55:28.803062 kernel: loop: module loaded Apr 16 23:55:28.803070 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 23:55:28.803099 systemd-journald[1234]: Collecting audit messages is disabled. Apr 16 23:55:28.803123 systemd-journald[1234]: Journal started Apr 16 23:55:28.803466 systemd-journald[1234]: Runtime Journal (/run/log/journal/105e646545204e29bed3b0222767b238) is 8M, max 76.1M, 68.1M free. Apr 16 23:55:28.469643 systemd[1]: Queued start job for default target multi-user.target. Apr 16 23:55:28.482917 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 16 23:55:28.483646 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 23:55:28.809148 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 23:55:28.813581 kernel: fuse: init (API version 7.41) Apr 16 23:55:28.812700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 23:55:28.813231 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 23:55:28.814275 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 23:55:28.814764 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 23:55:28.816282 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 23:55:28.816928 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 23:55:28.817604 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 23:55:28.818347 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 23:55:28.818517 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 23:55:28.819175 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:55:28.819551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:55:28.820695 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 16 23:55:28.820895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 23:55:28.821502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:55:28.821774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:55:28.822609 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 23:55:28.822826 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 23:55:28.823475 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:55:28.823626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:55:28.824745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 23:55:28.825596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 23:55:28.826361 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 23:55:28.826995 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 16 23:55:28.838973 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 23:55:28.842216 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 23:55:28.846214 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 23:55:28.846572 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 23:55:28.846594 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 23:55:28.847661 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 16 23:55:28.857250 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 16 23:55:28.859288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:55:28.862236 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 23:55:28.866241 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 23:55:28.866609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:55:28.868068 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 23:55:28.868805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:55:28.870635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 23:55:28.876084 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 23:55:28.884010 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 23:55:28.891095 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 23:55:28.892297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 23:55:28.895430 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 23:55:28.902956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 23:55:28.907773 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 23:55:28.911175 systemd-journald[1234]: Time spent on flushing to /var/log/journal/105e646545204e29bed3b0222767b238 is 52.862ms for 1245 entries. Apr 16 23:55:28.911175 systemd-journald[1234]: System Journal (/var/log/journal/105e646545204e29bed3b0222767b238) is 8M, max 584.8M, 576.8M free. 
Apr 16 23:55:28.997691 systemd-journald[1234]: Received client request to flush runtime journal. Apr 16 23:55:28.997746 kernel: loop0: detected capacity change from 0 to 8 Apr 16 23:55:28.997776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 23:55:28.954578 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:55:28.960591 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:55:28.999645 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 23:55:29.000822 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 23:55:29.002615 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 23:55:29.009108 kernel: loop1: detected capacity change from 0 to 110984 Apr 16 23:55:29.009411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 23:55:29.054153 kernel: loop2: detected capacity change from 0 to 228704 Apr 16 23:55:29.052817 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Apr 16 23:55:29.052829 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Apr 16 23:55:29.062228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 23:55:29.099159 kernel: loop3: detected capacity change from 0 to 128560 Apr 16 23:55:29.141163 kernel: loop4: detected capacity change from 0 to 8 Apr 16 23:55:29.148158 kernel: loop5: detected capacity change from 0 to 110984 Apr 16 23:55:29.172173 kernel: loop6: detected capacity change from 0 to 228704 Apr 16 23:55:29.197153 kernel: loop7: detected capacity change from 0 to 128560 Apr 16 23:55:29.214070 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 16 23:55:29.215672 (sd-merge)[1308]: Merged extensions into '/usr'. 
Apr 16 23:55:29.222993 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 23:55:29.223011 systemd[1]: Reloading... Apr 16 23:55:29.322220 zram_generator::config[1334]: No configuration found. Apr 16 23:55:29.348748 ldconfig[1277]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 23:55:29.482776 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 23:55:29.482924 systemd[1]: Reloading finished in 259 ms. Apr 16 23:55:29.498756 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 23:55:29.499482 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 23:55:29.502724 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 23:55:29.512956 systemd[1]: Starting ensure-sysext.service... Apr 16 23:55:29.517311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 23:55:29.524245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:55:29.537974 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)... Apr 16 23:55:29.538036 systemd[1]: Reloading... Apr 16 23:55:29.560848 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 16 23:55:29.560878 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 23:55:29.561106 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 23:55:29.563756 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Apr 16 23:55:29.564678 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 23:55:29.566157 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Apr 16 23:55:29.566303 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Apr 16 23:55:29.573515 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 23:55:29.573527 systemd-tmpfiles[1379]: Skipping /boot Apr 16 23:55:29.591680 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 23:55:29.593454 systemd-udevd[1380]: Using default interface naming scheme 'v255'. Apr 16 23:55:29.593669 systemd-tmpfiles[1379]: Skipping /boot Apr 16 23:55:29.613165 zram_generator::config[1407]: No configuration found. Apr 16 23:55:29.811096 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 23:55:29.811432 systemd[1]: Reloading finished in 273 ms. Apr 16 23:55:29.821149 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 23:55:29.819988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:55:29.822534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 23:55:29.838153 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 16 23:55:29.860819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:29.865615 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:55:29.867699 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 23:55:29.868121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:55:29.870410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 16 23:55:29.876356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:55:29.878191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:55:29.878627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:55:29.878697 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:55:29.881352 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 23:55:29.887354 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 23:55:29.891360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 23:55:29.895350 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 23:55:29.895670 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:29.896945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:55:29.897109 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:55:29.904786 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:29.904951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:55:29.913189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:55:29.923283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 16 23:55:29.923766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:55:29.924269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:55:29.924592 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:29.925675 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:55:29.926756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:55:29.938688 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 16 23:55:29.938917 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 23:55:29.939076 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 23:55:29.938427 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 16 23:55:29.941779 systemd[1]: Finished ensure-sysext.service. Apr 16 23:55:29.945290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:29.945397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:55:29.947495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:55:29.947876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 16 23:55:29.947902 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:55:29.952109 kernel: ACPI: button: Power Button [PWRF] Apr 16 23:55:29.950385 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 23:55:29.952818 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 23:55:29.954211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 23:55:29.958258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:55:29.958438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:55:29.959819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:55:29.967998 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 23:55:29.994980 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 23:55:29.996572 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 23:55:29.998081 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 23:55:29.998382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 23:55:30.006520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:55:30.007112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 16 23:55:30.013350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 23:55:30.017438 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 23:55:30.029216 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:55:30.030341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:55:30.031198 augenrules[1539]: No rules Apr 16 23:55:30.031795 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:55:30.032639 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:55:30.034661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:55:30.050519 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 23:55:30.065568 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 16 23:55:30.065628 kernel: Console: switching to colour dummy device 80x25 Apr 16 23:55:30.067155 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 16 23:55:30.068464 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 23:55:30.076417 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 16 23:55:30.076458 kernel: [drm] features: -context_init Apr 16 23:55:30.113164 kernel: EDAC MC: Ver: 3.0.0 Apr 16 23:55:30.120153 kernel: [drm] number of scanouts: 1 Apr 16 23:55:30.121173 kernel: [drm] number of cap sets: 0 Apr 16 23:55:30.135167 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Apr 16 23:55:30.149677 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 16 23:55:30.149754 kernel: Console: switching to colour frame buffer device 160x50 Apr 16 23:55:30.167600 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 16 23:55:30.172778 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 16 23:55:30.182247 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:55:30.187186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:30.214674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:55:30.214959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:30.218302 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:55:30.220079 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:55:30.323513 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 23:55:30.324852 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 23:55:30.327209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:55:30.351245 systemd-networkd[1501]: lo: Link UP
Apr 16 23:55:30.351596 systemd-networkd[1501]: lo: Gained carrier
Apr 16 23:55:30.358450 systemd-networkd[1501]: Enumeration completed
Apr 16 23:55:30.358540 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:55:30.358972 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:30.358976 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:55:30.362299 systemd-networkd[1501]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:30.362311 systemd-networkd[1501]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:55:30.362501 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:55:30.362771 systemd-networkd[1501]: eth0: Link UP
Apr 16 23:55:30.362942 systemd-networkd[1501]: eth0: Gained carrier
Apr 16 23:55:30.362953 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:30.364608 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:55:30.367341 systemd-resolved[1505]: Positive Trust Anchors:
Apr 16 23:55:30.367351 systemd-resolved[1505]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:55:30.367375 systemd-resolved[1505]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:55:30.372828 systemd-resolved[1505]: Using system hostname 'ci-4459-2-4-n-391826f4f6'.
Apr 16 23:55:30.373357 systemd-networkd[1501]: eth1: Link UP
Apr 16 23:55:30.373902 systemd-networkd[1501]: eth1: Gained carrier
Apr 16 23:55:30.373929 systemd-networkd[1501]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:55:30.377643 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:55:30.379282 systemd[1]: Reached target network.target - Network.
Apr 16 23:55:30.379951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:55:30.381513 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:55:30.381670 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:55:30.381768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 23:55:30.381857 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 16 23:55:30.382081 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 23:55:30.382259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 23:55:30.382329 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 23:55:30.382400 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 23:55:30.382418 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:55:30.384461 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:55:30.389625 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 23:55:30.392225 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 23:55:30.395793 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 23:55:30.398971 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 23:55:30.399787 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 23:55:30.402730 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 23:55:30.407549 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 23:55:30.410188 systemd-networkd[1501]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 16 23:55:30.410332 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 23:55:30.411362 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection.
Apr 16 23:55:30.412765 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 23:55:30.415673 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:55:30.418251 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:55:30.418804 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:55:30.418836 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:55:30.419911 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 23:55:30.424259 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 23:55:30.428257 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 23:55:30.430613 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 23:55:30.433247 systemd-networkd[1501]: eth0: DHCPv4 address 77.42.25.117/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 16 23:55:30.434326 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection.
Apr 16 23:55:30.438290 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 23:55:30.441238 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 23:55:30.444182 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 23:55:30.444995 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 16 23:55:30.449346 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 23:55:30.456416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 23:55:30.459560 jq[1597]: false
Apr 16 23:55:30.461422 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 16 23:55:30.466965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 23:55:30.475273 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 23:55:30.485092 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 23:55:30.488923 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 23:55:30.490410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 23:55:30.491595 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 23:55:30.494174 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing passwd entry cache
Apr 16 23:55:30.494070 oslogin_cache_refresh[1599]: Refreshing passwd entry cache
Apr 16 23:55:30.495627 extend-filesystems[1598]: Found /dev/sda6
Apr 16 23:55:30.499302 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 23:55:30.502638 coreos-metadata[1594]: Apr 16 23:55:30.502 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 16 23:55:30.504814 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting users, quitting
Apr 16 23:55:30.504814 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 23:55:30.504814 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing group entry cache
Apr 16 23:55:30.504490 oslogin_cache_refresh[1599]: Failure getting users, quitting
Apr 16 23:55:30.504505 oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 16 23:55:30.504552 oslogin_cache_refresh[1599]: Refreshing group entry cache
Apr 16 23:55:30.506352 coreos-metadata[1594]: Apr 16 23:55:30.505 INFO Fetch successful
Apr 16 23:55:30.506352 coreos-metadata[1594]: Apr 16 23:55:30.505 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 16 23:55:30.506407 extend-filesystems[1598]: Found /dev/sda9
Apr 16 23:55:30.505284 oslogin_cache_refresh[1599]: Failure getting groups, quitting
Apr 16 23:55:30.512243 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting groups, quitting
Apr 16 23:55:30.512243 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 23:55:30.512275 coreos-metadata[1594]: Apr 16 23:55:30.509 INFO Fetch successful
Apr 16 23:55:30.507900 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 23:55:30.505293 oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 16 23:55:30.512513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 23:55:30.514720 extend-filesystems[1598]: Checking size of /dev/sda9
Apr 16 23:55:30.514984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 23:55:30.516427 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 16 23:55:30.516623 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 16 23:55:30.524096 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 23:55:30.525349 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 23:55:30.531184 jq[1619]: true
Apr 16 23:55:30.531476 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 23:55:30.531683 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 23:55:30.547509 jq[1631]: true
Apr 16 23:55:30.563324 extend-filesystems[1598]: Resized partition /dev/sda9
Apr 16 23:55:30.588684 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 16 23:55:30.589635 extend-filesystems[1648]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 23:55:30.595905 update_engine[1618]: I20260416 23:55:30.580838 1618 main.cc:92] Flatcar Update Engine starting
Apr 16 23:55:30.575947 (ntainerd)[1632]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 23:55:30.614910 systemd-logind[1616]: New seat seat0.
Apr 16 23:55:30.626772 tar[1629]: linux-amd64/LICENSE
Apr 16 23:55:30.627884 tar[1629]: linux-amd64/helm
Apr 16 23:55:30.628610 dbus-daemon[1595]: [system] SELinux support is enabled
Apr 16 23:55:30.628738 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 23:55:30.645173 update_engine[1618]: I20260416 23:55:30.645017 1618 update_check_scheduler.cc:74] Next update check in 11m40s
Apr 16 23:55:30.658070 systemd-logind[1616]: Watching system buttons on /dev/input/event3 (Power Button)
Apr 16 23:55:30.658099 systemd-logind[1616]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 23:55:30.661529 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 23:55:30.661593 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 23:55:30.666019 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 23:55:30.666050 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 23:55:30.670799 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 23:55:30.675852 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 23:55:30.694240 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 23:55:30.703163 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 16 23:55:30.705997 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 23:55:30.744175 bash[1673]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:55:30.745046 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 23:55:30.754437 systemd[1]: Starting sshkeys.service...
Apr 16 23:55:30.786378 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 23:55:30.792780 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 23:55:30.846206 containerd[1632]: time="2026-04-16T23:55:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 23:55:30.850403 containerd[1632]: time="2026-04-16T23:55:30.850359708Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 23:55:30.863164 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 16 23:55:30.867473 containerd[1632]: time="2026-04-16T23:55:30.867430658Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.312µs"
Apr 16 23:55:30.867473 containerd[1632]: time="2026-04-16T23:55:30.867465901Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 23:55:30.867522 containerd[1632]: time="2026-04-16T23:55:30.867484599Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 23:55:30.876153 containerd[1632]: time="2026-04-16T23:55:30.875405628Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 23:55:30.876153 containerd[1632]: time="2026-04-16T23:55:30.875614731Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 23:55:30.876153 containerd[1632]: time="2026-04-16T23:55:30.875654551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:55:30.879142 containerd[1632]: time="2026-04-16T23:55:30.877749706Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:55:30.879142 containerd[1632]: time="2026-04-16T23:55:30.877776416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:55:30.879142 containerd[1632]: time="2026-04-16T23:55:30.878331359Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:55:30.879142 containerd[1632]: time="2026-04-16T23:55:30.878350297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:55:30.879142 containerd[1632]: time="2026-04-16T23:55:30.878363497Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:55:30.879242 extend-filesystems[1648]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 16 23:55:30.879242 extend-filesystems[1648]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 16 23:55:30.879242 extend-filesystems[1648]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 16 23:55:30.895343 extend-filesystems[1598]: Resized filesystem in /dev/sda9
Apr 16 23:55:30.896939 coreos-metadata[1680]: Apr 16 23:55:30.889 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 16 23:55:30.896939 coreos-metadata[1680]: Apr 16 23:55:30.893 INFO Fetch successful
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.879699742Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.880706333Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.881445482Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.881510991Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.881525182Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.882496249Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.883239675Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.883344492Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.888282203Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.888583996Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.888607832Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.888623295Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.888638237Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 23:55:30.901223 containerd[1632]: time="2026-04-16T23:55:30.888650165Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 23:55:30.880643 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.888663145Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.888676575Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.888690596Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.888702985Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.888903856Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.888926049Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.889057807Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.889081763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.889098548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.889110977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.889122765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.890355445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.890402335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.890415645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 23:55:30.903160 containerd[1632]: time="2026-04-16T23:55:30.890425660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 23:55:30.881239 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 23:55:30.903436 containerd[1632]: time="2026-04-16T23:55:30.890436056Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 23:55:30.903436 containerd[1632]: time="2026-04-16T23:55:30.890449586Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 16 23:55:30.903436 containerd[1632]: time="2026-04-16T23:55:30.890500432Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 16 23:55:30.903436 containerd[1632]: time="2026-04-16T23:55:30.890515725Z" level=info msg="Start snapshots syncer"
Apr 16 23:55:30.903436 containerd[1632]: time="2026-04-16T23:55:30.891214614Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 16 23:55:30.896750 unknown[1680]: wrote ssh authorized keys file for user: core
Apr 16 23:55:30.903634 containerd[1632]: time="2026-04-16T23:55:30.892437379Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 23:55:30.903634 containerd[1632]: time="2026-04-16T23:55:30.893378582Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.896219385Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.896348670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.896374218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897080408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897102581Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897124043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897265706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897280819Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897308390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897322181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.897334559Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.898173739Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.898197474Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:55:30.903726 containerd[1632]: time="2026-04-16T23:55:30.898648181Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898668231Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898679968Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898694901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898716824Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898736133Z" level=info msg="runtime interface created"
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898744285Z" level=info msg="created NRI interface"
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898753359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898766939Z" level=info msg="Connect containerd service"
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.898790284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 16 23:55:30.903909 containerd[1632]: time="2026-04-16T23:55:30.903151029Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 23:55:30.956307 update-ssh-keys[1690]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:55:30.956857 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 16 23:55:30.964706 systemd[1]: Finished sshkeys.service.
Apr 16 23:55:31.017436 locksmithd[1667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 23:55:31.028451 containerd[1632]: time="2026-04-16T23:55:31.028405635Z" level=info msg="Start subscribing containerd event"
Apr 16 23:55:31.028555 containerd[1632]: time="2026-04-16T23:55:31.028522771Z" level=info msg="Start recovering state"
Apr 16 23:55:31.028782 containerd[1632]: time="2026-04-16T23:55:31.028765745Z" level=info msg="Start event monitor"
Apr 16 23:55:31.028823 containerd[1632]: time="2026-04-16T23:55:31.028783622Z" level=info msg="Start cni network conf syncer for default"
Apr 16 23:55:31.028879 containerd[1632]: time="2026-04-16T23:55:31.028851033Z" level=info msg="Start streaming server"
Apr 16 23:55:31.028897 containerd[1632]: time="2026-04-16T23:55:31.028882060Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 16 23:55:31.028897 containerd[1632]: time="2026-04-16T23:55:31.028890192Z" level=info msg="runtime interface starting up..."
Apr 16 23:55:31.028897 containerd[1632]: time="2026-04-16T23:55:31.028896712Z" level=info msg="starting plugins..."
Apr 16 23:55:31.029074 containerd[1632]: time="2026-04-16T23:55:31.029056822Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 16 23:55:31.029723 containerd[1632]: time="2026-04-16T23:55:31.029691905Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 16 23:55:31.030001 containerd[1632]: time="2026-04-16T23:55:31.029979457Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 16 23:55:31.030622 systemd[1]: Started containerd.service - containerd container runtime.
Apr 16 23:55:31.034250 containerd[1632]: time="2026-04-16T23:55:31.034226371Z" level=info msg="containerd successfully booted in 0.190574s"
Apr 16 23:55:31.091413 sshd_keygen[1628]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 23:55:31.110618 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 23:55:31.117317 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 23:55:31.136787 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 23:55:31.137087 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 23:55:31.142293 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 23:55:31.146221 tar[1629]: linux-amd64/README.md
Apr 16 23:55:31.165336 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 23:55:31.169409 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 23:55:31.174017 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 23:55:31.175814 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 23:55:31.180058 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 16 23:55:31.630429 systemd-networkd[1501]: eth1: Gained IPv6LL
Apr 16 23:55:31.631542 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection.
Apr 16 23:55:31.635969 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 23:55:31.639663 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 23:55:31.645845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:55:31.651301 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 23:55:31.703628 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:55:31.822599 systemd-networkd[1501]: eth0: Gained IPv6LL Apr 16 23:55:31.824296 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Apr 16 23:55:32.643457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:55:32.646731 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:55:32.648775 systemd[1]: Startup finished in 2.962s (kernel) + 8.214s (initrd) + 4.824s (userspace) = 16.001s. Apr 16 23:55:32.655488 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:55:33.300445 kubelet[1746]: E0416 23:55:33.300342 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:55:33.306356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:55:33.306553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:55:33.306914 systemd[1]: kubelet.service: Consumed 952ms CPU time, 266.5M memory peak. Apr 16 23:55:33.408498 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 23:55:33.410717 systemd[1]: Started sshd@0-77.42.25.117:22-4.175.71.9:42048.service - OpenSSH per-connection server daemon (4.175.71.9:42048). 
Apr 16 23:55:33.651746 sshd[1758]: Accepted publickey for core from 4.175.71.9 port 42048 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:33.655562 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:33.668709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:55:33.676749 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:55:33.703043 systemd-logind[1616]: New session 1 of user core. Apr 16 23:55:33.729569 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 23:55:33.734686 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 23:55:33.746537 (systemd)[1763]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 23:55:33.749407 systemd-logind[1616]: New session c1 of user core. Apr 16 23:55:33.862234 systemd[1763]: Queued start job for default target default.target. Apr 16 23:55:33.869065 systemd[1763]: Created slice app.slice - User Application Slice. Apr 16 23:55:33.869204 systemd[1763]: Reached target paths.target - Paths. Apr 16 23:55:33.869346 systemd[1763]: Reached target timers.target - Timers. Apr 16 23:55:33.870671 systemd[1763]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 23:55:33.889529 systemd[1763]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 23:55:33.889703 systemd[1763]: Reached target sockets.target - Sockets. Apr 16 23:55:33.889774 systemd[1763]: Reached target basic.target - Basic System. Apr 16 23:55:33.889929 systemd[1763]: Reached target default.target - Main User Target. Apr 16 23:55:33.890020 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 23:55:33.890120 systemd[1763]: Startup finished in 135ms. Apr 16 23:55:33.899247 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 16 23:55:34.009278 systemd[1]: Started sshd@1-77.42.25.117:22-4.175.71.9:42064.service - OpenSSH per-connection server daemon (4.175.71.9:42064). Apr 16 23:55:34.222312 sshd[1774]: Accepted publickey for core from 4.175.71.9 port 42064 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:34.225360 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:34.235240 systemd-logind[1616]: New session 2 of user core. Apr 16 23:55:34.251473 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 23:55:34.332716 sshd[1777]: Connection closed by 4.175.71.9 port 42064 Apr 16 23:55:34.333481 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:34.339694 systemd[1]: sshd@1-77.42.25.117:22-4.175.71.9:42064.service: Deactivated successfully. Apr 16 23:55:34.343586 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 23:55:34.347063 systemd-logind[1616]: Session 2 logged out. Waiting for processes to exit. Apr 16 23:55:34.349440 systemd-logind[1616]: Removed session 2. Apr 16 23:55:34.381832 systemd[1]: Started sshd@2-77.42.25.117:22-4.175.71.9:42074.service - OpenSSH per-connection server daemon (4.175.71.9:42074). Apr 16 23:55:34.599589 sshd[1783]: Accepted publickey for core from 4.175.71.9 port 42074 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:34.602424 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:34.612068 systemd-logind[1616]: New session 3 of user core. Apr 16 23:55:34.618351 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 23:55:34.692917 sshd[1786]: Connection closed by 4.175.71.9 port 42074 Apr 16 23:55:34.695435 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:34.702663 systemd-logind[1616]: Session 3 logged out. Waiting for processes to exit. 
Apr 16 23:55:34.704487 systemd[1]: sshd@2-77.42.25.117:22-4.175.71.9:42074.service: Deactivated successfully. Apr 16 23:55:34.708264 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 23:55:34.710994 systemd-logind[1616]: Removed session 3. Apr 16 23:55:34.736563 systemd[1]: Started sshd@3-77.42.25.117:22-4.175.71.9:42084.service - OpenSSH per-connection server daemon (4.175.71.9:42084). Apr 16 23:55:34.954320 sshd[1792]: Accepted publickey for core from 4.175.71.9 port 42084 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:34.956320 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:34.964750 systemd-logind[1616]: New session 4 of user core. Apr 16 23:55:34.973357 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 23:55:35.054522 sshd[1795]: Connection closed by 4.175.71.9 port 42084 Apr 16 23:55:35.056437 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:35.064094 systemd-logind[1616]: Session 4 logged out. Waiting for processes to exit. Apr 16 23:55:35.065592 systemd[1]: sshd@3-77.42.25.117:22-4.175.71.9:42084.service: Deactivated successfully. Apr 16 23:55:35.069082 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 23:55:35.071994 systemd-logind[1616]: Removed session 4. Apr 16 23:55:35.104604 systemd[1]: Started sshd@4-77.42.25.117:22-4.175.71.9:42100.service - OpenSSH per-connection server daemon (4.175.71.9:42100). Apr 16 23:55:35.301264 sshd[1801]: Accepted publickey for core from 4.175.71.9 port 42100 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:35.302468 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:35.307737 systemd-logind[1616]: New session 5 of user core. Apr 16 23:55:35.319342 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 23:55:35.383789 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 23:55:35.384450 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:55:35.404795 sudo[1805]: pam_unix(sudo:session): session closed for user root Apr 16 23:55:35.435850 sshd[1804]: Connection closed by 4.175.71.9 port 42100 Apr 16 23:55:35.437554 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:35.442464 systemd-logind[1616]: Session 5 logged out. Waiting for processes to exit. Apr 16 23:55:35.442955 systemd[1]: sshd@4-77.42.25.117:22-4.175.71.9:42100.service: Deactivated successfully. Apr 16 23:55:35.445392 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 23:55:35.447416 systemd-logind[1616]: Removed session 5. Apr 16 23:55:35.479310 systemd[1]: Started sshd@5-77.42.25.117:22-4.175.71.9:37282.service - OpenSSH per-connection server daemon (4.175.71.9:37282). Apr 16 23:55:35.672850 sshd[1811]: Accepted publickey for core from 4.175.71.9 port 37282 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:35.674698 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:35.680737 systemd-logind[1616]: New session 6 of user core. Apr 16 23:55:35.688378 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 23:55:35.746024 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 23:55:35.747669 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:55:35.756002 sudo[1816]: pam_unix(sudo:session): session closed for user root Apr 16 23:55:35.769392 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 23:55:35.770058 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:55:35.788726 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:55:35.847759 augenrules[1838]: No rules Apr 16 23:55:35.849842 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:55:35.850454 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:55:35.852365 sudo[1815]: pam_unix(sudo:session): session closed for user root Apr 16 23:55:35.883862 sshd[1814]: Connection closed by 4.175.71.9 port 37282 Apr 16 23:55:35.885441 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Apr 16 23:55:35.891729 systemd[1]: sshd@5-77.42.25.117:22-4.175.71.9:37282.service: Deactivated successfully. Apr 16 23:55:35.894888 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 23:55:35.898761 systemd-logind[1616]: Session 6 logged out. Waiting for processes to exit. Apr 16 23:55:35.901253 systemd-logind[1616]: Removed session 6. Apr 16 23:55:35.928628 systemd[1]: Started sshd@6-77.42.25.117:22-4.175.71.9:37298.service - OpenSSH per-connection server daemon (4.175.71.9:37298). 
Apr 16 23:55:36.128299 sshd[1847]: Accepted publickey for core from 4.175.71.9 port 37298 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg Apr 16 23:55:36.129945 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:55:36.137941 systemd-logind[1616]: New session 7 of user core. Apr 16 23:55:36.149313 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 23:55:36.202992 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 23:55:36.203907 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:55:36.543639 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 23:55:36.560834 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 23:55:36.783158 dockerd[1868]: time="2026-04-16T23:55:36.783083377Z" level=info msg="Starting up" Apr 16 23:55:36.783754 dockerd[1868]: time="2026-04-16T23:55:36.783718280Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 23:55:36.797877 dockerd[1868]: time="2026-04-16T23:55:36.797776689Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 23:55:36.817501 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1340532613-merged.mount: Deactivated successfully. Apr 16 23:55:36.897154 dockerd[1868]: time="2026-04-16T23:55:36.897101199Z" level=info msg="Loading containers: start." Apr 16 23:55:36.908147 kernel: Initializing XFRM netlink socket Apr 16 23:55:37.125795 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Apr 16 23:55:37.133081 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. 
Apr 16 23:55:37.172178 systemd-networkd[1501]: docker0: Link UP Apr 16 23:55:37.172420 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Apr 16 23:55:37.175147 dockerd[1868]: time="2026-04-16T23:55:37.175102252Z" level=info msg="Loading containers: done." Apr 16 23:55:37.188398 dockerd[1868]: time="2026-04-16T23:55:37.188352538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 23:55:37.188531 dockerd[1868]: time="2026-04-16T23:55:37.188422033Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 23:55:37.188531 dockerd[1868]: time="2026-04-16T23:55:37.188490786Z" level=info msg="Initializing buildkit" Apr 16 23:55:37.206564 dockerd[1868]: time="2026-04-16T23:55:37.206530010Z" level=info msg="Completed buildkit initialization" Apr 16 23:55:37.211585 dockerd[1868]: time="2026-04-16T23:55:37.211554841Z" level=info msg="Daemon has completed initialization" Apr 16 23:55:37.211865 dockerd[1868]: time="2026-04-16T23:55:37.211830134Z" level=info msg="API listen on /run/docker.sock" Apr 16 23:55:37.212249 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 23:55:37.671481 containerd[1632]: time="2026-04-16T23:55:37.671411930Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 16 23:55:38.261417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974407180.mount: Deactivated successfully. 
Apr 16 23:55:39.626294 containerd[1632]: time="2026-04-16T23:55:39.626239871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:39.627628 containerd[1632]: time="2026-04-16T23:55:39.627462536Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30194089" Apr 16 23:55:39.628716 containerd[1632]: time="2026-04-16T23:55:39.628693223Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:39.630945 containerd[1632]: time="2026-04-16T23:55:39.630925123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:39.631590 containerd[1632]: time="2026-04-16T23:55:39.631571292Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.960109508s" Apr 16 23:55:39.631650 containerd[1632]: time="2026-04-16T23:55:39.631640266Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 16 23:55:39.632057 containerd[1632]: time="2026-04-16T23:55:39.632007958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 16 23:55:41.373303 containerd[1632]: time="2026-04-16T23:55:41.373258454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:41.374398 containerd[1632]: time="2026-04-16T23:55:41.374305826Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171469" Apr 16 23:55:41.375675 containerd[1632]: time="2026-04-16T23:55:41.375652167Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:41.378003 containerd[1632]: time="2026-04-16T23:55:41.377986140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:41.378739 containerd[1632]: time="2026-04-16T23:55:41.378620412Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.746594918s" Apr 16 23:55:41.378739 containerd[1632]: time="2026-04-16T23:55:41.378642435Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 16 23:55:41.379242 containerd[1632]: time="2026-04-16T23:55:41.379205199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 16 23:55:42.456010 containerd[1632]: time="2026-04-16T23:55:42.455947370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:42.457327 containerd[1632]: 
time="2026-04-16T23:55:42.457154141Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289778" Apr 16 23:55:42.458388 containerd[1632]: time="2026-04-16T23:55:42.458364758Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:42.460819 containerd[1632]: time="2026-04-16T23:55:42.460794194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:42.462226 containerd[1632]: time="2026-04-16T23:55:42.462196509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.0829382s" Apr 16 23:55:42.462274 containerd[1632]: time="2026-04-16T23:55:42.462228817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 16 23:55:42.462878 containerd[1632]: time="2026-04-16T23:55:42.462842008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 16 23:55:43.416752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673007651.mount: Deactivated successfully. Apr 16 23:55:43.418157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 23:55:43.422262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:55:43.569401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:55:43.575441 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:55:43.608447 kubelet[2159]: E0416 23:55:43.608402 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:55:43.613329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:55:43.613485 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:55:43.613802 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110.1M memory peak. Apr 16 23:55:43.757298 containerd[1632]: time="2026-04-16T23:55:43.757198120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:43.758409 containerd[1632]: time="2026-04-16T23:55:43.758324321Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010739" Apr 16 23:55:43.759180 containerd[1632]: time="2026-04-16T23:55:43.759152984Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:43.760788 containerd[1632]: time="2026-04-16T23:55:43.760769350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:43.761197 containerd[1632]: time="2026-04-16T23:55:43.761178284Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.298313111s" Apr 16 23:55:43.761253 containerd[1632]: time="2026-04-16T23:55:43.761243952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 16 23:55:43.761693 containerd[1632]: time="2026-04-16T23:55:43.761673257Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 16 23:55:44.272956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2238605664.mount: Deactivated successfully. Apr 16 23:55:44.984191 containerd[1632]: time="2026-04-16T23:55:44.984136440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:44.985328 containerd[1632]: time="2026-04-16T23:55:44.985149711Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Apr 16 23:55:44.986212 containerd[1632]: time="2026-04-16T23:55:44.986189752Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:44.988179 containerd[1632]: time="2026-04-16T23:55:44.988151977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:44.988733 containerd[1632]: time="2026-04-16T23:55:44.988712098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.226949577s" Apr 16 23:55:44.988765 containerd[1632]: time="2026-04-16T23:55:44.988735853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 16 23:55:44.989221 containerd[1632]: time="2026-04-16T23:55:44.989099659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 16 23:55:45.474066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904616788.mount: Deactivated successfully. Apr 16 23:55:45.483274 containerd[1632]: time="2026-04-16T23:55:45.483038529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:55:45.484929 containerd[1632]: time="2026-04-16T23:55:45.484840914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 16 23:55:45.486184 containerd[1632]: time="2026-04-16T23:55:45.486071421Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:55:45.490429 containerd[1632]: time="2026-04-16T23:55:45.490327128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:55:45.491292 containerd[1632]: time="2026-04-16T23:55:45.491238716Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.060048ms" Apr 16 23:55:45.491292 containerd[1632]: time="2026-04-16T23:55:45.491283604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 16 23:55:45.492660 containerd[1632]: time="2026-04-16T23:55:45.492581311Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 16 23:55:46.059108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816839518.mount: Deactivated successfully. Apr 16 23:55:46.591226 systemd[1]: Started sshd@7-77.42.25.117:22-184.181.217.198:40182.service - OpenSSH per-connection server daemon (184.181.217.198:40182). Apr 16 23:55:46.899668 containerd[1632]: time="2026-04-16T23:55:46.899450718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:46.900730 containerd[1632]: time="2026-04-16T23:55:46.900704850Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719532" Apr 16 23:55:46.901717 containerd[1632]: time="2026-04-16T23:55:46.901682117Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:46.903928 containerd[1632]: time="2026-04-16T23:55:46.903717672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:55:46.904718 containerd[1632]: time="2026-04-16T23:55:46.904359835Z" level=info 
msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.411464443s"
Apr 16 23:55:46.904718 containerd[1632]: time="2026-04-16T23:55:46.904384172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 16 23:55:49.333917 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:55:49.334256 systemd[1]: kubelet.service: Consumed 142ms CPU time, 110.1M memory peak.
Apr 16 23:55:49.336019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:55:49.358057 systemd[1]: Reload requested from client PID 2316 ('systemctl') (unit session-7.scope)...
Apr 16 23:55:49.358076 systemd[1]: Reloading...
Apr 16 23:55:49.471164 zram_generator::config[2362]: No configuration found.
Apr 16 23:55:49.660376 systemd[1]: Reloading finished in 301 ms.
Apr 16 23:55:49.700289 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:55:49.700407 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:55:49.700693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:55:49.700747 systemd[1]: kubelet.service: Consumed 109ms CPU time, 98.3M memory peak.
Apr 16 23:55:49.702349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:55:49.899658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:55:49.911813 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:55:49.958301 kubelet[2414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:55:49.958301 kubelet[2414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:55:49.958301 kubelet[2414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:55:49.958691 kubelet[2414]: I0416 23:55:49.958395 2414 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:55:50.338437 kubelet[2414]: I0416 23:55:50.338296 2414 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 23:55:50.338437 kubelet[2414]: I0416 23:55:50.338319 2414 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:55:50.338624 kubelet[2414]: I0416 23:55:50.338481 2414 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:55:50.365412 kubelet[2414]: E0416 23:55:50.365335 2414 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://77.42.25.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 77.42.25.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:55:50.367985 kubelet[2414]: I0416 23:55:50.367935 2414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:55:50.378447 kubelet[2414]: I0416 23:55:50.378405 2414 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:55:50.385873 kubelet[2414]: I0416 23:55:50.385840 2414 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 23:55:50.387403 kubelet[2414]: I0416 23:55:50.387345 2414 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:55:50.387652 kubelet[2414]: I0416 23:55:50.387397 2414 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-391826f4f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:55:50.387744 kubelet[2414]: I0416 23:55:50.387660 2414 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:55:50.387744 kubelet[2414]: I0416 23:55:50.387678 2414 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 23:55:50.387917 kubelet[2414]: I0416 23:55:50.387893 2414 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:55:50.397729 kubelet[2414]: I0416 23:55:50.397689 2414 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 23:55:50.397787 kubelet[2414]: I0416 23:55:50.397742 2414 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:55:50.397808 kubelet[2414]: I0416 23:55:50.397797 2414 kubelet.go:386] "Adding apiserver pod source"
Apr 16 23:55:50.400753 kubelet[2414]: I0416 23:55:50.400463 2414 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:55:50.401546 kubelet[2414]: E0416 23:55:50.401523 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://77.42.25.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-391826f4f6&limit=500&resourceVersion=0\": dial tcp 77.42.25.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 23:55:50.405094 kubelet[2414]: I0416 23:55:50.405064 2414 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:55:50.405933 kubelet[2414]: I0416 23:55:50.405894 2414 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:55:50.407091 kubelet[2414]: W0416 23:55:50.407005 2414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:55:50.408261 kubelet[2414]: E0416 23:55:50.408213 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://77.42.25.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 77.42.25.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 23:55:50.412881 kubelet[2414]: I0416 23:55:50.412850 2414 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 23:55:50.413285 kubelet[2414]: I0416 23:55:50.412924 2414 server.go:1289] "Started kubelet"
Apr 16 23:55:50.414957 kubelet[2414]: I0416 23:55:50.414890 2414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:55:50.416220 kubelet[2414]: E0416 23:55:50.415188 2414 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://77.42.25.117:6443/api/v1/namespaces/default/events\": dial tcp 77.42.25.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-391826f4f6.18a6fb9bfc22b84d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-391826f4f6,UID:ci-4459-2-4-n-391826f4f6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-391826f4f6,},FirstTimestamp:2026-04-16 23:55:50.412875853 +0000 UTC m=+0.494521205,LastTimestamp:2026-04-16 23:55:50.412875853 +0000 UTC m=+0.494521205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-391826f4f6,}"
Apr 16 23:55:50.417325 kubelet[2414]: E0416 23:55:50.417314 2414 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:55:50.417474 kubelet[2414]: I0416 23:55:50.417440 2414 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:55:50.418085 kubelet[2414]: I0416 23:55:50.418074 2414 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 23:55:50.420811 kubelet[2414]: I0416 23:55:50.420777 2414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:55:50.421015 kubelet[2414]: I0416 23:55:50.421006 2414 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:55:50.421201 kubelet[2414]: I0416 23:55:50.421190 2414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:55:50.426237 kubelet[2414]: E0416 23:55:50.423324 2414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-391826f4f6\" not found"
Apr 16 23:55:50.426237 kubelet[2414]: I0416 23:55:50.423360 2414 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 23:55:50.426237 kubelet[2414]: I0416 23:55:50.423464 2414 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 23:55:50.426237 kubelet[2414]: I0416 23:55:50.423492 2414 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 23:55:50.426237 kubelet[2414]: E0416 23:55:50.423687 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://77.42.25.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 77.42.25.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 23:55:50.426795 kubelet[2414]: E0416 23:55:50.423958 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-391826f4f6?timeout=10s\": dial tcp 77.42.25.117:6443: connect: connection refused" interval="200ms"
Apr 16 23:55:50.428203 kubelet[2414]: I0416 23:55:50.428190 2414 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:55:50.428304 kubelet[2414]: I0416 23:55:50.428293 2414 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:55:50.429362 kubelet[2414]: I0416 23:55:50.429353 2414 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:55:50.448757 kubelet[2414]: I0416 23:55:50.448718 2414 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:55:50.449835 kubelet[2414]: I0416 23:55:50.449806 2414 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:55:50.449835 kubelet[2414]: I0416 23:55:50.449835 2414 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 23:55:50.449889 kubelet[2414]: I0416 23:55:50.449849 2414 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:55:50.449889 kubelet[2414]: I0416 23:55:50.449855 2414 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 16 23:55:50.449889 kubelet[2414]: E0416 23:55:50.449885 2414 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:55:50.453058 kubelet[2414]: E0416 23:55:50.453039 2414 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://77.42.25.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 77.42.25.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 23:55:50.455216 kubelet[2414]: I0416 23:55:50.455084 2414 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 23:55:50.455216 kubelet[2414]: I0416 23:55:50.455095 2414 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 23:55:50.455216 kubelet[2414]: I0416 23:55:50.455109 2414 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:55:50.457851 kubelet[2414]: I0416 23:55:50.457841 2414 policy_none.go:49] "None policy: Start"
Apr 16 23:55:50.457907 kubelet[2414]: I0416 23:55:50.457900 2414 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 23:55:50.457947 kubelet[2414]: I0416 23:55:50.457941 2414 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 23:55:50.463452 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 16 23:55:50.475029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 16 23:55:50.477539 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 16 23:55:50.488062 kubelet[2414]: E0416 23:55:50.487949 2414 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:55:50.488942 kubelet[2414]: I0416 23:55:50.488843 2414 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 23:55:50.488942 kubelet[2414]: I0416 23:55:50.488861 2414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:55:50.489300 kubelet[2414]: I0416 23:55:50.489097 2414 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 23:55:50.490197 kubelet[2414]: E0416 23:55:50.490185 2414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:55:50.490276 kubelet[2414]: E0416 23:55:50.490257 2414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-391826f4f6\" not found"
Apr 16 23:55:50.570681 systemd[1]: Created slice kubepods-burstable-podfe9dd38790a2af58db92582025b293e3.slice - libcontainer container kubepods-burstable-podfe9dd38790a2af58db92582025b293e3.slice.
Apr 16 23:55:50.583890 kubelet[2414]: E0416 23:55:50.583813 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.587298 systemd[1]: Created slice kubepods-burstable-pod3f718b5c0b3b97cc5628330308f14b5b.slice - libcontainer container kubepods-burstable-pod3f718b5c0b3b97cc5628330308f14b5b.slice.
Apr 16 23:55:50.593018 kubelet[2414]: I0416 23:55:50.592251 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.593018 kubelet[2414]: E0416 23:55:50.592651 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.593018 kubelet[2414]: E0416 23:55:50.592928 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.25.117:6443/api/v1/nodes\": dial tcp 77.42.25.117:6443: connect: connection refused" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.596196 systemd[1]: Created slice kubepods-burstable-pod9e5e31f668dafac4a840b3ef7a0c76df.slice - libcontainer container kubepods-burstable-pod9e5e31f668dafac4a840b3ef7a0c76df.slice.
Apr 16 23:55:50.599519 kubelet[2414]: E0416 23:55:50.599465 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625327 kubelet[2414]: I0416 23:55:50.625291 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe9dd38790a2af58db92582025b293e3-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" (UID: \"fe9dd38790a2af58db92582025b293e3\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625569 kubelet[2414]: I0416 23:55:50.625492 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe9dd38790a2af58db92582025b293e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" (UID: \"fe9dd38790a2af58db92582025b293e3\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625569 kubelet[2414]: I0416 23:55:50.625535 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625569 kubelet[2414]: I0416 23:55:50.625562 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625569 kubelet[2414]: I0416 23:55:50.625587 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e5e31f668dafac4a840b3ef7a0c76df-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-391826f4f6\" (UID: \"9e5e31f668dafac4a840b3ef7a0c76df\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625891 kubelet[2414]: I0416 23:55:50.625611 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe9dd38790a2af58db92582025b293e3-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" (UID: \"fe9dd38790a2af58db92582025b293e3\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625891 kubelet[2414]: I0416 23:55:50.625637 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625891 kubelet[2414]: I0416 23:55:50.625660 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.625891 kubelet[2414]: I0416 23:55:50.625694 2414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.627898 kubelet[2414]: E0416 23:55:50.627810 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-391826f4f6?timeout=10s\": dial tcp 77.42.25.117:6443: connect: connection refused" interval="400ms"
Apr 16 23:55:50.796542 kubelet[2414]: I0416 23:55:50.796119 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.796722 kubelet[2414]: E0416 23:55:50.796591 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.25.117:6443/api/v1/nodes\": dial tcp 77.42.25.117:6443: connect: connection refused" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:50.887374 containerd[1632]: time="2026-04-16T23:55:50.887121236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-391826f4f6,Uid:fe9dd38790a2af58db92582025b293e3,Namespace:kube-system,Attempt:0,}"
Apr 16 23:55:50.893908 containerd[1632]: time="2026-04-16T23:55:50.893847491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-391826f4f6,Uid:3f718b5c0b3b97cc5628330308f14b5b,Namespace:kube-system,Attempt:0,}"
Apr 16 23:55:50.901277 containerd[1632]: time="2026-04-16T23:55:50.901117763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-391826f4f6,Uid:9e5e31f668dafac4a840b3ef7a0c76df,Namespace:kube-system,Attempt:0,}"
Apr 16 23:55:50.920614 containerd[1632]: time="2026-04-16T23:55:50.920538670Z" level=info msg="connecting to shim 2febfeb0ae5bc4e885d0cce4ba50b7d0aec394cdaf594f68cd819a1632b700f2" address="unix:///run/containerd/s/a9ed04f53bff6eae25ebede0307adc5718715be8978aa17522c1bb74d9a44c7d" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:55:50.956996 containerd[1632]: time="2026-04-16T23:55:50.956880503Z" level=info msg="connecting to shim 7506d9ff366790caaf69684e57c924eb792baf72c7efd3713a919f89880fc479" address="unix:///run/containerd/s/171da9fa88d4461132ac1309df80f327112849dff8b917b6c4bde23956cf4ff3" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:55:50.961596 containerd[1632]: time="2026-04-16T23:55:50.961234528Z" level=info msg="connecting to shim 3b226d8bd14aa610a78aeb94902f384ca91fbddf4ea95acc9e5998a0702dc6d0" address="unix:///run/containerd/s/245b7ded919aee42a70cec23f7350a28dd277cf782e86b0c5b335c012f39194e" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:55:50.981379 systemd[1]: Started cri-containerd-2febfeb0ae5bc4e885d0cce4ba50b7d0aec394cdaf594f68cd819a1632b700f2.scope - libcontainer container 2febfeb0ae5bc4e885d0cce4ba50b7d0aec394cdaf594f68cd819a1632b700f2.
Apr 16 23:55:50.997324 systemd[1]: Started cri-containerd-3b226d8bd14aa610a78aeb94902f384ca91fbddf4ea95acc9e5998a0702dc6d0.scope - libcontainer container 3b226d8bd14aa610a78aeb94902f384ca91fbddf4ea95acc9e5998a0702dc6d0.
Apr 16 23:55:50.999228 systemd[1]: Started cri-containerd-7506d9ff366790caaf69684e57c924eb792baf72c7efd3713a919f89880fc479.scope - libcontainer container 7506d9ff366790caaf69684e57c924eb792baf72c7efd3713a919f89880fc479.
Apr 16 23:55:51.028737 kubelet[2414]: E0416 23:55:51.028697 2414 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://77.42.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-391826f4f6?timeout=10s\": dial tcp 77.42.25.117:6443: connect: connection refused" interval="800ms"
Apr 16 23:55:51.054549 containerd[1632]: time="2026-04-16T23:55:51.054502498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-391826f4f6,Uid:fe9dd38790a2af58db92582025b293e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2febfeb0ae5bc4e885d0cce4ba50b7d0aec394cdaf594f68cd819a1632b700f2\""
Apr 16 23:55:51.060960 containerd[1632]: time="2026-04-16T23:55:51.060880340Z" level=info msg="CreateContainer within sandbox \"2febfeb0ae5bc4e885d0cce4ba50b7d0aec394cdaf594f68cd819a1632b700f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 16 23:55:51.071763 containerd[1632]: time="2026-04-16T23:55:51.071732817Z" level=info msg="Container bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:55:51.073343 containerd[1632]: time="2026-04-16T23:55:51.073262283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-391826f4f6,Uid:3f718b5c0b3b97cc5628330308f14b5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7506d9ff366790caaf69684e57c924eb792baf72c7efd3713a919f89880fc479\""
Apr 16 23:55:51.077479 containerd[1632]: time="2026-04-16T23:55:51.077430038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-391826f4f6,Uid:9e5e31f668dafac4a840b3ef7a0c76df,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b226d8bd14aa610a78aeb94902f384ca91fbddf4ea95acc9e5998a0702dc6d0\""
Apr 16 23:55:51.079158 containerd[1632]: time="2026-04-16T23:55:51.078924461Z" level=info msg="CreateContainer within sandbox \"7506d9ff366790caaf69684e57c924eb792baf72c7efd3713a919f89880fc479\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 16 23:55:51.079158 containerd[1632]: time="2026-04-16T23:55:51.079080315Z" level=info msg="CreateContainer within sandbox \"2febfeb0ae5bc4e885d0cce4ba50b7d0aec394cdaf594f68cd819a1632b700f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e\""
Apr 16 23:55:51.084437 containerd[1632]: time="2026-04-16T23:55:51.084402803Z" level=info msg="Container 63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:55:51.097018 containerd[1632]: time="2026-04-16T23:55:51.096967000Z" level=info msg="CreateContainer within sandbox \"3b226d8bd14aa610a78aeb94902f384ca91fbddf4ea95acc9e5998a0702dc6d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 16 23:55:51.097501 containerd[1632]: time="2026-04-16T23:55:51.097407921Z" level=info msg="StartContainer for \"bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e\""
Apr 16 23:55:51.098412 containerd[1632]: time="2026-04-16T23:55:51.098383696Z" level=info msg="connecting to shim bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e" address="unix:///run/containerd/s/a9ed04f53bff6eae25ebede0307adc5718715be8978aa17522c1bb74d9a44c7d" protocol=ttrpc version=3
Apr 16 23:55:51.106320 containerd[1632]: time="2026-04-16T23:55:51.106275280Z" level=info msg="CreateContainer within sandbox \"7506d9ff366790caaf69684e57c924eb792baf72c7efd3713a919f89880fc479\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c\""
Apr 16 23:55:51.106895 containerd[1632]: time="2026-04-16T23:55:51.106872376Z" level=info msg="StartContainer for \"63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c\""
Apr 16 23:55:51.108616 containerd[1632]: time="2026-04-16T23:55:51.108476884Z" level=info msg="connecting to shim 63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c" address="unix:///run/containerd/s/171da9fa88d4461132ac1309df80f327112849dff8b917b6c4bde23956cf4ff3" protocol=ttrpc version=3
Apr 16 23:55:51.110224 containerd[1632]: time="2026-04-16T23:55:51.110202895Z" level=info msg="Container e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:55:51.117390 systemd[1]: Started cri-containerd-bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e.scope - libcontainer container bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e.
Apr 16 23:55:51.118550 containerd[1632]: time="2026-04-16T23:55:51.118517554Z" level=info msg="CreateContainer within sandbox \"3b226d8bd14aa610a78aeb94902f384ca91fbddf4ea95acc9e5998a0702dc6d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109\""
Apr 16 23:55:51.119121 containerd[1632]: time="2026-04-16T23:55:51.119101720Z" level=info msg="StartContainer for \"e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109\""
Apr 16 23:55:51.119995 containerd[1632]: time="2026-04-16T23:55:51.119968842Z" level=info msg="connecting to shim e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109" address="unix:///run/containerd/s/245b7ded919aee42a70cec23f7350a28dd277cf782e86b0c5b335c012f39194e" protocol=ttrpc version=3
Apr 16 23:55:51.134410 systemd[1]: Started cri-containerd-63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c.scope - libcontainer container 63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c.
Apr 16 23:55:51.145309 systemd[1]: Started cri-containerd-e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109.scope - libcontainer container e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109.
Apr 16 23:55:51.203485 kubelet[2414]: I0416 23:55:51.203224 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:51.203485 kubelet[2414]: E0416 23:55:51.203510 2414 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://77.42.25.117:6443/api/v1/nodes\": dial tcp 77.42.25.117:6443: connect: connection refused" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:51.209168 containerd[1632]: time="2026-04-16T23:55:51.208042476Z" level=info msg="StartContainer for \"bbb7b52614eed3b620162e10ed14d0b1c985c02fdab7f66f1f6d1490c79bac7e\" returns successfully"
Apr 16 23:55:51.228700 containerd[1632]: time="2026-04-16T23:55:51.228645918Z" level=info msg="StartContainer for \"e2737488a8dd98a9c5d481c48dde18c972bdbbc61c7cbddadaf0b944db7aa109\" returns successfully"
Apr 16 23:55:51.238256 containerd[1632]: time="2026-04-16T23:55:51.238120217Z" level=info msg="StartContainer for \"63f0b6863d2876ae8892f780060f564c50388482eaa197b9510ad2d8f8069f2c\" returns successfully"
Apr 16 23:55:51.461993 kubelet[2414]: E0416 23:55:51.461890 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:51.462681 kubelet[2414]: E0416 23:55:51.462618 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:51.464958 kubelet[2414]: E0416 23:55:51.464944 2414 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:51.975103 kubelet[2414]: E0416 23:55:51.975058 2414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-391826f4f6\" not found" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.007165 kubelet[2414]: I0416 23:55:52.006178 2414 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.118579 kubelet[2414]: I0416 23:55:52.118515 2414 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.118579 kubelet[2414]: E0416 23:55:52.118563 2414 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-391826f4f6\": node \"ci-4459-2-4-n-391826f4f6\" not found"
Apr 16 23:55:52.124104 kubelet[2414]: I0416 23:55:52.124080 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.129654 kubelet[2414]: E0416 23:55:52.129612 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.129654 kubelet[2414]: I0416 23:55:52.129640 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.130770 kubelet[2414]: E0416 23:55:52.130745 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-391826f4f6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.130770 kubelet[2414]: I0416 23:55:52.130761 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.131992 kubelet[2414]: E0416 23:55:52.131966 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.403038 kubelet[2414]: I0416 23:55:52.402790 2414 apiserver.go:52] "Watching apiserver"
Apr 16 23:55:52.424664 kubelet[2414]: I0416 23:55:52.424554 2414 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 23:55:52.465295 kubelet[2414]: I0416 23:55:52.465221 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.465505 kubelet[2414]: I0416 23:55:52.465221 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.468638 kubelet[2414]: E0416 23:55:52.468586 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-391826f4f6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:52.469073 kubelet[2414]: E0416 23:55:52.468641 2414 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:53.773317 kubelet[2414]: I0416 23:55:53.773256 2414 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6"
Apr 16 23:55:54.371430 systemd[1]: Reload requested from client PID 2690 ('systemctl') (unit session-7.scope)...
Apr 16 23:55:54.371460 systemd[1]: Reloading...
Apr 16 23:55:54.480175 zram_generator::config[2736]: No configuration found.
Apr 16 23:55:54.666331 systemd[1]: Reloading finished in 294 ms.
Apr 16 23:55:54.695427 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:55:54.716485 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 23:55:54.716750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:55:54.716805 systemd[1]: kubelet.service: Consumed 913ms CPU time, 130.7M memory peak.
Apr 16 23:55:54.718944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:55:54.883336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:55:54.895589 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:55:54.935508 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:55:54.935508 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:55:54.935508 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:55:54.935508 kubelet[2787]: I0416 23:55:54.935020 2787 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:55:54.944848 kubelet[2787]: I0416 23:55:54.944809 2787 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 23:55:54.945082 kubelet[2787]: I0416 23:55:54.944986 2787 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:55:54.945323 kubelet[2787]: I0416 23:55:54.945297 2787 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:55:54.946932 kubelet[2787]: I0416 23:55:54.946914 2787 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 23:55:54.949000 kubelet[2787]: I0416 23:55:54.948962 2787 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:55:54.952636 kubelet[2787]: I0416 23:55:54.952613 2787 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:55:54.957265 kubelet[2787]: I0416 23:55:54.957241 2787 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Apr 16 23:55:54.957607 kubelet[2787]: I0416 23:55:54.957478 2787 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:55:54.957659 kubelet[2787]: I0416 23:55:54.957502 2787 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-391826f4f6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:55:54.957659 kubelet[2787]: I0416 23:55:54.957636 2787 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
23:55:54.957659 kubelet[2787]: I0416 23:55:54.957644 2787 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 23:55:54.957782 kubelet[2787]: I0416 23:55:54.957687 2787 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:55:54.957867 kubelet[2787]: I0416 23:55:54.957849 2787 kubelet.go:480] "Attempting to sync node with API server" Apr 16 23:55:54.957867 kubelet[2787]: I0416 23:55:54.957866 2787 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:55:54.957919 kubelet[2787]: I0416 23:55:54.957889 2787 kubelet.go:386] "Adding apiserver pod source" Apr 16 23:55:54.959551 kubelet[2787]: I0416 23:55:54.959534 2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:55:54.967831 kubelet[2787]: I0416 23:55:54.967457 2787 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:55:54.967979 kubelet[2787]: I0416 23:55:54.967918 2787 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:55:54.970394 kubelet[2787]: I0416 23:55:54.970371 2787 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 23:55:54.970478 kubelet[2787]: I0416 23:55:54.970408 2787 server.go:1289] "Started kubelet" Apr 16 23:55:54.972728 kubelet[2787]: I0416 23:55:54.972709 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 23:55:54.973724 kubelet[2787]: I0416 23:55:54.973625 2787 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:55:54.973937 kubelet[2787]: I0416 23:55:54.973926 2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:55:54.974665 kubelet[2787]: I0416 23:55:54.974644 2787 server.go:317] "Adding debug handlers to kubelet server" Apr 16 
23:55:54.976723 kubelet[2787]: I0416 23:55:54.976711 2787 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 23:55:54.978305 kubelet[2787]: I0416 23:55:54.977487 2787 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 23:55:54.978305 kubelet[2787]: I0416 23:55:54.977629 2787 reconciler.go:26] "Reconciler: start to sync state" Apr 16 23:55:54.978305 kubelet[2787]: I0416 23:55:54.977681 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:55:54.978461 kubelet[2787]: I0416 23:55:54.978438 2787 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:55:54.983153 kubelet[2787]: I0416 23:55:54.983096 2787 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:55:54.983153 kubelet[2787]: I0416 23:55:54.983110 2787 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:55:54.983244 kubelet[2787]: I0416 23:55:54.983198 2787 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:55:54.983993 kubelet[2787]: I0416 23:55:54.983980 2787 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 23:55:54.993325 kubelet[2787]: I0416 23:55:54.993310 2787 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 23:55:54.993384 kubelet[2787]: I0416 23:55:54.993377 2787 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 23:55:54.993433 kubelet[2787]: I0416 23:55:54.993427 2787 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 23:55:54.993470 kubelet[2787]: I0416 23:55:54.993465 2787 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 23:55:54.993542 kubelet[2787]: E0416 23:55:54.993528 2787 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:55:55.039528 kubelet[2787]: I0416 23:55:55.039507 2787 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039688 2787 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039708 2787 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039831 2787 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039839 2787 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039854 2787 policy_none.go:49] "None policy: Start" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039862 2787 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039871 2787 state_mem.go:35] "Initializing new in-memory state store" Apr 16 23:55:55.039987 kubelet[2787]: I0416 23:55:55.039933 2787 state_mem.go:75] "Updated machine memory state" Apr 16 23:55:55.043823 kubelet[2787]: E0416 23:55:55.043802 2787 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:55:55.044070 kubelet[2787]: I0416 23:55:55.044056 2787 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 23:55:55.044401 kubelet[2787]: I0416 23:55:55.044372 2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:55:55.044702 kubelet[2787]: I0416 23:55:55.044689 2787 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 23:55:55.049187 kubelet[2787]: E0416 23:55:55.048332 2787 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:55:55.095022 kubelet[2787]: I0416 23:55:55.094980 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.095022 kubelet[2787]: I0416 23:55:55.095032 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.095382 kubelet[2787]: I0416 23:55:55.095323 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.103310 kubelet[2787]: E0416 23:55:55.103270 2787 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.156434 kubelet[2787]: I0416 23:55:55.156176 2787 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.165177 kubelet[2787]: I0416 23:55:55.165039 2787 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.165580 kubelet[2787]: I0416 23:55:55.165454 2787 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.179337 kubelet[2787]: I0416 23:55:55.179290 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" Apr 
16 23:55:55.179799 kubelet[2787]: I0416 23:55:55.179696 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e5e31f668dafac4a840b3ef7a0c76df-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-391826f4f6\" (UID: \"9e5e31f668dafac4a840b3ef7a0c76df\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.179972 kubelet[2787]: I0416 23:55:55.179947 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe9dd38790a2af58db92582025b293e3-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" (UID: \"fe9dd38790a2af58db92582025b293e3\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.180274 kubelet[2787]: I0416 23:55:55.180110 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe9dd38790a2af58db92582025b293e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" (UID: \"fe9dd38790a2af58db92582025b293e3\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.181173 kubelet[2787]: I0416 23:55:55.180506 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.181173 kubelet[2787]: I0416 23:55:55.180534 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-k8s-certs\") pod 
\"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.181173 kubelet[2787]: I0416 23:55:55.180563 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe9dd38790a2af58db92582025b293e3-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" (UID: \"fe9dd38790a2af58db92582025b293e3\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.181173 kubelet[2787]: I0416 23:55:55.180593 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.181173 kubelet[2787]: I0416 23:55:55.180626 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f718b5c0b3b97cc5628330308f14b5b-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-391826f4f6\" (UID: \"3f718b5c0b3b97cc5628330308f14b5b\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:55.961654 kubelet[2787]: I0416 23:55:55.961617 2787 apiserver.go:52] "Watching apiserver" Apr 16 23:55:55.978215 kubelet[2787]: I0416 23:55:55.978186 2787 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 23:55:56.021087 kubelet[2787]: I0416 23:55:56.021023 2787 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:56.022018 kubelet[2787]: I0416 23:55:56.021285 2787 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:56.030695 kubelet[2787]: E0416 23:55:56.030670 2787 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-391826f4f6\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:56.030966 kubelet[2787]: E0416 23:55:56.030949 2787 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-391826f4f6\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" Apr 16 23:55:56.045501 kubelet[2787]: I0416 23:55:56.045409 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-391826f4f6" podStartSLOduration=1.045336327 podStartE2EDuration="1.045336327s" podCreationTimestamp="2026-04-16 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:55:56.044735165 +0000 UTC m=+1.144427296" watchObservedRunningTime="2026-04-16 23:55:56.045336327 +0000 UTC m=+1.145028458" Apr 16 23:55:56.045827 kubelet[2787]: I0416 23:55:56.045585 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-391826f4f6" podStartSLOduration=1.045581084 podStartE2EDuration="1.045581084s" podCreationTimestamp="2026-04-16 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:55:56.039238976 +0000 UTC m=+1.138931127" watchObservedRunningTime="2026-04-16 23:55:56.045581084 +0000 UTC m=+1.145273215" Apr 16 23:56:00.603775 kubelet[2787]: I0416 23:56:00.603732 2787 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 23:56:00.604806 kubelet[2787]: I0416 23:56:00.604209 2787 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" 
newPodCIDR="192.168.0.0/24" Apr 16 23:56:00.604901 containerd[1632]: time="2026-04-16T23:56:00.604005810Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 23:56:01.408255 kubelet[2787]: I0416 23:56:01.408008 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-391826f4f6" podStartSLOduration=8.407905409 podStartE2EDuration="8.407905409s" podCreationTimestamp="2026-04-16 23:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:55:56.051546217 +0000 UTC m=+1.151238358" watchObservedRunningTime="2026-04-16 23:56:01.407905409 +0000 UTC m=+6.507640705" Apr 16 23:56:01.419566 kubelet[2787]: I0416 23:56:01.417745 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/29d887fe-90e2-48d8-bb40-353c2e1f5caf-kube-proxy\") pod \"kube-proxy-7bzb8\" (UID: \"29d887fe-90e2-48d8-bb40-353c2e1f5caf\") " pod="kube-system/kube-proxy-7bzb8" Apr 16 23:56:01.419791 kubelet[2787]: I0416 23:56:01.419754 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29d887fe-90e2-48d8-bb40-353c2e1f5caf-xtables-lock\") pod \"kube-proxy-7bzb8\" (UID: \"29d887fe-90e2-48d8-bb40-353c2e1f5caf\") " pod="kube-system/kube-proxy-7bzb8" Apr 16 23:56:01.420335 kubelet[2787]: I0416 23:56:01.420286 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92cz\" (UniqueName: \"kubernetes.io/projected/29d887fe-90e2-48d8-bb40-353c2e1f5caf-kube-api-access-h92cz\") pod \"kube-proxy-7bzb8\" (UID: \"29d887fe-90e2-48d8-bb40-353c2e1f5caf\") " pod="kube-system/kube-proxy-7bzb8" Apr 16 23:56:01.421348 kubelet[2787]: I0416 23:56:01.421316 
2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29d887fe-90e2-48d8-bb40-353c2e1f5caf-lib-modules\") pod \"kube-proxy-7bzb8\" (UID: \"29d887fe-90e2-48d8-bb40-353c2e1f5caf\") " pod="kube-system/kube-proxy-7bzb8" Apr 16 23:56:01.429319 systemd[1]: Created slice kubepods-besteffort-pod29d887fe_90e2_48d8_bb40_353c2e1f5caf.slice - libcontainer container kubepods-besteffort-pod29d887fe_90e2_48d8_bb40_353c2e1f5caf.slice. Apr 16 23:56:01.602742 systemd[1]: Created slice kubepods-besteffort-pod05311d18_e1b2_489e_a3de_36b970703a8e.slice - libcontainer container kubepods-besteffort-pod05311d18_e1b2_489e_a3de_36b970703a8e.slice. Apr 16 23:56:01.623576 kubelet[2787]: I0416 23:56:01.623550 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfr4\" (UniqueName: \"kubernetes.io/projected/05311d18-e1b2-489e-a3de-36b970703a8e-kube-api-access-bzfr4\") pod \"tigera-operator-6bf85f8dd-b9wr8\" (UID: \"05311d18-e1b2-489e-a3de-36b970703a8e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-b9wr8" Apr 16 23:56:01.623984 kubelet[2787]: I0416 23:56:01.623957 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05311d18-e1b2-489e-a3de-36b970703a8e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-b9wr8\" (UID: \"05311d18-e1b2-489e-a3de-36b970703a8e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-b9wr8" Apr 16 23:56:01.742080 containerd[1632]: time="2026-04-16T23:56:01.741880240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7bzb8,Uid:29d887fe-90e2-48d8-bb40-353c2e1f5caf,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:01.767634 containerd[1632]: time="2026-04-16T23:56:01.767577377Z" level=info msg="connecting to shim 7f79cd55ebfc65f4a706443704c28920066ef25d083a107e420c886c6e595680" 
address="unix:///run/containerd/s/f3055aa56a36fc413aa18bd2e76ebf1f4bd52985d81895596875c963810e3c08" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:01.791291 systemd[1]: Started cri-containerd-7f79cd55ebfc65f4a706443704c28920066ef25d083a107e420c886c6e595680.scope - libcontainer container 7f79cd55ebfc65f4a706443704c28920066ef25d083a107e420c886c6e595680. Apr 16 23:56:01.814925 containerd[1632]: time="2026-04-16T23:56:01.814895043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7bzb8,Uid:29d887fe-90e2-48d8-bb40-353c2e1f5caf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f79cd55ebfc65f4a706443704c28920066ef25d083a107e420c886c6e595680\"" Apr 16 23:56:01.820627 containerd[1632]: time="2026-04-16T23:56:01.820605824Z" level=info msg="CreateContainer within sandbox \"7f79cd55ebfc65f4a706443704c28920066ef25d083a107e420c886c6e595680\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 23:56:01.834767 containerd[1632]: time="2026-04-16T23:56:01.834231293Z" level=info msg="Container 645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:01.841254 containerd[1632]: time="2026-04-16T23:56:01.841230027Z" level=info msg="CreateContainer within sandbox \"7f79cd55ebfc65f4a706443704c28920066ef25d083a107e420c886c6e595680\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2\"" Apr 16 23:56:01.841878 containerd[1632]: time="2026-04-16T23:56:01.841866142Z" level=info msg="StartContainer for \"645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2\"" Apr 16 23:56:01.843101 containerd[1632]: time="2026-04-16T23:56:01.843086013Z" level=info msg="connecting to shim 645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2" address="unix:///run/containerd/s/f3055aa56a36fc413aa18bd2e76ebf1f4bd52985d81895596875c963810e3c08" protocol=ttrpc version=3 Apr 16 
23:56:01.861255 systemd[1]: Started cri-containerd-645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2.scope - libcontainer container 645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2. Apr 16 23:56:01.906403 containerd[1632]: time="2026-04-16T23:56:01.906361479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-b9wr8,Uid:05311d18-e1b2-489e-a3de-36b970703a8e,Namespace:tigera-operator,Attempt:0,}" Apr 16 23:56:01.928176 containerd[1632]: time="2026-04-16T23:56:01.928050520Z" level=info msg="connecting to shim dd2a7c71c1bd9d200aacdd12747d0d710e1d4de3d790f20bb7de188ad2377475" address="unix:///run/containerd/s/73706e664c6e2aea7ae56d57ebee33652482302996501e434f09b08b5c697c95" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:01.935312 containerd[1632]: time="2026-04-16T23:56:01.935268283Z" level=info msg="StartContainer for \"645f7953a4160fcccda47f5fd1e88c69fa94c15d65815f99e873850291f6a2c2\" returns successfully" Apr 16 23:56:01.957252 systemd[1]: Started cri-containerd-dd2a7c71c1bd9d200aacdd12747d0d710e1d4de3d790f20bb7de188ad2377475.scope - libcontainer container dd2a7c71c1bd9d200aacdd12747d0d710e1d4de3d790f20bb7de188ad2377475. 
Apr 16 23:56:02.002681 containerd[1632]: time="2026-04-16T23:56:02.002581389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-b9wr8,Uid:05311d18-e1b2-489e-a3de-36b970703a8e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd2a7c71c1bd9d200aacdd12747d0d710e1d4de3d790f20bb7de188ad2377475\"" Apr 16 23:56:02.005086 containerd[1632]: time="2026-04-16T23:56:02.005052437Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 23:56:03.329340 kubelet[2787]: I0416 23:56:03.329256 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7bzb8" podStartSLOduration=2.329242997 podStartE2EDuration="2.329242997s" podCreationTimestamp="2026-04-16 23:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:02.044601434 +0000 UTC m=+7.144293565" watchObservedRunningTime="2026-04-16 23:56:03.329242997 +0000 UTC m=+8.428935128" Apr 16 23:56:03.679538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511908653.mount: Deactivated successfully. 
Apr 16 23:56:04.410227 containerd[1632]: time="2026-04-16T23:56:04.410174404Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:04.411388 containerd[1632]: time="2026-04-16T23:56:04.411173694Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 23:56:04.412190 containerd[1632]: time="2026-04-16T23:56:04.412166795Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:04.413950 containerd[1632]: time="2026-04-16T23:56:04.413924953Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:04.414486 containerd[1632]: time="2026-04-16T23:56:04.414452004Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.409367479s" Apr 16 23:56:04.414546 containerd[1632]: time="2026-04-16T23:56:04.414535069Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 23:56:04.418803 containerd[1632]: time="2026-04-16T23:56:04.418759669Z" level=info msg="CreateContainer within sandbox \"dd2a7c71c1bd9d200aacdd12747d0d710e1d4de3d790f20bb7de188ad2377475\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 23:56:04.425798 containerd[1632]: time="2026-04-16T23:56:04.425708298Z" level=info msg="Container 
8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:04.433561 containerd[1632]: time="2026-04-16T23:56:04.433526342Z" level=info msg="CreateContainer within sandbox \"dd2a7c71c1bd9d200aacdd12747d0d710e1d4de3d790f20bb7de188ad2377475\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d\"" Apr 16 23:56:04.435202 containerd[1632]: time="2026-04-16T23:56:04.435168687Z" level=info msg="StartContainer for \"8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d\"" Apr 16 23:56:04.435906 containerd[1632]: time="2026-04-16T23:56:04.435881577Z" level=info msg="connecting to shim 8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d" address="unix:///run/containerd/s/73706e664c6e2aea7ae56d57ebee33652482302996501e434f09b08b5c697c95" protocol=ttrpc version=3 Apr 16 23:56:04.458258 systemd[1]: Started cri-containerd-8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d.scope - libcontainer container 8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d. 
Apr 16 23:56:04.486810 containerd[1632]: time="2026-04-16T23:56:04.486771314Z" level=info msg="StartContainer for \"8f74173aa2a9ade66c3a1f8cd0ff88874215e73f2da10f03ca12a3679975045d\" returns successfully" Apr 16 23:56:05.064247 kubelet[2787]: I0416 23:56:05.063805 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-b9wr8" podStartSLOduration=1.652618646 podStartE2EDuration="4.063626968s" podCreationTimestamp="2026-04-16 23:56:01 +0000 UTC" firstStartedPulling="2026-04-16 23:56:02.004370213 +0000 UTC m=+7.104062344" lastFinishedPulling="2026-04-16 23:56:04.415378535 +0000 UTC m=+9.515070666" observedRunningTime="2026-04-16 23:56:05.05936427 +0000 UTC m=+10.159056441" watchObservedRunningTime="2026-04-16 23:56:05.063626968 +0000 UTC m=+10.163319149" Apr 16 23:56:07.662519 systemd-timesyncd[1519]: Contacted time server 131.188.3.221:123 (2.flatcar.pool.ntp.org). Apr 16 23:56:07.662913 systemd-timesyncd[1519]: Initial clock synchronization to Thu 2026-04-16 23:56:07.471161 UTC. Apr 16 23:56:09.539432 sudo[1851]: pam_unix(sudo:session): session closed for user root Apr 16 23:56:09.568739 sshd[1850]: Connection closed by 4.175.71.9 port 37298 Apr 16 23:56:09.571268 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Apr 16 23:56:09.577089 systemd-logind[1616]: Session 7 logged out. Waiting for processes to exit. Apr 16 23:56:09.579603 systemd[1]: sshd@6-77.42.25.117:22-4.175.71.9:37298.service: Deactivated successfully. Apr 16 23:56:09.585590 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 23:56:09.585892 systemd[1]: session-7.scope: Consumed 4.221s CPU time, 228.4M memory peak. Apr 16 23:56:09.588365 systemd-logind[1616]: Removed session 7. Apr 16 23:56:11.214180 systemd[1]: Created slice kubepods-besteffort-pod456639f0_fc2d_45e0_a7fc_6d9b9740dd4f.slice - libcontainer container kubepods-besteffort-pod456639f0_fc2d_45e0_a7fc_6d9b9740dd4f.slice. 
Apr 16 23:56:11.266222 systemd[1]: Created slice kubepods-besteffort-pod7a690be3_8072_45de_8e42_9105a90dc10d.slice - libcontainer container kubepods-besteffort-pod7a690be3_8072_45de_8e42_9105a90dc10d.slice. Apr 16 23:56:11.289467 kubelet[2787]: I0416 23:56:11.289428 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-flexvol-driver-host\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290382 kubelet[2787]: I0416 23:56:11.289856 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-var-run-calico\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290382 kubelet[2787]: I0416 23:56:11.289878 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456639f0-fc2d-45e0-a7fc-6d9b9740dd4f-tigera-ca-bundle\") pod \"calico-typha-56678c49fb-jxcb5\" (UID: \"456639f0-fc2d-45e0-a7fc-6d9b9740dd4f\") " pod="calico-system/calico-typha-56678c49fb-jxcb5" Apr 16 23:56:11.290382 kubelet[2787]: I0416 23:56:11.289892 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9bd6\" (UniqueName: \"kubernetes.io/projected/456639f0-fc2d-45e0-a7fc-6d9b9740dd4f-kube-api-access-q9bd6\") pod \"calico-typha-56678c49fb-jxcb5\" (UID: \"456639f0-fc2d-45e0-a7fc-6d9b9740dd4f\") " pod="calico-system/calico-typha-56678c49fb-jxcb5" Apr 16 23:56:11.290382 kubelet[2787]: I0416 23:56:11.290152 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-policysync\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290382 kubelet[2787]: I0416 23:56:11.290168 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-var-lib-calico\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290511 kubelet[2787]: I0416 23:56:11.290179 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-bpffs\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290511 kubelet[2787]: I0416 23:56:11.290191 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2db\" (UniqueName: \"kubernetes.io/projected/7a690be3-8072-45de-8e42-9105a90dc10d-kube-api-access-pj2db\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290511 kubelet[2787]: I0416 23:56:11.290202 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a690be3-8072-45de-8e42-9105a90dc10d-node-certs\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290511 kubelet[2787]: I0416 23:56:11.290232 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: 
\"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-nodeproc\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290511 kubelet[2787]: I0416 23:56:11.290245 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-sys-fs\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290594 kubelet[2787]: I0416 23:56:11.290257 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/456639f0-fc2d-45e0-a7fc-6d9b9740dd4f-typha-certs\") pod \"calico-typha-56678c49fb-jxcb5\" (UID: \"456639f0-fc2d-45e0-a7fc-6d9b9740dd4f\") " pod="calico-system/calico-typha-56678c49fb-jxcb5" Apr 16 23:56:11.290594 kubelet[2787]: I0416 23:56:11.290267 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-cni-bin-dir\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290594 kubelet[2787]: I0416 23:56:11.290278 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-cni-log-dir\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290594 kubelet[2787]: I0416 23:56:11.290290 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7a690be3-8072-45de-8e42-9105a90dc10d-tigera-ca-bundle\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290594 kubelet[2787]: I0416 23:56:11.290314 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-xtables-lock\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290676 kubelet[2787]: I0416 23:56:11.290324 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-lib-modules\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.290676 kubelet[2787]: I0416 23:56:11.290334 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a690be3-8072-45de-8e42-9105a90dc10d-cni-net-dir\") pod \"calico-node-zl4h8\" (UID: \"7a690be3-8072-45de-8e42-9105a90dc10d\") " pod="calico-system/calico-node-zl4h8" Apr 16 23:56:11.369233 kubelet[2787]: E0416 23:56:11.368844 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d" Apr 16 23:56:11.391146 kubelet[2787]: I0416 23:56:11.391056 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/03d6db92-84bd-442c-8aee-ce624ac6a17d-registration-dir\") pod \"csi-node-driver-nbzc2\" (UID: \"03d6db92-84bd-442c-8aee-ce624ac6a17d\") " pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:11.392045 kubelet[2787]: I0416 23:56:11.391777 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03d6db92-84bd-442c-8aee-ce624ac6a17d-socket-dir\") pod \"csi-node-driver-nbzc2\" (UID: \"03d6db92-84bd-442c-8aee-ce624ac6a17d\") " pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:11.392045 kubelet[2787]: I0416 23:56:11.391794 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sps\" (UniqueName: \"kubernetes.io/projected/03d6db92-84bd-442c-8aee-ce624ac6a17d-kube-api-access-49sps\") pod \"csi-node-driver-nbzc2\" (UID: \"03d6db92-84bd-442c-8aee-ce624ac6a17d\") " pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:11.392045 kubelet[2787]: I0416 23:56:11.391828 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/03d6db92-84bd-442c-8aee-ce624ac6a17d-varrun\") pod \"csi-node-driver-nbzc2\" (UID: \"03d6db92-84bd-442c-8aee-ce624ac6a17d\") " pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:11.392045 kubelet[2787]: I0416 23:56:11.391881 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03d6db92-84bd-442c-8aee-ce624ac6a17d-kubelet-dir\") pod \"csi-node-driver-nbzc2\" (UID: \"03d6db92-84bd-442c-8aee-ce624ac6a17d\") " pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:11.408495 kubelet[2787]: E0416 23:56:11.408471 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.408683 kubelet[2787]: W0416 
23:56:11.408672 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.408987 kubelet[2787]: E0416 23:56:11.408912 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.409371 kubelet[2787]: E0416 23:56:11.409353 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.422881 kubelet[2787]: W0416 23:56:11.419689 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.424396 kubelet[2787]: E0416 23:56:11.424275 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.427063 kubelet[2787]: E0416 23:56:11.426884 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.427935 kubelet[2787]: W0416 23:56:11.427863 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.428549 kubelet[2787]: E0416 23:56:11.428480 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.430050 kubelet[2787]: E0416 23:56:11.430025 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.430050 kubelet[2787]: W0416 23:56:11.430041 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.430111 kubelet[2787]: E0416 23:56:11.430053 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.430238 kubelet[2787]: E0416 23:56:11.430224 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.430238 kubelet[2787]: W0416 23:56:11.430234 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.430275 kubelet[2787]: E0416 23:56:11.430240 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430370 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431138 kubelet[2787]: W0416 23:56:11.430376 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430382 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430503 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431138 kubelet[2787]: W0416 23:56:11.430508 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430513 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430668 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431138 kubelet[2787]: W0416 23:56:11.430673 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430682 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.431138 kubelet[2787]: E0416 23:56:11.430803 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431317 kubelet[2787]: W0416 23:56:11.430810 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431317 kubelet[2787]: E0416 23:56:11.430816 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.431317 kubelet[2787]: E0416 23:56:11.430937 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431317 kubelet[2787]: W0416 23:56:11.430942 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431317 kubelet[2787]: E0416 23:56:11.430947 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.431317 kubelet[2787]: E0416 23:56:11.431205 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431317 kubelet[2787]: W0416 23:56:11.431211 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431317 kubelet[2787]: E0416 23:56:11.431220 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.431433 kubelet[2787]: E0416 23:56:11.431371 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.431433 kubelet[2787]: W0416 23:56:11.431376 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.431433 kubelet[2787]: E0416 23:56:11.431381 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.433194 kubelet[2787]: E0416 23:56:11.431516 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.433194 kubelet[2787]: W0416 23:56:11.431523 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.433194 kubelet[2787]: E0416 23:56:11.431528 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.433194 kubelet[2787]: E0416 23:56:11.432903 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.433194 kubelet[2787]: W0416 23:56:11.432912 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.433194 kubelet[2787]: E0416 23:56:11.432920 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.493868 kubelet[2787]: E0416 23:56:11.493694 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.493868 kubelet[2787]: W0416 23:56:11.493711 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.493868 kubelet[2787]: E0416 23:56:11.493723 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.493971 kubelet[2787]: E0416 23:56:11.493881 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.493971 kubelet[2787]: W0416 23:56:11.493886 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.493971 kubelet[2787]: E0416 23:56:11.493892 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.494252 kubelet[2787]: E0416 23:56:11.494063 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.494252 kubelet[2787]: W0416 23:56:11.494071 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.494252 kubelet[2787]: E0416 23:56:11.494077 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.494252 kubelet[2787]: E0416 23:56:11.494250 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.494664 kubelet[2787]: W0416 23:56:11.494256 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.494664 kubelet[2787]: E0416 23:56:11.494261 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.494664 kubelet[2787]: E0416 23:56:11.494428 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.494664 kubelet[2787]: W0416 23:56:11.494433 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.494664 kubelet[2787]: E0416 23:56:11.494439 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.494664 kubelet[2787]: E0416 23:56:11.494613 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.494664 kubelet[2787]: W0416 23:56:11.494618 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.494664 kubelet[2787]: E0416 23:56:11.494624 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.494949 kubelet[2787]: E0416 23:56:11.494763 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.494949 kubelet[2787]: W0416 23:56:11.494770 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.494949 kubelet[2787]: E0416 23:56:11.494777 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.494949 kubelet[2787]: E0416 23:56:11.494941 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.494949 kubelet[2787]: W0416 23:56:11.494948 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.495352 kubelet[2787]: E0416 23:56:11.494954 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.495352 kubelet[2787]: E0416 23:56:11.495119 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.495352 kubelet[2787]: W0416 23:56:11.495136 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.495352 kubelet[2787]: E0416 23:56:11.495141 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.495352 kubelet[2787]: E0416 23:56:11.495275 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.495352 kubelet[2787]: W0416 23:56:11.495279 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.495352 kubelet[2787]: E0416 23:56:11.495285 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.495686 kubelet[2787]: E0416 23:56:11.495396 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.495686 kubelet[2787]: W0416 23:56:11.495401 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.495686 kubelet[2787]: E0416 23:56:11.495405 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.495686 kubelet[2787]: E0416 23:56:11.495519 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.495686 kubelet[2787]: W0416 23:56:11.495524 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.495686 kubelet[2787]: E0416 23:56:11.495528 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.495896 kubelet[2787]: E0416 23:56:11.495788 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.495896 kubelet[2787]: W0416 23:56:11.495794 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.495896 kubelet[2787]: E0416 23:56:11.495800 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.496010 kubelet[2787]: E0416 23:56:11.495938 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496010 kubelet[2787]: W0416 23:56:11.495943 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496010 kubelet[2787]: E0416 23:56:11.495948 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.496184 kubelet[2787]: E0416 23:56:11.496063 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496184 kubelet[2787]: W0416 23:56:11.496068 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496184 kubelet[2787]: E0416 23:56:11.496072 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.496241 kubelet[2787]: E0416 23:56:11.496235 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496241 kubelet[2787]: W0416 23:56:11.496240 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496367 kubelet[2787]: E0416 23:56:11.496245 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.496412 kubelet[2787]: E0416 23:56:11.496383 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496412 kubelet[2787]: W0416 23:56:11.496388 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496412 kubelet[2787]: E0416 23:56:11.496393 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.496575 kubelet[2787]: E0416 23:56:11.496557 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496595 kubelet[2787]: W0416 23:56:11.496575 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496595 kubelet[2787]: E0416 23:56:11.496581 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.496753 kubelet[2787]: E0416 23:56:11.496743 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496753 kubelet[2787]: W0416 23:56:11.496751 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496788 kubelet[2787]: E0416 23:56:11.496757 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.496927 kubelet[2787]: E0416 23:56:11.496906 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.496927 kubelet[2787]: W0416 23:56:11.496912 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.496927 kubelet[2787]: E0416 23:56:11.496918 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.497077 kubelet[2787]: E0416 23:56:11.497067 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.497077 kubelet[2787]: W0416 23:56:11.497076 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.497185 kubelet[2787]: E0416 23:56:11.497082 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.497276 kubelet[2787]: E0416 23:56:11.497243 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.497276 kubelet[2787]: W0416 23:56:11.497251 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.497276 kubelet[2787]: E0416 23:56:11.497256 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.498150 kubelet[2787]: E0416 23:56:11.497596 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.498150 kubelet[2787]: W0416 23:56:11.497604 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.498150 kubelet[2787]: E0416 23:56:11.497611 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.498150 kubelet[2787]: E0416 23:56:11.497748 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.498150 kubelet[2787]: W0416 23:56:11.497753 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.498150 kubelet[2787]: E0416 23:56:11.497758 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.498150 kubelet[2787]: E0416 23:56:11.497903 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.498150 kubelet[2787]: W0416 23:56:11.497907 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.498150 kubelet[2787]: E0416 23:56:11.497913 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:11.505645 kubelet[2787]: E0416 23:56:11.505631 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:11.505645 kubelet[2787]: W0416 23:56:11.505642 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:11.505710 kubelet[2787]: E0416 23:56:11.505650 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:11.518109 containerd[1632]: time="2026-04-16T23:56:11.518070753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56678c49fb-jxcb5,Uid:456639f0-fc2d-45e0-a7fc-6d9b9740dd4f,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:11.532599 containerd[1632]: time="2026-04-16T23:56:11.532515143Z" level=info msg="connecting to shim 48fca80e97a5f433f4d84661a4e2113b4e4c604e9f29d78e65ed2800f79c4a20" address="unix:///run/containerd/s/43873828bb0731e300f9541db366f8ec24c96a92ae46c53f84beed8dd4830066" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:11.555248 systemd[1]: Started cri-containerd-48fca80e97a5f433f4d84661a4e2113b4e4c604e9f29d78e65ed2800f79c4a20.scope - libcontainer container 48fca80e97a5f433f4d84661a4e2113b4e4c604e9f29d78e65ed2800f79c4a20. 
Apr 16 23:56:11.570738 containerd[1632]: time="2026-04-16T23:56:11.570702056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zl4h8,Uid:7a690be3-8072-45de-8e42-9105a90dc10d,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:11.591655 containerd[1632]: time="2026-04-16T23:56:11.591592468Z" level=info msg="connecting to shim c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb" address="unix:///run/containerd/s/063a8ceec32c8fa359257b6c9d6e41009e9552517ce6ed2e9249a14de154a908" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:11.601997 containerd[1632]: time="2026-04-16T23:56:11.601717487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56678c49fb-jxcb5,Uid:456639f0-fc2d-45e0-a7fc-6d9b9740dd4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"48fca80e97a5f433f4d84661a4e2113b4e4c604e9f29d78e65ed2800f79c4a20\"" Apr 16 23:56:11.603855 containerd[1632]: time="2026-04-16T23:56:11.603349144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 23:56:11.616244 systemd[1]: Started cri-containerd-c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb.scope - libcontainer container c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb. 
Apr 16 23:56:11.642740 containerd[1632]: time="2026-04-16T23:56:11.642676739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zl4h8,Uid:7a690be3-8072-45de-8e42-9105a90dc10d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\"" Apr 16 23:56:12.994648 kubelet[2787]: E0416 23:56:12.994569 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d" Apr 16 23:56:13.369285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918940813.mount: Deactivated successfully. Apr 16 23:56:14.983336 containerd[1632]: time="2026-04-16T23:56:14.983294196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:14.984379 containerd[1632]: time="2026-04-16T23:56:14.984243450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 16 23:56:14.985307 containerd[1632]: time="2026-04-16T23:56:14.985291296Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:14.986964 containerd[1632]: time="2026-04-16T23:56:14.986947998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:14.987384 containerd[1632]: time="2026-04-16T23:56:14.987362516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.383992591s" Apr 16 23:56:14.987412 containerd[1632]: time="2026-04-16T23:56:14.987386423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 23:56:14.989836 containerd[1632]: time="2026-04-16T23:56:14.989812485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 23:56:14.999393 kubelet[2787]: E0416 23:56:14.999362 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d" Apr 16 23:56:15.002574 containerd[1632]: time="2026-04-16T23:56:15.002547135Z" level=info msg="CreateContainer within sandbox \"48fca80e97a5f433f4d84661a4e2113b4e4c604e9f29d78e65ed2800f79c4a20\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 23:56:15.010276 containerd[1632]: time="2026-04-16T23:56:15.009231087Z" level=info msg="Container deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:15.026515 containerd[1632]: time="2026-04-16T23:56:15.026486074Z" level=info msg="CreateContainer within sandbox \"48fca80e97a5f433f4d84661a4e2113b4e4c604e9f29d78e65ed2800f79c4a20\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c\"" Apr 16 23:56:15.026859 containerd[1632]: time="2026-04-16T23:56:15.026843948Z" level=info msg="StartContainer for 
\"deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c\"" Apr 16 23:56:15.028009 containerd[1632]: time="2026-04-16T23:56:15.027747928Z" level=info msg="connecting to shim deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c" address="unix:///run/containerd/s/43873828bb0731e300f9541db366f8ec24c96a92ae46c53f84beed8dd4830066" protocol=ttrpc version=3 Apr 16 23:56:15.048258 systemd[1]: Started cri-containerd-deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c.scope - libcontainer container deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c. Apr 16 23:56:15.092898 containerd[1632]: time="2026-04-16T23:56:15.092841397Z" level=info msg="StartContainer for \"deabd8bf798570a88ee79284244d49ce514a1c3eeccab295e965d18b6a0f072c\" returns successfully" Apr 16 23:56:15.647290 update_engine[1618]: I20260416 23:56:15.647186 1618 update_attempter.cc:509] Updating boot flags... Apr 16 23:56:16.079087 kubelet[2787]: I0416 23:56:16.079024 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56678c49fb-jxcb5" podStartSLOduration=1.694109716 podStartE2EDuration="5.079005996s" podCreationTimestamp="2026-04-16 23:56:11 +0000 UTC" firstStartedPulling="2026-04-16 23:56:11.603154541 +0000 UTC m=+16.702846682" lastFinishedPulling="2026-04-16 23:56:14.988050822 +0000 UTC m=+20.087742962" observedRunningTime="2026-04-16 23:56:16.078275833 +0000 UTC m=+21.177968004" watchObservedRunningTime="2026-04-16 23:56:16.079005996 +0000 UTC m=+21.178698176" Apr 16 23:56:16.108551 kubelet[2787]: E0416 23:56:16.108522 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.108551 kubelet[2787]: W0416 23:56:16.108536 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 
23:56:16.108551 kubelet[2787]: E0416 23:56:16.108551 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.108803 kubelet[2787]: E0416 23:56:16.108696 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.108803 kubelet[2787]: W0416 23:56:16.108702 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.108803 kubelet[2787]: E0416 23:56:16.108708 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.108937 kubelet[2787]: E0416 23:56:16.108858 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.108937 kubelet[2787]: W0416 23:56:16.108863 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.108937 kubelet[2787]: E0416 23:56:16.108896 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.109104 kubelet[2787]: E0416 23:56:16.109081 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.109104 kubelet[2787]: W0416 23:56:16.109089 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.109104 kubelet[2787]: E0416 23:56:16.109094 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.109364 kubelet[2787]: E0416 23:56:16.109260 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.109364 kubelet[2787]: W0416 23:56:16.109267 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.109364 kubelet[2787]: E0416 23:56:16.109273 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.109525 kubelet[2787]: E0416 23:56:16.109421 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.109525 kubelet[2787]: W0416 23:56:16.109425 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.109525 kubelet[2787]: E0416 23:56:16.109431 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.109690 kubelet[2787]: E0416 23:56:16.109587 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.109690 kubelet[2787]: W0416 23:56:16.109592 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.109690 kubelet[2787]: E0416 23:56:16.109598 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.109847 kubelet[2787]: E0416 23:56:16.109721 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.109847 kubelet[2787]: W0416 23:56:16.109727 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.109847 kubelet[2787]: E0416 23:56:16.109733 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.109998 kubelet[2787]: E0416 23:56:16.109892 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.109998 kubelet[2787]: W0416 23:56:16.109898 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.109998 kubelet[2787]: E0416 23:56:16.109903 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.110472 kubelet[2787]: E0416 23:56:16.110053 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.110472 kubelet[2787]: W0416 23:56:16.110058 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.110472 kubelet[2787]: E0416 23:56:16.110063 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.110472 kubelet[2787]: E0416 23:56:16.110222 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.110472 kubelet[2787]: W0416 23:56:16.110227 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.110472 kubelet[2787]: E0416 23:56:16.110233 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.110472 kubelet[2787]: E0416 23:56:16.110371 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.110472 kubelet[2787]: W0416 23:56:16.110376 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.110472 kubelet[2787]: E0416 23:56:16.110381 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.110913 kubelet[2787]: E0416 23:56:16.110514 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.110913 kubelet[2787]: W0416 23:56:16.110518 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.110913 kubelet[2787]: E0416 23:56:16.110524 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.110913 kubelet[2787]: E0416 23:56:16.110687 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.110913 kubelet[2787]: W0416 23:56:16.110692 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.110913 kubelet[2787]: E0416 23:56:16.110698 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.110913 kubelet[2787]: E0416 23:56:16.110879 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.110913 kubelet[2787]: W0416 23:56:16.110899 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.110913 kubelet[2787]: E0416 23:56:16.110905 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.129427 kubelet[2787]: E0416 23:56:16.129372 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.129427 kubelet[2787]: W0416 23:56:16.129412 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.129427 kubelet[2787]: E0416 23:56:16.129423 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.129711 kubelet[2787]: E0416 23:56:16.129694 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.129711 kubelet[2787]: W0416 23:56:16.129701 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.129711 kubelet[2787]: E0416 23:56:16.129708 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.129930 kubelet[2787]: E0416 23:56:16.129899 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.129930 kubelet[2787]: W0416 23:56:16.129922 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.129930 kubelet[2787]: E0416 23:56:16.129928 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.130178 kubelet[2787]: E0416 23:56:16.130153 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.130178 kubelet[2787]: W0416 23:56:16.130161 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.130178 kubelet[2787]: E0416 23:56:16.130168 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.130419 kubelet[2787]: E0416 23:56:16.130340 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.130419 kubelet[2787]: W0416 23:56:16.130346 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.130419 kubelet[2787]: E0416 23:56:16.130353 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.130613 kubelet[2787]: E0416 23:56:16.130495 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.130613 kubelet[2787]: W0416 23:56:16.130500 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.130613 kubelet[2787]: E0416 23:56:16.130506 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.130682 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131191 kubelet[2787]: W0416 23:56:16.130688 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.130694 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.130839 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131191 kubelet[2787]: W0416 23:56:16.130844 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.130849 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.130970 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131191 kubelet[2787]: W0416 23:56:16.130974 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.130979 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.131191 kubelet[2787]: E0416 23:56:16.131095 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131594 kubelet[2787]: W0416 23:56:16.131100 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131594 kubelet[2787]: E0416 23:56:16.131105 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.131594 kubelet[2787]: E0416 23:56:16.131250 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131594 kubelet[2787]: W0416 23:56:16.131255 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131594 kubelet[2787]: E0416 23:56:16.131260 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.131594 kubelet[2787]: E0416 23:56:16.131421 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131594 kubelet[2787]: W0416 23:56:16.131426 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131594 kubelet[2787]: E0416 23:56:16.131431 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.131883 kubelet[2787]: E0416 23:56:16.131801 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.131883 kubelet[2787]: W0416 23:56:16.131808 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.131883 kubelet[2787]: E0416 23:56:16.131814 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.132080 kubelet[2787]: E0416 23:56:16.131952 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.132080 kubelet[2787]: W0416 23:56:16.131957 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.132080 kubelet[2787]: E0416 23:56:16.131963 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:56:16.132259 kubelet[2787]: E0416 23:56:16.132086 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.132259 kubelet[2787]: W0416 23:56:16.132091 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.132259 kubelet[2787]: E0416 23:56:16.132121 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:56:16.132590 kubelet[2787]: E0416 23:56:16.132294 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:56:16.132590 kubelet[2787]: W0416 23:56:16.132300 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:56:16.132590 kubelet[2787]: E0416 23:56:16.132305 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 23:56:16.133013 kubelet[2787]: E0416 23:56:16.132685 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:56:16.133013 kubelet[2787]: W0416 23:56:16.132702 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:56:16.133013 kubelet[2787]: E0416 23:56:16.132720 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:56:16.133246 kubelet[2787]: E0416 23:56:16.133223 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:56:16.133246 kubelet[2787]: W0416 23:56:16.133243 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:56:16.133312 kubelet[2787]: E0416 23:56:16.133258 2787 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:56:16.623848 containerd[1632]: time="2026-04-16T23:56:16.623809341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:16.625400 containerd[1632]: time="2026-04-16T23:56:16.625345404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 16 23:56:16.626753 containerd[1632]: time="2026-04-16T23:56:16.626722611Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:16.628699 containerd[1632]: time="2026-04-16T23:56:16.628670233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:16.629274 containerd[1632]: time="2026-04-16T23:56:16.629241282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.639405313s"
Apr 16 23:56:16.629302 containerd[1632]: time="2026-04-16T23:56:16.629273255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 16 23:56:16.632201 containerd[1632]: time="2026-04-16T23:56:16.632177120Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 16 23:56:16.641340 containerd[1632]: time="2026-04-16T23:56:16.641320089Z" level=info msg="Container 6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:56:16.645375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098179541.mount: Deactivated successfully.
Apr 16 23:56:16.648308 containerd[1632]: time="2026-04-16T23:56:16.648290328Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3\""
Apr 16 23:56:16.648845 containerd[1632]: time="2026-04-16T23:56:16.648823046Z" level=info msg="StartContainer for \"6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3\""
Apr 16 23:56:16.649865 containerd[1632]: time="2026-04-16T23:56:16.649820194Z" level=info msg="connecting to shim 6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3" address="unix:///run/containerd/s/063a8ceec32c8fa359257b6c9d6e41009e9552517ce6ed2e9249a14de154a908" protocol=ttrpc version=3
Apr 16 23:56:16.672270 systemd[1]: Started cri-containerd-6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3.scope - libcontainer container 6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3.
Apr 16 23:56:16.722875 containerd[1632]: time="2026-04-16T23:56:16.722807806Z" level=info msg="StartContainer for \"6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3\" returns successfully"
Apr 16 23:56:16.736384 systemd[1]: cri-containerd-6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3.scope: Deactivated successfully.
Apr 16 23:56:16.740075 containerd[1632]: time="2026-04-16T23:56:16.740020718Z" level=info msg="received container exit event container_id:\"6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3\" id:\"6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3\" pid:3446 exited_at:{seconds:1776383776 nanos:739655717}"
Apr 16 23:56:16.757639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da520662752034e19bcb8ca33130ec1a64c105502416ad132776fdd66cad5d3-rootfs.mount: Deactivated successfully.
Apr 16 23:56:16.997350 kubelet[2787]: E0416 23:56:16.994306 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d"
Apr 16 23:56:17.073860 kubelet[2787]: I0416 23:56:17.073354 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:56:17.075068 containerd[1632]: time="2026-04-16T23:56:17.075046425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 16 23:56:18.995141 kubelet[2787]: E0416 23:56:18.994525 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d"
Apr 16 23:56:20.768180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114535922.mount: Deactivated successfully.
Apr 16 23:56:20.829957 containerd[1632]: time="2026-04-16T23:56:20.829911789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:20.832027 containerd[1632]: time="2026-04-16T23:56:20.832005276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 16 23:56:20.834244 containerd[1632]: time="2026-04-16T23:56:20.833821814Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:20.841342 containerd[1632]: time="2026-04-16T23:56:20.840426288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:20.841517 containerd[1632]: time="2026-04-16T23:56:20.840677612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.76547628s"
Apr 16 23:56:20.841545 containerd[1632]: time="2026-04-16T23:56:20.841520619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 16 23:56:20.845543 containerd[1632]: time="2026-04-16T23:56:20.845516269Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 16 23:56:20.855903 containerd[1632]: time="2026-04-16T23:56:20.855153576Z" level=info msg="Container a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:56:20.871565 containerd[1632]: time="2026-04-16T23:56:20.871540374Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca\""
Apr 16 23:56:20.872155 containerd[1632]: time="2026-04-16T23:56:20.871897276Z" level=info msg="StartContainer for \"a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca\""
Apr 16 23:56:20.873050 containerd[1632]: time="2026-04-16T23:56:20.873036408Z" level=info msg="connecting to shim a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca" address="unix:///run/containerd/s/063a8ceec32c8fa359257b6c9d6e41009e9552517ce6ed2e9249a14de154a908" protocol=ttrpc version=3
Apr 16 23:56:20.894323 systemd[1]: Started cri-containerd-a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca.scope - libcontainer container a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca.
Apr 16 23:56:20.944555 containerd[1632]: time="2026-04-16T23:56:20.944523534Z" level=info msg="StartContainer for \"a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca\" returns successfully"
Apr 16 23:56:20.979260 systemd[1]: cri-containerd-a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca.scope: Deactivated successfully.
Apr 16 23:56:20.982954 containerd[1632]: time="2026-04-16T23:56:20.982929727Z" level=info msg="received container exit event container_id:\"a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca\" id:\"a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca\" pid:3501 exited_at:{seconds:1776383780 nanos:982284793}"
Apr 16 23:56:20.995629 kubelet[2787]: E0416 23:56:20.995540 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d"
Apr 16 23:56:21.005004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a69560826ce2fd4dcb67adec5ff81745c5da50eba7b8b788f2258939571aefca-rootfs.mount: Deactivated successfully.
Apr 16 23:56:21.088749 containerd[1632]: time="2026-04-16T23:56:21.088388327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 16 23:56:22.994867 kubelet[2787]: E0416 23:56:22.994829 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d"
Apr 16 23:56:23.749896 containerd[1632]: time="2026-04-16T23:56:23.749827147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:23.750846 containerd[1632]: time="2026-04-16T23:56:23.750750282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 16 23:56:23.751705 containerd[1632]: time="2026-04-16T23:56:23.751682861Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:23.754057 containerd[1632]: time="2026-04-16T23:56:23.754011119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:23.756169 containerd[1632]: time="2026-04-16T23:56:23.755362903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.666924667s"
Apr 16 23:56:23.756169 containerd[1632]: time="2026-04-16T23:56:23.755388689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 16 23:56:23.760498 containerd[1632]: time="2026-04-16T23:56:23.760454787Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 23:56:23.770050 containerd[1632]: time="2026-04-16T23:56:23.769896625Z" level=info msg="Container 46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:56:23.784014 containerd[1632]: time="2026-04-16T23:56:23.783973087Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed\""
Apr 16 23:56:23.785114 containerd[1632]: time="2026-04-16T23:56:23.785087571Z" level=info msg="StartContainer for \"46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed\""
Apr 16 23:56:23.786344 containerd[1632]: time="2026-04-16T23:56:23.786319972Z" level=info msg="connecting to shim 46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed" address="unix:///run/containerd/s/063a8ceec32c8fa359257b6c9d6e41009e9552517ce6ed2e9249a14de154a908" protocol=ttrpc version=3
Apr 16 23:56:23.829249 systemd[1]: Started cri-containerd-46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed.scope - libcontainer container 46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed.
Apr 16 23:56:23.888513 containerd[1632]: time="2026-04-16T23:56:23.888463361Z" level=info msg="StartContainer for \"46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed\" returns successfully"
Apr 16 23:56:24.335876 containerd[1632]: time="2026-04-16T23:56:24.335791970Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 23:56:24.338987 systemd[1]: cri-containerd-46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed.scope: Deactivated successfully.
Apr 16 23:56:24.339574 systemd[1]: cri-containerd-46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed.scope: Consumed 435ms CPU time, 193M memory peak, 2.7M read from disk, 177M written to disk.
Apr 16 23:56:24.341769 containerd[1632]: time="2026-04-16T23:56:24.341292773Z" level=info msg="received container exit event container_id:\"46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed\" id:\"46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed\" pid:3557 exited_at:{seconds:1776383784 nanos:340974034}"
Apr 16 23:56:24.362813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46973bdb28ce297e23ad3bb302509f2e3f165c665712be2f627abe5c9a3677ed-rootfs.mount: Deactivated successfully.
Apr 16 23:56:24.423298 kubelet[2787]: I0416 23:56:24.423022 2787 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 16 23:56:24.463630 systemd[1]: Created slice kubepods-burstable-pod6c71639d_8d3c_4690_91b5_1523b8296aba.slice - libcontainer container kubepods-burstable-pod6c71639d_8d3c_4690_91b5_1523b8296aba.slice.
Apr 16 23:56:24.473487 systemd[1]: Created slice kubepods-burstable-pod0d809911_9317_452b_a955_9e9b28c4a3f0.slice - libcontainer container kubepods-burstable-pod0d809911_9317_452b_a955_9e9b28c4a3f0.slice.
Apr 16 23:56:24.485478 systemd[1]: Created slice kubepods-besteffort-pod0ef4b04a_66dc_4a7a_9538_58d3c30cdae3.slice - libcontainer container kubepods-besteffort-pod0ef4b04a_66dc_4a7a_9538_58d3c30cdae3.slice.
Apr 16 23:56:24.492385 systemd[1]: Created slice kubepods-besteffort-pod788ceb96_f215_499c_9d50_ef5dc95ae426.slice - libcontainer container kubepods-besteffort-pod788ceb96_f215_499c_9d50_ef5dc95ae426.slice.
Apr 16 23:56:24.494119 kubelet[2787]: I0416 23:56:24.493871 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7d33848-0070-4073-8ca8-c3a5995310bd-calico-apiserver-certs\") pod \"calico-apiserver-564655bcc6-rnsgt\" (UID: \"b7d33848-0070-4073-8ca8-c3a5995310bd\") " pod="calico-system/calico-apiserver-564655bcc6-rnsgt"
Apr 16 23:56:24.494119 kubelet[2787]: I0416 23:56:24.493917 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-nginx-config\") pod \"whisker-56975d45d4-jrkcs\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " pod="calico-system/whisker-56975d45d4-jrkcs"
Apr 16 23:56:24.494256 kubelet[2787]: I0416 23:56:24.494228 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhpvh\" (UniqueName: \"kubernetes.io/projected/5d8fc39f-6b99-42c2-8941-a9e5c4ae0512-kube-api-access-fhpvh\") pod \"goldmane-5b85766d88-wdtnt\" (UID: \"5d8fc39f-6b99-42c2-8941-a9e5c4ae0512\") " pod="calico-system/goldmane-5b85766d88-wdtnt"
Apr 16 23:56:24.494340 kubelet[2787]: I0416 23:56:24.494331 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjf9\" (UniqueName: \"kubernetes.io/projected/0d809911-9317-452b-a955-9e9b28c4a3f0-kube-api-access-kdjf9\") pod \"coredns-674b8bbfcf-lvjxk\" (UID: \"0d809911-9317-452b-a955-9e9b28c4a3f0\") " pod="kube-system/coredns-674b8bbfcf-lvjxk"
Apr 16 23:56:24.494448 kubelet[2787]: I0416 23:56:24.494415 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rnh\" (UniqueName: \"kubernetes.io/projected/6c71639d-8d3c-4690-91b5-1523b8296aba-kube-api-access-99rnh\") pod \"coredns-674b8bbfcf-pclrw\" (UID: \"6c71639d-8d3c-4690-91b5-1523b8296aba\") " pod="kube-system/coredns-674b8bbfcf-pclrw"
Apr 16 23:56:24.494584 kubelet[2787]: I0416 23:56:24.494510 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/788ceb96-f215-499c-9d50-ef5dc95ae426-calico-apiserver-certs\") pod \"calico-apiserver-564655bcc6-zq72f\" (UID: \"788ceb96-f215-499c-9d50-ef5dc95ae426\") " pod="calico-system/calico-apiserver-564655bcc6-zq72f"
Apr 16 23:56:24.494584 kubelet[2787]: I0416 23:56:24.494525 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c71639d-8d3c-4690-91b5-1523b8296aba-config-volume\") pod \"coredns-674b8bbfcf-pclrw\" (UID: \"6c71639d-8d3c-4690-91b5-1523b8296aba\") " pod="kube-system/coredns-674b8bbfcf-pclrw"
Apr 16 23:56:24.494584 kubelet[2787]: I0416 23:56:24.494540 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5d8fc39f-6b99-42c2-8941-a9e5c4ae0512-goldmane-key-pair\") pod \"goldmane-5b85766d88-wdtnt\" (UID: \"5d8fc39f-6b99-42c2-8941-a9e5c4ae0512\") " pod="calico-system/goldmane-5b85766d88-wdtnt"
Apr 16 23:56:24.494740 kubelet[2787]: I0416 23:56:24.494719 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcg92\" (UniqueName: \"kubernetes.io/projected/b7d33848-0070-4073-8ca8-c3a5995310bd-kube-api-access-wcg92\") pod \"calico-apiserver-564655bcc6-rnsgt\" (UID: \"b7d33848-0070-4073-8ca8-c3a5995310bd\") " pod="calico-system/calico-apiserver-564655bcc6-rnsgt"
Apr 16 23:56:24.494995 kubelet[2787]: I0416 23:56:24.494948 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sps7\" (UniqueName: \"kubernetes.io/projected/9ffd77c1-76d6-4bed-8564-7097358d02f4-kube-api-access-4sps7\") pod \"whisker-56975d45d4-jrkcs\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " pod="calico-system/whisker-56975d45d4-jrkcs"
Apr 16 23:56:24.495114 kubelet[2787]: I0416 23:56:24.495101 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-ca-bundle\") pod \"whisker-56975d45d4-jrkcs\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " pod="calico-system/whisker-56975d45d4-jrkcs"
Apr 16 23:56:24.495323 kubelet[2787]: I0416 23:56:24.495219 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmm7\" (UniqueName: \"kubernetes.io/projected/788ceb96-f215-499c-9d50-ef5dc95ae426-kube-api-access-xsmm7\") pod \"calico-apiserver-564655bcc6-zq72f\" (UID: \"788ceb96-f215-499c-9d50-ef5dc95ae426\") " pod="calico-system/calico-apiserver-564655bcc6-zq72f"
Apr 16 23:56:24.495323 kubelet[2787]: I0416 23:56:24.495290 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d8fc39f-6b99-42c2-8941-a9e5c4ae0512-config\") pod \"goldmane-5b85766d88-wdtnt\" (UID: \"5d8fc39f-6b99-42c2-8941-a9e5c4ae0512\") " pod="calico-system/goldmane-5b85766d88-wdtnt"
Apr 16 23:56:24.495475 kubelet[2787]: I0416 23:56:24.495409 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-backend-key-pair\") pod \"whisker-56975d45d4-jrkcs\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " pod="calico-system/whisker-56975d45d4-jrkcs"
Apr 16 23:56:24.495475 kubelet[2787]: I0416 23:56:24.495446 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d809911-9317-452b-a955-9e9b28c4a3f0-config-volume\") pod \"coredns-674b8bbfcf-lvjxk\" (UID: \"0d809911-9317-452b-a955-9e9b28c4a3f0\") " pod="kube-system/coredns-674b8bbfcf-lvjxk"
Apr 16 23:56:24.495696 kubelet[2787]: I0416 23:56:24.495618 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef4b04a-66dc-4a7a-9538-58d3c30cdae3-tigera-ca-bundle\") pod \"calico-kube-controllers-57bd645688-ksxcn\" (UID: \"0ef4b04a-66dc-4a7a-9538-58d3c30cdae3\") " pod="calico-system/calico-kube-controllers-57bd645688-ksxcn"
Apr 16 23:56:24.495788 kubelet[2787]: I0416 23:56:24.495769 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmkwn\" (UniqueName: \"kubernetes.io/projected/0ef4b04a-66dc-4a7a-9538-58d3c30cdae3-kube-api-access-jmkwn\") pod \"calico-kube-controllers-57bd645688-ksxcn\" (UID: \"0ef4b04a-66dc-4a7a-9538-58d3c30cdae3\") " pod="calico-system/calico-kube-controllers-57bd645688-ksxcn"
Apr 16 23:56:24.496146 kubelet[2787]: I0416 23:56:24.495903 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d8fc39f-6b99-42c2-8941-a9e5c4ae0512-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-wdtnt\" (UID: \"5d8fc39f-6b99-42c2-8941-a9e5c4ae0512\") " pod="calico-system/goldmane-5b85766d88-wdtnt"
Apr 16 23:56:24.501885 systemd[1]: Created slice kubepods-besteffort-podb7d33848_0070_4073_8ca8_c3a5995310bd.slice - libcontainer container kubepods-besteffort-podb7d33848_0070_4073_8ca8_c3a5995310bd.slice.
Apr 16 23:56:24.511067 systemd[1]: Created slice kubepods-besteffort-pod5d8fc39f_6b99_42c2_8941_a9e5c4ae0512.slice - libcontainer container kubepods-besteffort-pod5d8fc39f_6b99_42c2_8941_a9e5c4ae0512.slice.
Apr 16 23:56:24.515472 systemd[1]: Created slice kubepods-besteffort-pod9ffd77c1_76d6_4bed_8564_7097358d02f4.slice - libcontainer container kubepods-besteffort-pod9ffd77c1_76d6_4bed_8564_7097358d02f4.slice.
Apr 16 23:56:24.771366 containerd[1632]: time="2026-04-16T23:56:24.771309445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pclrw,Uid:6c71639d-8d3c-4690-91b5-1523b8296aba,Namespace:kube-system,Attempt:0,}"
Apr 16 23:56:24.792503 containerd[1632]: time="2026-04-16T23:56:24.791688666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lvjxk,Uid:0d809911-9317-452b-a955-9e9b28c4a3f0,Namespace:kube-system,Attempt:0,}"
Apr 16 23:56:24.795709 containerd[1632]: time="2026-04-16T23:56:24.794207893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bd645688-ksxcn,Uid:0ef4b04a-66dc-4a7a-9538-58d3c30cdae3,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:24.809604 containerd[1632]: time="2026-04-16T23:56:24.809386963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-rnsgt,Uid:b7d33848-0070-4073-8ca8-c3a5995310bd,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:24.809604 containerd[1632]: time="2026-04-16T23:56:24.809496887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-zq72f,Uid:788ceb96-f215-499c-9d50-ef5dc95ae426,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:24.819242 containerd[1632]: time="2026-04-16T23:56:24.819194379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wdtnt,Uid:5d8fc39f-6b99-42c2-8941-a9e5c4ae0512,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:24.819738 containerd[1632]: time="2026-04-16T23:56:24.819625429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56975d45d4-jrkcs,Uid:9ffd77c1-76d6-4bed-8564-7097358d02f4,Namespace:calico-system,Attempt:0,}"
Apr 16 23:56:24.962513 containerd[1632]: time="2026-04-16T23:56:24.962386930Z" level=error msg="Failed to destroy network for sandbox \"20d4145c79a60da5f587a8cc0053ade8aa7424da339ad1bc020efc9d7388a4dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.965671 containerd[1632]: time="2026-04-16T23:56:24.965644471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pclrw,Uid:6c71639d-8d3c-4690-91b5-1523b8296aba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d4145c79a60da5f587a8cc0053ade8aa7424da339ad1bc020efc9d7388a4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.966220 kubelet[2787]: E0416 23:56:24.965999 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d4145c79a60da5f587a8cc0053ade8aa7424da339ad1bc020efc9d7388a4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.966220 kubelet[2787]: E0416 23:56:24.966056 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d4145c79a60da5f587a8cc0053ade8aa7424da339ad1bc020efc9d7388a4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pclrw"
Apr 16 23:56:24.966220 kubelet[2787]: E0416 23:56:24.966074 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20d4145c79a60da5f587a8cc0053ade8aa7424da339ad1bc020efc9d7388a4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pclrw"
Apr 16 23:56:24.966312 kubelet[2787]: E0416 23:56:24.966118 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pclrw_kube-system(6c71639d-8d3c-4690-91b5-1523b8296aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pclrw_kube-system(6c71639d-8d3c-4690-91b5-1523b8296aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20d4145c79a60da5f587a8cc0053ade8aa7424da339ad1bc020efc9d7388a4dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pclrw" podUID="6c71639d-8d3c-4690-91b5-1523b8296aba"
Apr 16 23:56:24.972766 containerd[1632]: time="2026-04-16T23:56:24.972688302Z" level=error msg="Failed to destroy network for sandbox \"7e66326515fc4eab600801df8b2fb455991e077d92770d296fa66eb90d548b4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.975483 containerd[1632]: time="2026-04-16T23:56:24.975462592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56975d45d4-jrkcs,Uid:9ffd77c1-76d6-4bed-8564-7097358d02f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e66326515fc4eab600801df8b2fb455991e077d92770d296fa66eb90d548b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.976190 kubelet[2787]: E0416 23:56:24.975642 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e66326515fc4eab600801df8b2fb455991e077d92770d296fa66eb90d548b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.976190 kubelet[2787]: E0416 23:56:24.975677 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e66326515fc4eab600801df8b2fb455991e077d92770d296fa66eb90d548b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56975d45d4-jrkcs"
Apr 16 23:56:24.976190 kubelet[2787]: E0416 23:56:24.975692 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e66326515fc4eab600801df8b2fb455991e077d92770d296fa66eb90d548b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56975d45d4-jrkcs"
Apr 16 23:56:24.976281 kubelet[2787]: E0416 23:56:24.975725 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56975d45d4-jrkcs_calico-system(9ffd77c1-76d6-4bed-8564-7097358d02f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56975d45d4-jrkcs_calico-system(9ffd77c1-76d6-4bed-8564-7097358d02f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e66326515fc4eab600801df8b2fb455991e077d92770d296fa66eb90d548b4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56975d45d4-jrkcs" podUID="9ffd77c1-76d6-4bed-8564-7097358d02f4"
Apr 16 23:56:24.988401 containerd[1632]: time="2026-04-16T23:56:24.988299191Z" level=error msg="Failed to destroy network for sandbox \"890466fcd20cf2ccabeb7f16fef3991433e2bd106927be076b75b23db075e64d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.990038 containerd[1632]: time="2026-04-16T23:56:24.990017517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wdtnt,Uid:5d8fc39f-6b99-42c2-8941-a9e5c4ae0512,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"890466fcd20cf2ccabeb7f16fef3991433e2bd106927be076b75b23db075e64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.991100 kubelet[2787]: E0416 23:56:24.991039 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"890466fcd20cf2ccabeb7f16fef3991433e2bd106927be076b75b23db075e64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:56:24.991286 kubelet[2787]: E0416 23:56:24.991119 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"890466fcd20cf2ccabeb7f16fef3991433e2bd106927be076b75b23db075e64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-wdtnt"
Apr 16 23:56:24.991286 kubelet[2787]: E0416 23:56:24.991154 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"890466fcd20cf2ccabeb7f16fef3991433e2bd106927be076b75b23db075e64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-wdtnt"
Apr 16 23:56:24.991286 kubelet[2787]: E0416 23:56:24.991194 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-wdtnt_calico-system(5d8fc39f-6b99-42c2-8941-a9e5c4ae0512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-wdtnt_calico-system(5d8fc39f-6b99-42c2-8941-a9e5c4ae0512)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"890466fcd20cf2ccabeb7f16fef3991433e2bd106927be076b75b23db075e64d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-wdtnt" podUID="5d8fc39f-6b99-42c2-8941-a9e5c4ae0512"
Apr 16 23:56:25.003971 systemd[1]: Created slice kubepods-besteffort-pod03d6db92_84bd_442c_8aee_ce624ac6a17d.slice - libcontainer container kubepods-besteffort-pod03d6db92_84bd_442c_8aee_ce624ac6a17d.slice.
Apr 16 23:56:25.008986 containerd[1632]: time="2026-04-16T23:56:25.008902235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbzc2,Uid:03d6db92-84bd-442c-8aee-ce624ac6a17d,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:25.009630 containerd[1632]: time="2026-04-16T23:56:25.009609239Z" level=error msg="Failed to destroy network for sandbox \"68fd4c86687736d5c3f1bd9fb6bc43c0263bacf6c42dfab26911cc840b65bdd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.011713 containerd[1632]: time="2026-04-16T23:56:25.011696947Z" level=error msg="Failed to destroy network for sandbox \"122f17224df520f1f26f1e9f2a1d74c3a56ed6a75ea6dad7905245f7bf0ccb33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.012322 containerd[1632]: time="2026-04-16T23:56:25.012296506Z" level=error msg="Failed to destroy network for sandbox \"087197416ab7d425925f4d69afbf28fba5840be8aeebc6c4befc06734ca094d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.013582 containerd[1632]: time="2026-04-16T23:56:25.013424033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-rnsgt,Uid:b7d33848-0070-4073-8ca8-c3a5995310bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"087197416ab7d425925f4d69afbf28fba5840be8aeebc6c4befc06734ca094d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Apr 16 23:56:25.013647 kubelet[2787]: E0416 23:56:25.013621 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"087197416ab7d425925f4d69afbf28fba5840be8aeebc6c4befc06734ca094d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.013683 kubelet[2787]: E0416 23:56:25.013651 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"087197416ab7d425925f4d69afbf28fba5840be8aeebc6c4befc06734ca094d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-564655bcc6-rnsgt" Apr 16 23:56:25.013683 kubelet[2787]: E0416 23:56:25.013667 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"087197416ab7d425925f4d69afbf28fba5840be8aeebc6c4befc06734ca094d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-564655bcc6-rnsgt" Apr 16 23:56:25.013734 kubelet[2787]: E0416 23:56:25.013706 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564655bcc6-rnsgt_calico-system(b7d33848-0070-4073-8ca8-c3a5995310bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564655bcc6-rnsgt_calico-system(b7d33848-0070-4073-8ca8-c3a5995310bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"087197416ab7d425925f4d69afbf28fba5840be8aeebc6c4befc06734ca094d3\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-564655bcc6-rnsgt" podUID="b7d33848-0070-4073-8ca8-c3a5995310bd" Apr 16 23:56:25.016426 containerd[1632]: time="2026-04-16T23:56:25.016383824Z" level=error msg="Failed to destroy network for sandbox \"005481ab04b00562bff8cab7717344ff1c76ce7b86e75b65fb71d1fb975253e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.016693 containerd[1632]: time="2026-04-16T23:56:25.016504287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-zq72f,Uid:788ceb96-f215-499c-9d50-ef5dc95ae426,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f17224df520f1f26f1e9f2a1d74c3a56ed6a75ea6dad7905245f7bf0ccb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.016984 kubelet[2787]: E0416 23:56:25.016932 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f17224df520f1f26f1e9f2a1d74c3a56ed6a75ea6dad7905245f7bf0ccb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.017026 kubelet[2787]: E0416 23:56:25.016990 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f17224df520f1f26f1e9f2a1d74c3a56ed6a75ea6dad7905245f7bf0ccb33\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-564655bcc6-zq72f" Apr 16 23:56:25.017026 kubelet[2787]: E0416 23:56:25.017003 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122f17224df520f1f26f1e9f2a1d74c3a56ed6a75ea6dad7905245f7bf0ccb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-564655bcc6-zq72f" Apr 16 23:56:25.017064 kubelet[2787]: E0416 23:56:25.017029 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564655bcc6-zq72f_calico-system(788ceb96-f215-499c-9d50-ef5dc95ae426)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564655bcc6-zq72f_calico-system(788ceb96-f215-499c-9d50-ef5dc95ae426)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"122f17224df520f1f26f1e9f2a1d74c3a56ed6a75ea6dad7905245f7bf0ccb33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-564655bcc6-zq72f" podUID="788ceb96-f215-499c-9d50-ef5dc95ae426" Apr 16 23:56:25.017825 containerd[1632]: time="2026-04-16T23:56:25.017734555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bd645688-ksxcn,Uid:0ef4b04a-66dc-4a7a-9538-58d3c30cdae3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4c86687736d5c3f1bd9fb6bc43c0263bacf6c42dfab26911cc840b65bdd0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.017917 kubelet[2787]: E0416 23:56:25.017893 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4c86687736d5c3f1bd9fb6bc43c0263bacf6c42dfab26911cc840b65bdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.017943 kubelet[2787]: E0416 23:56:25.017919 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4c86687736d5c3f1bd9fb6bc43c0263bacf6c42dfab26911cc840b65bdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57bd645688-ksxcn" Apr 16 23:56:25.017943 kubelet[2787]: E0416 23:56:25.017931 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fd4c86687736d5c3f1bd9fb6bc43c0263bacf6c42dfab26911cc840b65bdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57bd645688-ksxcn" Apr 16 23:56:25.018107 kubelet[2787]: E0416 23:56:25.017961 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57bd645688-ksxcn_calico-system(0ef4b04a-66dc-4a7a-9538-58d3c30cdae3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-57bd645688-ksxcn_calico-system(0ef4b04a-66dc-4a7a-9538-58d3c30cdae3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68fd4c86687736d5c3f1bd9fb6bc43c0263bacf6c42dfab26911cc840b65bdd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57bd645688-ksxcn" podUID="0ef4b04a-66dc-4a7a-9538-58d3c30cdae3" Apr 16 23:56:25.018836 containerd[1632]: time="2026-04-16T23:56:25.018803270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lvjxk,Uid:0d809911-9317-452b-a955-9e9b28c4a3f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"005481ab04b00562bff8cab7717344ff1c76ce7b86e75b65fb71d1fb975253e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.019447 kubelet[2787]: E0416 23:56:25.019410 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005481ab04b00562bff8cab7717344ff1c76ce7b86e75b65fb71d1fb975253e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.019896 kubelet[2787]: E0416 23:56:25.019873 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005481ab04b00562bff8cab7717344ff1c76ce7b86e75b65fb71d1fb975253e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lvjxk" Apr 16 23:56:25.019896 kubelet[2787]: E0416 23:56:25.019893 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005481ab04b00562bff8cab7717344ff1c76ce7b86e75b65fb71d1fb975253e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lvjxk" Apr 16 23:56:25.019956 kubelet[2787]: E0416 23:56:25.019915 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lvjxk_kube-system(0d809911-9317-452b-a955-9e9b28c4a3f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lvjxk_kube-system(0d809911-9317-452b-a955-9e9b28c4a3f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"005481ab04b00562bff8cab7717344ff1c76ce7b86e75b65fb71d1fb975253e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lvjxk" podUID="0d809911-9317-452b-a955-9e9b28c4a3f0" Apr 16 23:56:25.057445 containerd[1632]: time="2026-04-16T23:56:25.057341674Z" level=error msg="Failed to destroy network for sandbox \"56a09d4fb48f1a03c171d87237aa70b67b8b17583ad375d584246f8595ae0007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.059546 containerd[1632]: time="2026-04-16T23:56:25.059504249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbzc2,Uid:03d6db92-84bd-442c-8aee-ce624ac6a17d,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a09d4fb48f1a03c171d87237aa70b67b8b17583ad375d584246f8595ae0007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.059711 kubelet[2787]: E0416 23:56:25.059627 2787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a09d4fb48f1a03c171d87237aa70b67b8b17583ad375d584246f8595ae0007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:56:25.059711 kubelet[2787]: E0416 23:56:25.059679 2787 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a09d4fb48f1a03c171d87237aa70b67b8b17583ad375d584246f8595ae0007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:25.059711 kubelet[2787]: E0416 23:56:25.059696 2787 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56a09d4fb48f1a03c171d87237aa70b67b8b17583ad375d584246f8595ae0007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbzc2" Apr 16 23:56:25.059806 kubelet[2787]: E0416 23:56:25.059734 2787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbzc2_calico-system(03d6db92-84bd-442c-8aee-ce624ac6a17d)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nbzc2_calico-system(03d6db92-84bd-442c-8aee-ce624ac6a17d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56a09d4fb48f1a03c171d87237aa70b67b8b17583ad375d584246f8595ae0007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbzc2" podUID="03d6db92-84bd-442c-8aee-ce624ac6a17d" Apr 16 23:56:25.109379 containerd[1632]: time="2026-04-16T23:56:25.109333741Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 23:56:25.118312 containerd[1632]: time="2026-04-16T23:56:25.118278233Z" level=info msg="Container a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:25.124789 containerd[1632]: time="2026-04-16T23:56:25.124759872Z" level=info msg="CreateContainer within sandbox \"c7861a13e5613996c64592ec6781bf7a276a08e69f7e8ce0e8480f8feeb666eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a\"" Apr 16 23:56:25.127941 containerd[1632]: time="2026-04-16T23:56:25.127888480Z" level=info msg="StartContainer for \"a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a\"" Apr 16 23:56:25.129909 containerd[1632]: time="2026-04-16T23:56:25.129667077Z" level=info msg="connecting to shim a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a" address="unix:///run/containerd/s/063a8ceec32c8fa359257b6c9d6e41009e9552517ce6ed2e9249a14de154a908" protocol=ttrpc version=3 Apr 16 23:56:25.153242 systemd[1]: Started cri-containerd-a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a.scope - libcontainer 
container a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a. Apr 16 23:56:25.218932 containerd[1632]: time="2026-04-16T23:56:25.218869854Z" level=info msg="StartContainer for \"a74bbc33c6f6365e1158b05762938b448dd1b9343c76c0000a151a813262171a\" returns successfully" Apr 16 23:56:25.405049 kubelet[2787]: I0416 23:56:25.404940 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sps7\" (UniqueName: \"kubernetes.io/projected/9ffd77c1-76d6-4bed-8564-7097358d02f4-kube-api-access-4sps7\") pod \"9ffd77c1-76d6-4bed-8564-7097358d02f4\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " Apr 16 23:56:25.405049 kubelet[2787]: I0416 23:56:25.404987 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-nginx-config\") pod \"9ffd77c1-76d6-4bed-8564-7097358d02f4\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " Apr 16 23:56:25.405049 kubelet[2787]: I0416 23:56:25.405004 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-ca-bundle\") pod \"9ffd77c1-76d6-4bed-8564-7097358d02f4\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " Apr 16 23:56:25.405049 kubelet[2787]: I0416 23:56:25.405016 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-backend-key-pair\") pod \"9ffd77c1-76d6-4bed-8564-7097358d02f4\" (UID: \"9ffd77c1-76d6-4bed-8564-7097358d02f4\") " Apr 16 23:56:25.405681 kubelet[2787]: I0416 23:56:25.405654 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-nginx-config" (OuterVolumeSpecName: "nginx-config") pod 
"9ffd77c1-76d6-4bed-8564-7097358d02f4" (UID: "9ffd77c1-76d6-4bed-8564-7097358d02f4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:56:25.406281 kubelet[2787]: I0416 23:56:25.406176 2787 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-nginx-config\") on node \"ci-4459-2-4-n-391826f4f6\" DevicePath \"\"" Apr 16 23:56:25.406281 kubelet[2787]: I0416 23:56:25.406200 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ffd77c1-76d6-4bed-8564-7097358d02f4" (UID: "9ffd77c1-76d6-4bed-8564-7097358d02f4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:56:25.409559 kubelet[2787]: I0416 23:56:25.409538 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ffd77c1-76d6-4bed-8564-7097358d02f4" (UID: "9ffd77c1-76d6-4bed-8564-7097358d02f4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:56:25.409823 kubelet[2787]: I0416 23:56:25.409790 2787 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffd77c1-76d6-4bed-8564-7097358d02f4-kube-api-access-4sps7" (OuterVolumeSpecName: "kube-api-access-4sps7") pod "9ffd77c1-76d6-4bed-8564-7097358d02f4" (UID: "9ffd77c1-76d6-4bed-8564-7097358d02f4"). InnerVolumeSpecName "kube-api-access-4sps7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:56:25.507046 kubelet[2787]: I0416 23:56:25.506879 2787 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-ca-bundle\") on node \"ci-4459-2-4-n-391826f4f6\" DevicePath \"\"" Apr 16 23:56:25.507046 kubelet[2787]: I0416 23:56:25.506925 2787 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ffd77c1-76d6-4bed-8564-7097358d02f4-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-391826f4f6\" DevicePath \"\"" Apr 16 23:56:25.507046 kubelet[2787]: I0416 23:56:25.506946 2787 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4sps7\" (UniqueName: \"kubernetes.io/projected/9ffd77c1-76d6-4bed-8564-7097358d02f4-kube-api-access-4sps7\") on node \"ci-4459-2-4-n-391826f4f6\" DevicePath \"\"" Apr 16 23:56:25.775770 systemd[1]: run-netns-cni\x2d2299ef03\x2d19a6\x2dda0c\x2db419\x2d4d2e3b3010a2.mount: Deactivated successfully. Apr 16 23:56:25.775947 systemd[1]: run-netns-cni\x2d73ca76b9\x2df74b\x2d6d76\x2d90fd\x2ddee35a59ae27.mount: Deactivated successfully. Apr 16 23:56:25.776069 systemd[1]: run-netns-cni\x2d7df77e6c\x2d40cb\x2d4ebb\x2d3cce\x2dba3a09994df9.mount: Deactivated successfully. Apr 16 23:56:25.776659 systemd[1]: run-netns-cni\x2d825a21c1\x2d9adc\x2d7c27\x2d46ea\x2d38fa62f67d0a.mount: Deactivated successfully. Apr 16 23:56:25.776888 systemd[1]: var-lib-kubelet-pods-9ffd77c1\x2d76d6\x2d4bed\x2d8564\x2d7097358d02f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4sps7.mount: Deactivated successfully. Apr 16 23:56:25.777173 systemd[1]: var-lib-kubelet-pods-9ffd77c1\x2d76d6\x2d4bed\x2d8564\x2d7097358d02f4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 16 23:56:26.125590 systemd[1]: Removed slice kubepods-besteffort-pod9ffd77c1_76d6_4bed_8564_7097358d02f4.slice - libcontainer container kubepods-besteffort-pod9ffd77c1_76d6_4bed_8564_7097358d02f4.slice. Apr 16 23:56:26.144629 kubelet[2787]: I0416 23:56:26.144170 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zl4h8" podStartSLOduration=3.030771416 podStartE2EDuration="15.144116575s" podCreationTimestamp="2026-04-16 23:56:11 +0000 UTC" firstStartedPulling="2026-04-16 23:56:11.643737734 +0000 UTC m=+16.743429864" lastFinishedPulling="2026-04-16 23:56:23.757082882 +0000 UTC m=+28.856775023" observedRunningTime="2026-04-16 23:56:26.143871567 +0000 UTC m=+31.243563728" watchObservedRunningTime="2026-04-16 23:56:26.144116575 +0000 UTC m=+31.243808746" Apr 16 23:56:26.213913 systemd[1]: Created slice kubepods-besteffort-pod9108dd67_35d7_46fa_8b9c_ba021f0caad0.slice - libcontainer container kubepods-besteffort-pod9108dd67_35d7_46fa_8b9c_ba021f0caad0.slice. 
Apr 16 23:56:26.315318 kubelet[2787]: I0416 23:56:26.315247 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9108dd67-35d7-46fa-8b9c-ba021f0caad0-whisker-backend-key-pair\") pod \"whisker-79d8cfc65c-vrkjq\" (UID: \"9108dd67-35d7-46fa-8b9c-ba021f0caad0\") " pod="calico-system/whisker-79d8cfc65c-vrkjq" Apr 16 23:56:26.315318 kubelet[2787]: I0416 23:56:26.315286 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9108dd67-35d7-46fa-8b9c-ba021f0caad0-whisker-ca-bundle\") pod \"whisker-79d8cfc65c-vrkjq\" (UID: \"9108dd67-35d7-46fa-8b9c-ba021f0caad0\") " pod="calico-system/whisker-79d8cfc65c-vrkjq" Apr 16 23:56:26.315318 kubelet[2787]: I0416 23:56:26.315298 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9108dd67-35d7-46fa-8b9c-ba021f0caad0-nginx-config\") pod \"whisker-79d8cfc65c-vrkjq\" (UID: \"9108dd67-35d7-46fa-8b9c-ba021f0caad0\") " pod="calico-system/whisker-79d8cfc65c-vrkjq" Apr 16 23:56:26.315318 kubelet[2787]: I0416 23:56:26.315314 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8knb\" (UniqueName: \"kubernetes.io/projected/9108dd67-35d7-46fa-8b9c-ba021f0caad0-kube-api-access-h8knb\") pod \"whisker-79d8cfc65c-vrkjq\" (UID: \"9108dd67-35d7-46fa-8b9c-ba021f0caad0\") " pod="calico-system/whisker-79d8cfc65c-vrkjq" Apr 16 23:56:26.520750 containerd[1632]: time="2026-04-16T23:56:26.520640340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d8cfc65c-vrkjq,Uid:9108dd67-35d7-46fa-8b9c-ba021f0caad0,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:26.672372 systemd-networkd[1501]: cali99309e93716: Link UP Apr 16 23:56:26.672893 systemd-networkd[1501]: 
cali99309e93716: Gained carrier Apr 16 23:56:26.687992 containerd[1632]: 2026-04-16 23:56:26.565 [ERROR][3875] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:26.687992 containerd[1632]: 2026-04-16 23:56:26.591 [INFO][3875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0 whisker-79d8cfc65c- calico-system 9108dd67-35d7-46fa-8b9c-ba021f0caad0 869 0 2026-04-16 23:56:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79d8cfc65c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 whisker-79d8cfc65c-vrkjq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali99309e93716 [] [] }} ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-" Apr 16 23:56:26.687992 containerd[1632]: 2026-04-16 23:56:26.591 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 23:56:26.687992 containerd[1632]: 2026-04-16 23:56:26.626 [INFO][3887] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" HandleID="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Workload="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 
23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.634 [INFO][3887] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" HandleID="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Workload="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"whisker-79d8cfc65c-vrkjq", "timestamp":"2026-04-16 23:56:26.626526242 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000260580)} Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.634 [INFO][3887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.634 [INFO][3887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.634 [INFO][3887] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.637 [INFO][3887] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.641 [INFO][3887] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.646 [INFO][3887] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.647 [INFO][3887] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688176 containerd[1632]: 2026-04-16 23:56:26.649 [INFO][3887] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.649 [INFO][3887] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.650 [INFO][3887] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1 Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.653 [INFO][3887] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.657 [INFO][3887] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.5.65/26] block=192.168.5.64/26 handle="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.658 [INFO][3887] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.65/26] handle="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.658 [INFO][3887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:56:26.688346 containerd[1632]: 2026-04-16 23:56:26.658 [INFO][3887] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.65/26] IPv6=[] ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" HandleID="k8s-pod-network.6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Workload="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 23:56:26.688442 containerd[1632]: 2026-04-16 23:56:26.662 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0", GenerateName:"whisker-79d8cfc65c-", Namespace:"calico-system", SelfLink:"", UID:"9108dd67-35d7-46fa-8b9c-ba021f0caad0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d8cfc65c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"whisker-79d8cfc65c-vrkjq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99309e93716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:26.688442 containerd[1632]: 2026-04-16 23:56:26.662 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.65/32] ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 23:56:26.688500 containerd[1632]: 2026-04-16 23:56:26.662 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99309e93716 ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 23:56:26.688500 containerd[1632]: 2026-04-16 23:56:26.672 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 23:56:26.688530 containerd[1632]: 2026-04-16 23:56:26.672 [INFO][3875] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0", GenerateName:"whisker-79d8cfc65c-", Namespace:"calico-system", SelfLink:"", UID:"9108dd67-35d7-46fa-8b9c-ba021f0caad0", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d8cfc65c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1", Pod:"whisker-79d8cfc65c-vrkjq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali99309e93716", MAC:"de:5f:05:6c:0d:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:26.690957 containerd[1632]: 2026-04-16 23:56:26.681 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" Namespace="calico-system" Pod="whisker-79d8cfc65c-vrkjq" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-whisker--79d8cfc65c--vrkjq-eth0" Apr 16 23:56:26.742439 containerd[1632]: time="2026-04-16T23:56:26.742171187Z" level=info msg="connecting to shim 6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1" address="unix:///run/containerd/s/db8974bda23d744fefc61ecc455ab8203fb6040c0ebaba98d6d7754b450a8ef0" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:26.786021 systemd[1]: Started cri-containerd-6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1.scope - libcontainer container 6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1. Apr 16 23:56:26.844060 containerd[1632]: time="2026-04-16T23:56:26.843996117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d8cfc65c-vrkjq,Uid:9108dd67-35d7-46fa-8b9c-ba021f0caad0,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1\"" Apr 16 23:56:26.845905 containerd[1632]: time="2026-04-16T23:56:26.845880944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 23:56:26.997311 kubelet[2787]: I0416 23:56:26.997254 2787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffd77c1-76d6-4bed-8564-7097358d02f4" path="/var/lib/kubelet/pods/9ffd77c1-76d6-4bed-8564-7097358d02f4/volumes" Apr 16 23:56:27.115069 kubelet[2787]: I0416 23:56:27.114567 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:56:28.206547 systemd-networkd[1501]: cali99309e93716: Gained IPv6LL Apr 16 23:56:28.708914 kubelet[2787]: I0416 23:56:28.708858 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:56:29.174407 containerd[1632]: time="2026-04-16T23:56:29.174368000Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:29.175431 containerd[1632]: time="2026-04-16T23:56:29.175287121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 23:56:29.176294 containerd[1632]: time="2026-04-16T23:56:29.176277255Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:29.178009 containerd[1632]: time="2026-04-16T23:56:29.177987998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:29.178436 containerd[1632]: time="2026-04-16T23:56:29.178419111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.332512779s" Apr 16 23:56:29.178491 containerd[1632]: time="2026-04-16T23:56:29.178481764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 23:56:29.182035 containerd[1632]: time="2026-04-16T23:56:29.182009297Z" level=info msg="CreateContainer within sandbox \"6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 23:56:29.188492 containerd[1632]: time="2026-04-16T23:56:29.188460279Z" level=info msg="Container e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c: CDI devices from CRI 
Config.CDIDevices: []" Apr 16 23:56:29.202500 containerd[1632]: time="2026-04-16T23:56:29.202473373Z" level=info msg="CreateContainer within sandbox \"6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c\"" Apr 16 23:56:29.203699 containerd[1632]: time="2026-04-16T23:56:29.202985160Z" level=info msg="StartContainer for \"e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c\"" Apr 16 23:56:29.204489 containerd[1632]: time="2026-04-16T23:56:29.204469631Z" level=info msg="connecting to shim e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c" address="unix:///run/containerd/s/db8974bda23d744fefc61ecc455ab8203fb6040c0ebaba98d6d7754b450a8ef0" protocol=ttrpc version=3 Apr 16 23:56:29.227233 systemd[1]: Started cri-containerd-e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c.scope - libcontainer container e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c. Apr 16 23:56:29.268530 containerd[1632]: time="2026-04-16T23:56:29.268503582Z" level=info msg="StartContainer for \"e4ec0cfabc0545ea4def3c07ebff1cca0f1a349d5861408fadbfba8feca58a6c\" returns successfully" Apr 16 23:56:29.271422 containerd[1632]: time="2026-04-16T23:56:29.271361567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 23:56:30.015194 systemd[1]: Started sshd@8-77.42.25.117:22-46.59.97.98:47758.service - OpenSSH per-connection server daemon (46.59.97.98:47758). Apr 16 23:56:31.454750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130507016.mount: Deactivated successfully. 
Apr 16 23:56:31.470347 containerd[1632]: time="2026-04-16T23:56:31.470313478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:31.471240 containerd[1632]: time="2026-04-16T23:56:31.471219375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 23:56:31.472190 containerd[1632]: time="2026-04-16T23:56:31.472172189Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:31.474219 containerd[1632]: time="2026-04-16T23:56:31.473750721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:31.474219 containerd[1632]: time="2026-04-16T23:56:31.474116356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.202735854s" Apr 16 23:56:31.474219 containerd[1632]: time="2026-04-16T23:56:31.474154361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 23:56:31.477700 containerd[1632]: time="2026-04-16T23:56:31.477677138Z" level=info msg="CreateContainer within sandbox \"6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 23:56:31.485733 
containerd[1632]: time="2026-04-16T23:56:31.485226051Z" level=info msg="Container 75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:31.495435 containerd[1632]: time="2026-04-16T23:56:31.495409272Z" level=info msg="CreateContainer within sandbox \"6e9f9a524957081fce9dd1ab4d9e78f146c14cf76a5db04d07e394114f0107b1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf\"" Apr 16 23:56:31.495960 containerd[1632]: time="2026-04-16T23:56:31.495944013Z" level=info msg="StartContainer for \"75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf\"" Apr 16 23:56:31.496719 containerd[1632]: time="2026-04-16T23:56:31.496689086Z" level=info msg="connecting to shim 75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf" address="unix:///run/containerd/s/db8974bda23d744fefc61ecc455ab8203fb6040c0ebaba98d6d7754b450a8ef0" protocol=ttrpc version=3 Apr 16 23:56:31.515320 systemd[1]: Started cri-containerd-75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf.scope - libcontainer container 75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf. Apr 16 23:56:31.556604 containerd[1632]: time="2026-04-16T23:56:31.556492310Z" level=info msg="StartContainer for \"75f17dbd092a214036710b9ebeb5aa2fb9c6f32d68042e4bb7d28be49a87c6cf\" returns successfully" Apr 16 23:56:31.804918 sshd[4171]: Invalid user supervisor from 46.59.97.98 port 47758 Apr 16 23:56:32.116495 sshd[4171]: PAM user mismatch Apr 16 23:56:32.119919 systemd[1]: sshd@8-77.42.25.117:22-46.59.97.98:47758.service: Deactivated successfully. 
Apr 16 23:56:32.166470 kubelet[2787]: I0416 23:56:32.164540 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79d8cfc65c-vrkjq" podStartSLOduration=1.5352280029999998 podStartE2EDuration="6.164516719s" podCreationTimestamp="2026-04-16 23:56:26 +0000 UTC" firstStartedPulling="2026-04-16 23:56:26.845529896 +0000 UTC m=+31.945222027" lastFinishedPulling="2026-04-16 23:56:31.474818602 +0000 UTC m=+36.574510743" observedRunningTime="2026-04-16 23:56:32.154372559 +0000 UTC m=+37.254064730" watchObservedRunningTime="2026-04-16 23:56:32.164516719 +0000 UTC m=+37.264208890" Apr 16 23:56:34.207972 systemd[1]: Started sshd@9-77.42.25.117:22-176.100.124.153:36950.service - OpenSSH per-connection server daemon (176.100.124.153:36950). Apr 16 23:56:35.920717 sshd[4296]: Invalid user supervisor from 176.100.124.153 port 36950 Apr 16 23:56:35.996651 containerd[1632]: time="2026-04-16T23:56:35.996159585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbzc2,Uid:03d6db92-84bd-442c-8aee-ce624ac6a17d,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:35.997683 containerd[1632]: time="2026-04-16T23:56:35.996950539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bd645688-ksxcn,Uid:0ef4b04a-66dc-4a7a-9538-58d3c30cdae3,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:35.997683 containerd[1632]: time="2026-04-16T23:56:35.997242348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-zq72f,Uid:788ceb96-f215-499c-9d50-ef5dc95ae426,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:36.166481 systemd-networkd[1501]: cali48e3e555c36: Link UP Apr 16 23:56:36.166647 systemd-networkd[1501]: cali48e3e555c36: Gained carrier Apr 16 23:56:36.181520 containerd[1632]: 2026-04-16 23:56:36.068 [ERROR][4345] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or 
directory filename="/var/lib/calico/mtu" Apr 16 23:56:36.181520 containerd[1632]: 2026-04-16 23:56:36.078 [INFO][4345] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0 calico-kube-controllers-57bd645688- calico-system 0ef4b04a-66dc-4a7a-9538-58d3c30cdae3 814 0 2026-04-16 23:56:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57bd645688 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 calico-kube-controllers-57bd645688-ksxcn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali48e3e555c36 [] [] }} ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-" Apr 16 23:56:36.181520 containerd[1632]: 2026-04-16 23:56:36.078 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.181520 containerd[1632]: 2026-04-16 23:56:36.113 [INFO][4374] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" HandleID="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 
23:56:36.123 [INFO][4374] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" HandleID="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"calico-kube-controllers-57bd645688-ksxcn", "timestamp":"2026-04-16 23:56:36.113487692 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036f600)} Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.123 [INFO][4374] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.123 [INFO][4374] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.123 [INFO][4374] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.126 [INFO][4374] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.145 [INFO][4374] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.150 [INFO][4374] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.151 [INFO][4374] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.181917 containerd[1632]: 2026-04-16 23:56:36.152 [INFO][4374] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.153 [INFO][4374] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.154 [INFO][4374] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.157 [INFO][4374] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.161 [INFO][4374] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.5.66/26] block=192.168.5.64/26 handle="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.161 [INFO][4374] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.66/26] handle="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.161 [INFO][4374] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:56:36.182332 containerd[1632]: 2026-04-16 23:56:36.161 [INFO][4374] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.66/26] IPv6=[] ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" HandleID="k8s-pod-network.a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.182487 containerd[1632]: 2026-04-16 23:56:36.163 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0", GenerateName:"calico-kube-controllers-57bd645688-", Namespace:"calico-system", SelfLink:"", UID:"0ef4b04a-66dc-4a7a-9538-58d3c30cdae3", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57bd645688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"calico-kube-controllers-57bd645688-ksxcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48e3e555c36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:36.182569 containerd[1632]: 2026-04-16 23:56:36.163 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.66/32] ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.182569 containerd[1632]: 2026-04-16 23:56:36.163 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48e3e555c36 ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.182569 containerd[1632]: 2026-04-16 23:56:36.166 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.182619 containerd[1632]: 2026-04-16 23:56:36.167 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0", GenerateName:"calico-kube-controllers-57bd645688-", Namespace:"calico-system", SelfLink:"", UID:"0ef4b04a-66dc-4a7a-9538-58d3c30cdae3", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57bd645688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee", Pod:"calico-kube-controllers-57bd645688-ksxcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.66/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48e3e555c36", MAC:"86:a6:17:1e:2c:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:36.182674 containerd[1632]: 2026-04-16 23:56:36.179 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" Namespace="calico-system" Pod="calico-kube-controllers-57bd645688-ksxcn" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--kube--controllers--57bd645688--ksxcn-eth0" Apr 16 23:56:36.202381 containerd[1632]: time="2026-04-16T23:56:36.202305461Z" level=info msg="connecting to shim a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee" address="unix:///run/containerd/s/5adc17723f27a91573ae973234b1814975309e68d8f5cfcccd45a65dd4e188a1" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:36.225304 systemd[1]: Started cri-containerd-a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee.scope - libcontainer container a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee. Apr 16 23:56:36.244361 sshd[4296]: PAM user mismatch Apr 16 23:56:36.247237 systemd[1]: sshd@9-77.42.25.117:22-176.100.124.153:36950.service: Deactivated successfully. 
Apr 16 23:56:36.274034 containerd[1632]: time="2026-04-16T23:56:36.273980020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57bd645688-ksxcn,Uid:0ef4b04a-66dc-4a7a-9538-58d3c30cdae3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee\"" Apr 16 23:56:36.276333 containerd[1632]: time="2026-04-16T23:56:36.276211863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:56:36.281250 systemd-networkd[1501]: cali567f2fc8c34: Link UP Apr 16 23:56:36.281424 systemd-networkd[1501]: cali567f2fc8c34: Gained carrier Apr 16 23:56:36.293273 containerd[1632]: 2026-04-16 23:56:36.068 [ERROR][4338] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:36.293273 containerd[1632]: 2026-04-16 23:56:36.082 [INFO][4338] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0 csi-node-driver- calico-system 03d6db92-84bd-442c-8aee-ce624ac6a17d 683 0 2026-04-16 23:56:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 csi-node-driver-nbzc2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali567f2fc8c34 [] [] }} ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-" Apr 16 23:56:36.293273 containerd[1632]: 2026-04-16 23:56:36.082 
[INFO][4338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.293273 containerd[1632]: 2026-04-16 23:56:36.119 [INFO][4379] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" HandleID="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Workload="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.124 [INFO][4379] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" HandleID="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Workload="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"csi-node-driver-nbzc2", "timestamp":"2026-04-16 23:56:36.11961462 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.124 [INFO][4379] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.161 [INFO][4379] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.161 [INFO][4379] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.227 [INFO][4379] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.246 [INFO][4379] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.252 [INFO][4379] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.253 [INFO][4379] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293539 containerd[1632]: 2026-04-16 23:56:36.255 [INFO][4379] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.255 [INFO][4379] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.257 [INFO][4379] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56 Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.266 [INFO][4379] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.271 [INFO][4379] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.5.67/26] block=192.168.5.64/26 handle="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.271 [INFO][4379] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.67/26] handle="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.272 [INFO][4379] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:56:36.293796 containerd[1632]: 2026-04-16 23:56:36.272 [INFO][4379] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.67/26] IPv6=[] ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" HandleID="k8s-pod-network.eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Workload="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.293949 containerd[1632]: 2026-04-16 23:56:36.276 [INFO][4338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03d6db92-84bd-442c-8aee-ce624ac6a17d", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"csi-node-driver-nbzc2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali567f2fc8c34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:36.293993 containerd[1632]: 2026-04-16 23:56:36.277 [INFO][4338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.67/32] ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.293993 containerd[1632]: 2026-04-16 23:56:36.277 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali567f2fc8c34 ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.293993 containerd[1632]: 2026-04-16 23:56:36.281 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.294053 containerd[1632]: 2026-04-16 
23:56:36.281 [INFO][4338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03d6db92-84bd-442c-8aee-ce624ac6a17d", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56", Pod:"csi-node-driver-nbzc2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali567f2fc8c34", MAC:"ee:b2:8f:73:ee:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:36.294100 containerd[1632]: 2026-04-16 23:56:36.291 
[INFO][4338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" Namespace="calico-system" Pod="csi-node-driver-nbzc2" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-csi--node--driver--nbzc2-eth0" Apr 16 23:56:36.310190 containerd[1632]: time="2026-04-16T23:56:36.309739015Z" level=info msg="connecting to shim eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56" address="unix:///run/containerd/s/b3bc0363cfceea69722b852b61e3891bbf7d49de866b255ed26adcdf768572aa" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:36.329248 systemd[1]: Started cri-containerd-eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56.scope - libcontainer container eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56. Apr 16 23:56:36.353744 containerd[1632]: time="2026-04-16T23:56:36.353706924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbzc2,Uid:03d6db92-84bd-442c-8aee-ce624ac6a17d,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56\"" Apr 16 23:56:36.375233 systemd-networkd[1501]: cali4a146f1c47b: Link UP Apr 16 23:56:36.375463 systemd-networkd[1501]: cali4a146f1c47b: Gained carrier Apr 16 23:56:36.388014 containerd[1632]: 2026-04-16 23:56:36.080 [ERROR][4349] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:36.388014 containerd[1632]: 2026-04-16 23:56:36.090 [INFO][4349] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0 calico-apiserver-564655bcc6- calico-system 788ceb96-f215-499c-9d50-ef5dc95ae426 815 0 2026-04-16 23:56:10 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564655bcc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 calico-apiserver-564655bcc6-zq72f eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4a146f1c47b [] [] }} ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-" Apr 16 23:56:36.388014 containerd[1632]: 2026-04-16 23:56:36.091 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.388014 containerd[1632]: 2026-04-16 23:56:36.124 [INFO][4385] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" HandleID="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.129 [INFO][4385] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" HandleID="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-391826f4f6", 
"pod":"calico-apiserver-564655bcc6-zq72f", "timestamp":"2026-04-16 23:56:36.124888682 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000373ce0)} Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.129 [INFO][4385] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.272 [INFO][4385] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.272 [INFO][4385] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.328 [INFO][4385] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.346 [INFO][4385] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.352 [INFO][4385] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.355 [INFO][4385] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388269 containerd[1632]: 2026-04-16 23:56:36.357 [INFO][4385] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.357 [INFO][4385] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 
handle="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.359 [INFO][4385] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.362 [INFO][4385] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.367 [INFO][4385] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.68/26] block=192.168.5.64/26 handle="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.367 [INFO][4385] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.68/26] handle="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.367 [INFO][4385] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:56:36.388420 containerd[1632]: 2026-04-16 23:56:36.367 [INFO][4385] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.68/26] IPv6=[] ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" HandleID="k8s-pod-network.8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.388526 containerd[1632]: 2026-04-16 23:56:36.370 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0", GenerateName:"calico-apiserver-564655bcc6-", Namespace:"calico-system", SelfLink:"", UID:"788ceb96-f215-499c-9d50-ef5dc95ae426", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564655bcc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"calico-apiserver-564655bcc6-zq72f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a146f1c47b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:36.388568 containerd[1632]: 2026-04-16 23:56:36.370 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.68/32] ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.388568 containerd[1632]: 2026-04-16 23:56:36.370 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a146f1c47b ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.388568 containerd[1632]: 2026-04-16 23:56:36.374 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.388619 containerd[1632]: 2026-04-16 23:56:36.375 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0", GenerateName:"calico-apiserver-564655bcc6-", Namespace:"calico-system", SelfLink:"", UID:"788ceb96-f215-499c-9d50-ef5dc95ae426", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564655bcc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e", Pod:"calico-apiserver-564655bcc6-zq72f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4a146f1c47b", MAC:"b2:e2:b7:3b:93:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:36.388657 containerd[1632]: 2026-04-16 23:56:36.383 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-zq72f" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--zq72f-eth0" Apr 16 23:56:36.407624 containerd[1632]: time="2026-04-16T23:56:36.407581639Z" level=info 
msg="connecting to shim 8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e" address="unix:///run/containerd/s/e19d41269a88d7dc3e81799d866ed1b719b259ecf7b421c5c194bb034f739dcc" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:36.429312 systemd[1]: Started cri-containerd-8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e.scope - libcontainer container 8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e. Apr 16 23:56:36.471072 containerd[1632]: time="2026-04-16T23:56:36.471032952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-zq72f,Uid:788ceb96-f215-499c-9d50-ef5dc95ae426,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e\"" Apr 16 23:56:36.995500 containerd[1632]: time="2026-04-16T23:56:36.995409809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lvjxk,Uid:0d809911-9317-452b-a955-9e9b28c4a3f0,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:37.141876 systemd-networkd[1501]: calib12a2a9d1a9: Link UP Apr 16 23:56:37.142533 systemd-networkd[1501]: calib12a2a9d1a9: Gained carrier Apr 16 23:56:37.157113 containerd[1632]: 2026-04-16 23:56:37.055 [ERROR][4573] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:37.157113 containerd[1632]: 2026-04-16 23:56:37.072 [INFO][4573] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0 coredns-674b8bbfcf- kube-system 0d809911-9317-452b-a955-9e9b28c4a3f0 813 0 2026-04-16 23:56:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-4459-2-4-n-391826f4f6 coredns-674b8bbfcf-lvjxk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib12a2a9d1a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-" Apr 16 23:56:37.157113 containerd[1632]: 2026-04-16 23:56:37.072 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.157113 containerd[1632]: 2026-04-16 23:56:37.110 [INFO][4584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" HandleID="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Workload="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.115 [INFO][4584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" HandleID="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Workload="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"coredns-674b8bbfcf-lvjxk", "timestamp":"2026-04-16 23:56:37.110115005 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000194840)} Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.115 [INFO][4584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.115 [INFO][4584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.115 [INFO][4584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.117 [INFO][4584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.120 [INFO][4584] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.123 [INFO][4584] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.125 [INFO][4584] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157651 containerd[1632]: 2026-04-16 23:56:37.127 [INFO][4584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157810 containerd[1632]: 2026-04-16 23:56:37.127 [INFO][4584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157810 containerd[1632]: 2026-04-16 23:56:37.128 [INFO][4584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7 Apr 16 23:56:37.157810 containerd[1632]: 
2026-04-16 23:56:37.132 [INFO][4584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157810 containerd[1632]: 2026-04-16 23:56:37.137 [INFO][4584] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.69/26] block=192.168.5.64/26 handle="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157810 containerd[1632]: 2026-04-16 23:56:37.137 [INFO][4584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.69/26] handle="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:37.157810 containerd[1632]: 2026-04-16 23:56:37.137 [INFO][4584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:56:37.157810 containerd[1632]: 2026-04-16 23:56:37.137 [INFO][4584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.69/26] IPv6=[] ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" HandleID="k8s-pod-network.2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Workload="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.157924 containerd[1632]: 2026-04-16 23:56:37.139 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", 
UID:"0d809911-9317-452b-a955-9e9b28c4a3f0", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"coredns-674b8bbfcf-lvjxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib12a2a9d1a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:37.157924 containerd[1632]: 2026-04-16 23:56:37.140 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.69/32] ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.157924 containerd[1632]: 2026-04-16 23:56:37.140 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calib12a2a9d1a9 ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.157924 containerd[1632]: 2026-04-16 23:56:37.142 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.157924 containerd[1632]: 2026-04-16 23:56:37.143 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0d809911-9317-452b-a955-9e9b28c4a3f0", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", 
ContainerID:"2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7", Pod:"coredns-674b8bbfcf-lvjxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib12a2a9d1a9", MAC:"6e:3a:60:6c:8b:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:37.157924 containerd[1632]: 2026-04-16 23:56:37.153 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" Namespace="kube-system" Pod="coredns-674b8bbfcf-lvjxk" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--lvjxk-eth0" Apr 16 23:56:37.176836 containerd[1632]: time="2026-04-16T23:56:37.176632331Z" level=info msg="connecting to shim 2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7" address="unix:///run/containerd/s/be39c112d9f07750a86d9a0539f390a2368068bb39e8c3c8a8695a9187c9bb07" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:37.203304 systemd[1]: Started cri-containerd-2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7.scope - libcontainer container 2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7. 
Apr 16 23:56:37.244996 containerd[1632]: time="2026-04-16T23:56:37.244955440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lvjxk,Uid:0d809911-9317-452b-a955-9e9b28c4a3f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7\"" Apr 16 23:56:37.250278 containerd[1632]: time="2026-04-16T23:56:37.250194171Z" level=info msg="CreateContainer within sandbox \"2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:56:37.264113 containerd[1632]: time="2026-04-16T23:56:37.263220685Z" level=info msg="Container 94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:37.272396 containerd[1632]: time="2026-04-16T23:56:37.272358882Z" level=info msg="CreateContainer within sandbox \"2c9503aa5c44181569812bbcf666ee5b3a75932b7bb8c28cde1832c397b72cd7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e\"" Apr 16 23:56:37.274788 containerd[1632]: time="2026-04-16T23:56:37.273696496Z" level=info msg="StartContainer for \"94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e\"" Apr 16 23:56:37.274788 containerd[1632]: time="2026-04-16T23:56:37.274630094Z" level=info msg="connecting to shim 94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e" address="unix:///run/containerd/s/be39c112d9f07750a86d9a0539f390a2368068bb39e8c3c8a8695a9187c9bb07" protocol=ttrpc version=3 Apr 16 23:56:37.294345 systemd[1]: Started cri-containerd-94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e.scope - libcontainer container 94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e. 
Apr 16 23:56:37.325046 containerd[1632]: time="2026-04-16T23:56:37.325011714Z" level=info msg="StartContainer for \"94648314c7a21e2d423bbd15ac50d09e8e9fbff86e2175600caf16c878aba12e\" returns successfully" Apr 16 23:56:37.359294 systemd-networkd[1501]: cali48e3e555c36: Gained IPv6LL Apr 16 23:56:37.995899 containerd[1632]: time="2026-04-16T23:56:37.995606737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wdtnt,Uid:5d8fc39f-6b99-42c2-8941-a9e5c4ae0512,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:38.127265 systemd-networkd[1501]: cali4a146f1c47b: Gained IPv6LL Apr 16 23:56:38.157022 systemd-networkd[1501]: cali843d885f492: Link UP Apr 16 23:56:38.158365 systemd-networkd[1501]: cali843d885f492: Gained carrier Apr 16 23:56:38.181185 kubelet[2787]: I0416 23:56:38.180666 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lvjxk" podStartSLOduration=37.180650299 podStartE2EDuration="37.180650299s" podCreationTimestamp="2026-04-16 23:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:38.179963296 +0000 UTC m=+43.279655427" watchObservedRunningTime="2026-04-16 23:56:38.180650299 +0000 UTC m=+43.280342440" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.046 [ERROR][4694] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.065 [INFO][4694] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0 goldmane-5b85766d88- calico-system 5d8fc39f-6b99-42c2-8941-a9e5c4ae0512 817 0 2026-04-16 23:56:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 goldmane-5b85766d88-wdtnt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali843d885f492 [] [] }} ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.065 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.112 [INFO][4706] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" HandleID="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Workload="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.117 [INFO][4706] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" HandleID="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Workload="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004feb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"goldmane-5b85766d88-wdtnt", "timestamp":"2026-04-16 23:56:38.112054626 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ea580)} Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.117 [INFO][4706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.117 [INFO][4706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.117 [INFO][4706] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.120 [INFO][4706] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.125 [INFO][4706] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.130 [INFO][4706] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.133 [INFO][4706] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.137 [INFO][4706] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.137 [INFO][4706] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.139 [INFO][4706] ipam/ipam.go 1806: 
Creating new handle: k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.144 [INFO][4706] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.149 [INFO][4706] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.70/26] block=192.168.5.64/26 handle="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.149 [INFO][4706] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.70/26] handle="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.149 [INFO][4706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:56:38.184668 containerd[1632]: 2026-04-16 23:56:38.149 [INFO][4706] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.70/26] IPv6=[] ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" HandleID="k8s-pod-network.1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Workload="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.186725 containerd[1632]: 2026-04-16 23:56:38.155 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5d8fc39f-6b99-42c2-8941-a9e5c4ae0512", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"goldmane-5b85766d88-wdtnt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali843d885f492", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:38.186725 containerd[1632]: 2026-04-16 23:56:38.155 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.70/32] ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.186725 containerd[1632]: 2026-04-16 23:56:38.155 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali843d885f492 ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.186725 containerd[1632]: 2026-04-16 23:56:38.159 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.186725 containerd[1632]: 2026-04-16 23:56:38.164 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", 
UID:"5d8fc39f-6b99-42c2-8941-a9e5c4ae0512", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd", Pod:"goldmane-5b85766d88-wdtnt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.5.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali843d885f492", MAC:"22:40:6b:9a:26:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:38.186725 containerd[1632]: 2026-04-16 23:56:38.177 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" Namespace="calico-system" Pod="goldmane-5b85766d88-wdtnt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-goldmane--5b85766d88--wdtnt-eth0" Apr 16 23:56:38.191266 systemd-networkd[1501]: cali567f2fc8c34: Gained IPv6LL Apr 16 23:56:38.222308 containerd[1632]: time="2026-04-16T23:56:38.222253165Z" level=info msg="connecting to shim 1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd" address="unix:///run/containerd/s/ae22d3f7aca16e3e591bb085743e25f27157ad5c63971d6d66582008cf65d5c2" namespace=k8s.io protocol=ttrpc version=3 Apr 16 
23:56:38.256668 systemd[1]: Started cri-containerd-1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd.scope - libcontainer container 1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd. Apr 16 23:56:38.301638 containerd[1632]: time="2026-04-16T23:56:38.301564424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wdtnt,Uid:5d8fc39f-6b99-42c2-8941-a9e5c4ae0512,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd\"" Apr 16 23:56:38.639464 systemd-networkd[1501]: calib12a2a9d1a9: Gained IPv6LL Apr 16 23:56:39.470437 systemd-networkd[1501]: cali843d885f492: Gained IPv6LL Apr 16 23:56:39.642175 containerd[1632]: time="2026-04-16T23:56:39.642123220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:39.643349 containerd[1632]: time="2026-04-16T23:56:39.643255258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 16 23:56:39.644285 containerd[1632]: time="2026-04-16T23:56:39.644266311Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:39.646183 containerd[1632]: time="2026-04-16T23:56:39.646124553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:39.646628 containerd[1632]: time="2026-04-16T23:56:39.646602240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.370210197s" Apr 16 23:56:39.646751 containerd[1632]: time="2026-04-16T23:56:39.646678634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 16 23:56:39.648533 containerd[1632]: time="2026-04-16T23:56:39.648506744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:56:39.660017 containerd[1632]: time="2026-04-16T23:56:39.657769836Z" level=info msg="CreateContainer within sandbox \"a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:56:39.676259 containerd[1632]: time="2026-04-16T23:56:39.676212174Z" level=info msg="Container 925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:39.682085 containerd[1632]: time="2026-04-16T23:56:39.682044350Z" level=info msg="CreateContainer within sandbox \"a1fff4d78d70a67b6eeda59781a5021fb0c5b3599d970f25030db87821030bee\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0\"" Apr 16 23:56:39.682575 containerd[1632]: time="2026-04-16T23:56:39.682561300Z" level=info msg="StartContainer for \"925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0\"" Apr 16 23:56:39.683543 containerd[1632]: time="2026-04-16T23:56:39.683506949Z" level=info msg="connecting to shim 925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0" address="unix:///run/containerd/s/5adc17723f27a91573ae973234b1814975309e68d8f5cfcccd45a65dd4e188a1" protocol=ttrpc version=3 Apr 16 23:56:39.701269 systemd[1]: Started 
cri-containerd-925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0.scope - libcontainer container 925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0. Apr 16 23:56:39.756244 containerd[1632]: time="2026-04-16T23:56:39.755784665Z" level=info msg="StartContainer for \"925a9c563f93eed6563b9eb774c06557ad72eccac444f5736cb770034c846dc0\" returns successfully" Apr 16 23:56:39.995010 containerd[1632]: time="2026-04-16T23:56:39.994971858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pclrw,Uid:6c71639d-8d3c-4690-91b5-1523b8296aba,Namespace:kube-system,Attempt:0,}" Apr 16 23:56:39.995332 containerd[1632]: time="2026-04-16T23:56:39.994971848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-rnsgt,Uid:b7d33848-0070-4073-8ca8-c3a5995310bd,Namespace:calico-system,Attempt:0,}" Apr 16 23:56:40.096756 systemd-networkd[1501]: cali2b41bc1a208: Link UP Apr 16 23:56:40.097865 systemd-networkd[1501]: cali2b41bc1a208: Gained carrier Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.027 [ERROR][4861] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.039 [INFO][4861] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0 calico-apiserver-564655bcc6- calico-system b7d33848-0070-4073-8ca8-c3a5995310bd 810 0 2026-04-16 23:56:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564655bcc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 calico-apiserver-564655bcc6-rnsgt eth0 
calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2b41bc1a208 [] [] }} ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.040 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.064 [INFO][4875] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" HandleID="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.069 [INFO][4875] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" HandleID="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"calico-apiserver-564655bcc6-rnsgt", "timestamp":"2026-04-16 23:56:40.064353262 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.069 [INFO][4875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.069 [INFO][4875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.069 [INFO][4875] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.071 [INFO][4875] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.074 [INFO][4875] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.077 [INFO][4875] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.079 [INFO][4875] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.080 [INFO][4875] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.080 [INFO][4875] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.081 [INFO][4875] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c Apr 16 23:56:40.111687 containerd[1632]: 
2026-04-16 23:56:40.085 [INFO][4875] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.090 [INFO][4875] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.5.71/26] block=192.168.5.64/26 handle="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.090 [INFO][4875] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.71/26] handle="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.090 [INFO][4875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:56:40.111687 containerd[1632]: 2026-04-16 23:56:40.090 [INFO][4875] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.71/26] IPv6=[] ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" HandleID="k8s-pod-network.83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Workload="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.112228 containerd[1632]: 2026-04-16 23:56:40.093 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0", GenerateName:"calico-apiserver-564655bcc6-", 
Namespace:"calico-system", SelfLink:"", UID:"b7d33848-0070-4073-8ca8-c3a5995310bd", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564655bcc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"calico-apiserver-564655bcc6-rnsgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2b41bc1a208", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:40.112228 containerd[1632]: 2026-04-16 23:56:40.093 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.71/32] ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.112228 containerd[1632]: 2026-04-16 23:56:40.093 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b41bc1a208 ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" 
WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.112228 containerd[1632]: 2026-04-16 23:56:40.098 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.112228 containerd[1632]: 2026-04-16 23:56:40.099 [INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0", GenerateName:"calico-apiserver-564655bcc6-", Namespace:"calico-system", SelfLink:"", UID:"b7d33848-0070-4073-8ca8-c3a5995310bd", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564655bcc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", 
ContainerID:"83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c", Pod:"calico-apiserver-564655bcc6-rnsgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2b41bc1a208", MAC:"d6:fc:1e:bc:62:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:40.112228 containerd[1632]: 2026-04-16 23:56:40.108 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" Namespace="calico-system" Pod="calico-apiserver-564655bcc6-rnsgt" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-calico--apiserver--564655bcc6--rnsgt-eth0" Apr 16 23:56:40.131668 containerd[1632]: time="2026-04-16T23:56:40.131592022Z" level=info msg="connecting to shim 83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c" address="unix:///run/containerd/s/6d4836c10c4f7036fa1b162aff3d5db3f227c6252dd937201ad070d6526e100e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:40.154342 systemd[1]: Started cri-containerd-83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c.scope - libcontainer container 83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c. 
Apr 16 23:56:40.208019 kubelet[2787]: I0416 23:56:40.206042 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57bd645688-ksxcn" podStartSLOduration=25.834667124 podStartE2EDuration="29.206027817s" podCreationTimestamp="2026-04-16 23:56:11 +0000 UTC" firstStartedPulling="2026-04-16 23:56:36.275944424 +0000 UTC m=+41.375636565" lastFinishedPulling="2026-04-16 23:56:39.647305117 +0000 UTC m=+44.746997258" observedRunningTime="2026-04-16 23:56:40.20576801 +0000 UTC m=+45.305460141" watchObservedRunningTime="2026-04-16 23:56:40.206027817 +0000 UTC m=+45.305719958" Apr 16 23:56:40.240646 systemd-networkd[1501]: cali900d65cadc2: Link UP Apr 16 23:56:40.241115 systemd-networkd[1501]: cali900d65cadc2: Gained carrier Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.030 [ERROR][4849] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.040 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0 coredns-674b8bbfcf- kube-system 6c71639d-8d3c-4690-91b5-1523b8296aba 806 0 2026-04-16 23:56:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-391826f4f6 coredns-674b8bbfcf-pclrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali900d65cadc2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-" Apr 
16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.040 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.064 [INFO][4877] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" HandleID="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Workload="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.071 [INFO][4877] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" HandleID="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Workload="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f74e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-391826f4f6", "pod":"coredns-674b8bbfcf-pclrw", "timestamp":"2026-04-16 23:56:40.064765136 +0000 UTC"}, Hostname:"ci-4459-2-4-n-391826f4f6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027cf20)} Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.071 [INFO][4877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.090 [INFO][4877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.091 [INFO][4877] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-391826f4f6' Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.178 [INFO][4877] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.199 [INFO][4877] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.212 [INFO][4877] ipam/ipam.go 526: Trying affinity for 192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.215 [INFO][4877] ipam/ipam.go 160: Attempting to load block cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.217 [INFO][4877] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.217 [INFO][4877] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.223 [INFO][4877] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480 Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.227 [INFO][4877] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.234 [INFO][4877] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.5.72/26] block=192.168.5.64/26 handle="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.234 [INFO][4877] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.5.72/26] handle="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" host="ci-4459-2-4-n-391826f4f6" Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.234 [INFO][4877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:56:40.257375 containerd[1632]: 2026-04-16 23:56:40.234 [INFO][4877] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.5.72/26] IPv6=[] ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" HandleID="k8s-pod-network.7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Workload="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.257831 containerd[1632]: 2026-04-16 23:56:40.237 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6c71639d-8d3c-4690-91b5-1523b8296aba", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"", Pod:"coredns-674b8bbfcf-pclrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali900d65cadc2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:40.257831 containerd[1632]: 2026-04-16 23:56:40.237 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.5.72/32] ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.257831 containerd[1632]: 2026-04-16 23:56:40.237 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali900d65cadc2 ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.257831 containerd[1632]: 2026-04-16 23:56:40.240 [INFO][4849] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.257831 containerd[1632]: 2026-04-16 23:56:40.240 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6c71639d-8d3c-4690-91b5-1523b8296aba", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-391826f4f6", ContainerID:"7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480", Pod:"coredns-674b8bbfcf-pclrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali900d65cadc2", 
MAC:"8a:a7:59:5c:73:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:56:40.257831 containerd[1632]: 2026-04-16 23:56:40.251 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" Namespace="kube-system" Pod="coredns-674b8bbfcf-pclrw" WorkloadEndpoint="ci--4459--2--4--n--391826f4f6-k8s-coredns--674b8bbfcf--pclrw-eth0" Apr 16 23:56:40.274403 containerd[1632]: time="2026-04-16T23:56:40.274285485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564655bcc6-rnsgt,Uid:b7d33848-0070-4073-8ca8-c3a5995310bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c\"" Apr 16 23:56:40.285347 containerd[1632]: time="2026-04-16T23:56:40.285304218Z" level=info msg="connecting to shim 7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480" address="unix:///run/containerd/s/f6137e59dddeaa7cc7aede26eca57b37982b44db1dc09e7564cbed995f960be6" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:56:40.314372 systemd[1]: Started cri-containerd-7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480.scope - libcontainer container 7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480. 
Apr 16 23:56:40.357636 containerd[1632]: time="2026-04-16T23:56:40.357304143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pclrw,Uid:6c71639d-8d3c-4690-91b5-1523b8296aba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480\"" Apr 16 23:56:40.362554 containerd[1632]: time="2026-04-16T23:56:40.362536531Z" level=info msg="CreateContainer within sandbox \"7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:56:40.369087 containerd[1632]: time="2026-04-16T23:56:40.368751734Z" level=info msg="Container 57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:40.374051 containerd[1632]: time="2026-04-16T23:56:40.374023460Z" level=info msg="CreateContainer within sandbox \"7f263383d620f90ab7b10a28edb6b7b3ee2602ceab63ee9e0e531b0540b7b480\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c\"" Apr 16 23:56:40.374375 containerd[1632]: time="2026-04-16T23:56:40.374348394Z" level=info msg="StartContainer for \"57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c\"" Apr 16 23:56:40.375065 containerd[1632]: time="2026-04-16T23:56:40.375046705Z" level=info msg="connecting to shim 57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c" address="unix:///run/containerd/s/f6137e59dddeaa7cc7aede26eca57b37982b44db1dc09e7564cbed995f960be6" protocol=ttrpc version=3 Apr 16 23:56:40.389236 systemd[1]: Started cri-containerd-57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c.scope - libcontainer container 57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c. 
Apr 16 23:56:40.416948 containerd[1632]: time="2026-04-16T23:56:40.416922214Z" level=info msg="StartContainer for \"57f1fa801d6ce037ce150744912f3565011a04c324e18c0bbc704aa2c95cc79c\" returns successfully" Apr 16 23:56:41.095344 kubelet[2787]: I0416 23:56:41.095214 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:56:41.187956 kubelet[2787]: I0416 23:56:41.187880 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pclrw" podStartSLOduration=40.187862618 podStartE2EDuration="40.187862618s" podCreationTimestamp="2026-04-16 23:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:56:41.187281853 +0000 UTC m=+46.286973994" watchObservedRunningTime="2026-04-16 23:56:41.187862618 +0000 UTC m=+46.287554779" Apr 16 23:56:41.368437 containerd[1632]: time="2026-04-16T23:56:41.368287621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:41.369605 containerd[1632]: time="2026-04-16T23:56:41.369586117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 23:56:41.371148 containerd[1632]: time="2026-04-16T23:56:41.370599257Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:41.372703 containerd[1632]: time="2026-04-16T23:56:41.372688493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:41.373618 containerd[1632]: time="2026-04-16T23:56:41.373604278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with 
image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.725044053s" Apr 16 23:56:41.373692 containerd[1632]: time="2026-04-16T23:56:41.373682415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 23:56:41.376024 containerd[1632]: time="2026-04-16T23:56:41.376008282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:56:41.379371 containerd[1632]: time="2026-04-16T23:56:41.379354722Z" level=info msg="CreateContainer within sandbox \"eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:56:41.396465 containerd[1632]: time="2026-04-16T23:56:41.396436219Z" level=info msg="Container f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:41.414201 containerd[1632]: time="2026-04-16T23:56:41.414169828Z" level=info msg="CreateContainer within sandbox \"eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2\"" Apr 16 23:56:41.417717 containerd[1632]: time="2026-04-16T23:56:41.417699181Z" level=info msg="StartContainer for \"f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2\"" Apr 16 23:56:41.420360 containerd[1632]: time="2026-04-16T23:56:41.420322481Z" level=info msg="connecting to shim f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2" address="unix:///run/containerd/s/b3bc0363cfceea69722b852b61e3891bbf7d49de866b255ed26adcdf768572aa" protocol=ttrpc version=3 Apr 16 
23:56:41.455686 systemd[1]: Started cri-containerd-f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2.scope - libcontainer container f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2. Apr 16 23:56:41.532465 containerd[1632]: time="2026-04-16T23:56:41.532432010Z" level=info msg="StartContainer for \"f8aeb4b3e58175e130c0f51cfcc980289e0aef84a2c2a2aac1e5cadf7ffb1be2\" returns successfully" Apr 16 23:56:41.832558 systemd-networkd[1501]: vxlan.calico: Link UP Apr 16 23:56:41.832569 systemd-networkd[1501]: vxlan.calico: Gained carrier Apr 16 23:56:41.840250 systemd-networkd[1501]: cali2b41bc1a208: Gained IPv6LL Apr 16 23:56:42.160252 systemd-networkd[1501]: cali900d65cadc2: Gained IPv6LL Apr 16 23:56:43.758695 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL Apr 16 23:56:45.518918 containerd[1632]: time="2026-04-16T23:56:45.518872972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:45.519815 containerd[1632]: time="2026-04-16T23:56:45.519728429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 23:56:45.520855 containerd[1632]: time="2026-04-16T23:56:45.520830415Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:45.522762 containerd[1632]: time="2026-04-16T23:56:45.522737631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:45.523237 containerd[1632]: time="2026-04-16T23:56:45.523219872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", 
repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.147112602s" Apr 16 23:56:45.523298 containerd[1632]: time="2026-04-16T23:56:45.523288774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 23:56:45.524483 containerd[1632]: time="2026-04-16T23:56:45.524462968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:56:45.526629 containerd[1632]: time="2026-04-16T23:56:45.526602842Z" level=info msg="CreateContainer within sandbox \"8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:56:45.535427 containerd[1632]: time="2026-04-16T23:56:45.535264263Z" level=info msg="Container ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:45.546626 containerd[1632]: time="2026-04-16T23:56:45.546600687Z" level=info msg="CreateContainer within sandbox \"8e2a15551559008168e7699c724e9b066f9050fd61ed7eee15f9395be0892b8e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e\"" Apr 16 23:56:45.547157 containerd[1632]: time="2026-04-16T23:56:45.547116978Z" level=info msg="StartContainer for \"ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e\"" Apr 16 23:56:45.547947 containerd[1632]: time="2026-04-16T23:56:45.547928109Z" level=info msg="connecting to shim ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e" address="unix:///run/containerd/s/e19d41269a88d7dc3e81799d866ed1b719b259ecf7b421c5c194bb034f739dcc" protocol=ttrpc version=3 Apr 16 23:56:45.570234 systemd[1]: Started 
cri-containerd-ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e.scope - libcontainer container ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e. Apr 16 23:56:45.612316 containerd[1632]: time="2026-04-16T23:56:45.612276997Z" level=info msg="StartContainer for \"ac33e67ac0a04d005cc824e1ce599b63d19e6f65fbc168fffdd9848336ce176e\" returns successfully" Apr 16 23:56:47.203955 kubelet[2787]: I0416 23:56:47.203918 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:56:47.557005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495190956.mount: Deactivated successfully. Apr 16 23:56:47.868173 containerd[1632]: time="2026-04-16T23:56:47.867903095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:47.869747 containerd[1632]: time="2026-04-16T23:56:47.869613490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 23:56:47.870659 containerd[1632]: time="2026-04-16T23:56:47.870637750Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:47.872686 containerd[1632]: time="2026-04-16T23:56:47.872658779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:47.873194 containerd[1632]: time="2026-04-16T23:56:47.873177023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.348610591s" Apr 16 23:56:47.873312 containerd[1632]: time="2026-04-16T23:56:47.873242781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 23:56:47.874018 containerd[1632]: time="2026-04-16T23:56:47.873995055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:56:47.876902 containerd[1632]: time="2026-04-16T23:56:47.876854076Z" level=info msg="CreateContainer within sandbox \"1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:56:47.884997 containerd[1632]: time="2026-04-16T23:56:47.883312773Z" level=info msg="Container 29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:47.904848 containerd[1632]: time="2026-04-16T23:56:47.904816083Z" level=info msg="CreateContainer within sandbox \"1f92e73827f3c0f064c7d95de17dd129fb98a337cb74a40a0f562daacc372bdd\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85\"" Apr 16 23:56:47.906248 containerd[1632]: time="2026-04-16T23:56:47.906214702Z" level=info msg="StartContainer for \"29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85\"" Apr 16 23:56:47.907268 containerd[1632]: time="2026-04-16T23:56:47.907248807Z" level=info msg="connecting to shim 29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85" address="unix:///run/containerd/s/ae22d3f7aca16e3e591bb085743e25f27157ad5c63971d6d66582008cf65d5c2" protocol=ttrpc version=3 Apr 16 23:56:47.929236 systemd[1]: Started cri-containerd-29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85.scope - libcontainer container 
29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85. Apr 16 23:56:47.970156 containerd[1632]: time="2026-04-16T23:56:47.970103977Z" level=info msg="StartContainer for \"29c857f9581dd39cebcd6a73f9063e4bbdbfc63122610135245804c631b4ae85\" returns successfully" Apr 16 23:56:48.230673 kubelet[2787]: I0416 23:56:48.228952 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-564655bcc6-zq72f" podStartSLOduration=29.177644942 podStartE2EDuration="38.228932074s" podCreationTimestamp="2026-04-16 23:56:10 +0000 UTC" firstStartedPulling="2026-04-16 23:56:36.472635949 +0000 UTC m=+41.572328090" lastFinishedPulling="2026-04-16 23:56:45.523923091 +0000 UTC m=+50.623615222" observedRunningTime="2026-04-16 23:56:46.20971828 +0000 UTC m=+51.309410431" watchObservedRunningTime="2026-04-16 23:56:48.228932074 +0000 UTC m=+53.328624245" Apr 16 23:56:48.346497 containerd[1632]: time="2026-04-16T23:56:48.346431110Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:56:48.347529 containerd[1632]: time="2026-04-16T23:56:48.347485396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 23:56:48.349833 containerd[1632]: time="2026-04-16T23:56:48.349795437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 475.779922ms" Apr 16 23:56:48.349833 containerd[1632]: time="2026-04-16T23:56:48.349830539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 23:56:48.352416 containerd[1632]: time="2026-04-16T23:56:48.352388912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:56:48.355845 containerd[1632]: time="2026-04-16T23:56:48.355315865Z" level=info msg="CreateContainer within sandbox \"83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:56:48.365388 containerd[1632]: time="2026-04-16T23:56:48.365355184Z" level=info msg="Container 220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:56:48.385206 containerd[1632]: time="2026-04-16T23:56:48.385163300Z" level=info msg="CreateContainer within sandbox \"83f90d70136fc88741d3c740c4360d9e89784cd75d882ccf9d52da717de5491c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563\"" Apr 16 23:56:48.385636 containerd[1632]: time="2026-04-16T23:56:48.385591770Z" level=info msg="StartContainer for \"220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563\"" Apr 16 23:56:48.386929 containerd[1632]: time="2026-04-16T23:56:48.386846065Z" level=info msg="connecting to shim 220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563" address="unix:///run/containerd/s/6d4836c10c4f7036fa1b162aff3d5db3f227c6252dd937201ad070d6526e100e" protocol=ttrpc version=3 Apr 16 23:56:48.410337 systemd[1]: Started cri-containerd-220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563.scope - libcontainer container 220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563. 
Apr 16 23:56:48.461842 containerd[1632]: time="2026-04-16T23:56:48.461768377Z" level=info msg="StartContainer for \"220d8f670fd25a4358c7cc789bf475d8d16cb6543372aed15210d14990060563\" returns successfully"
Apr 16 23:56:49.237850 kubelet[2787]: I0416 23:56:49.236362 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-564655bcc6-rnsgt" podStartSLOduration=31.167770324 podStartE2EDuration="39.236339154s" podCreationTimestamp="2026-04-16 23:56:10 +0000 UTC" firstStartedPulling="2026-04-16 23:56:40.282214162 +0000 UTC m=+45.381906293" lastFinishedPulling="2026-04-16 23:56:48.350782962 +0000 UTC m=+53.450475123" observedRunningTime="2026-04-16 23:56:49.233675613 +0000 UTC m=+54.333367744" watchObservedRunningTime="2026-04-16 23:56:49.236339154 +0000 UTC m=+54.336031325"
Apr 16 23:56:49.237850 kubelet[2787]: I0416 23:56:49.236698 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-wdtnt" podStartSLOduration=29.665860135 podStartE2EDuration="39.236690319s" podCreationTimestamp="2026-04-16 23:56:10 +0000 UTC" firstStartedPulling="2026-04-16 23:56:38.303092193 +0000 UTC m=+43.402784334" lastFinishedPulling="2026-04-16 23:56:47.873922387 +0000 UTC m=+52.973614518" observedRunningTime="2026-04-16 23:56:48.231408824 +0000 UTC m=+53.331101005" watchObservedRunningTime="2026-04-16 23:56:49.236690319 +0000 UTC m=+54.336382490"
Apr 16 23:56:50.017605 containerd[1632]: time="2026-04-16T23:56:50.017552140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:50.018775 containerd[1632]: time="2026-04-16T23:56:50.018748338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 16 23:56:50.019946 containerd[1632]: time="2026-04-16T23:56:50.019906009Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:50.021632 containerd[1632]: time="2026-04-16T23:56:50.021601503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:56:50.022282 containerd[1632]: time="2026-04-16T23:56:50.021979447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.669470006s"
Apr 16 23:56:50.022282 containerd[1632]: time="2026-04-16T23:56:50.022013599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 16 23:56:50.026473 containerd[1632]: time="2026-04-16T23:56:50.026408278Z" level=info msg="CreateContainer within sandbox \"eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 16 23:56:50.035199 containerd[1632]: time="2026-04-16T23:56:50.034554144Z" level=info msg="Container 628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:56:50.041743 containerd[1632]: time="2026-04-16T23:56:50.041712034Z" level=info msg="CreateContainer within sandbox \"eb16a3ffdcb019654fcfcced0b25a7ea5c4a95a477649e02b3b85bec3bdace56\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061\""
Apr 16 23:56:50.042151 containerd[1632]: time="2026-04-16T23:56:50.042119693Z" level=info msg="StartContainer for \"628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061\""
Apr 16 23:56:50.043180 containerd[1632]: time="2026-04-16T23:56:50.043104235Z" level=info msg="connecting to shim 628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061" address="unix:///run/containerd/s/b3bc0363cfceea69722b852b61e3891bbf7d49de866b255ed26adcdf768572aa" protocol=ttrpc version=3
Apr 16 23:56:50.062229 systemd[1]: Started cri-containerd-628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061.scope - libcontainer container 628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061.
Apr 16 23:56:50.132103 containerd[1632]: time="2026-04-16T23:56:50.132038042Z" level=info msg="StartContainer for \"628e85d129debbb201688c7fadf3db719dbe3d77b28e9b50ca150f1b8eb60061\" returns successfully"
Apr 16 23:56:50.218043 kubelet[2787]: I0416 23:56:50.217962 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:56:50.228774 kubelet[2787]: I0416 23:56:50.228090 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nbzc2" podStartSLOduration=25.560800975 podStartE2EDuration="39.228071181s" podCreationTimestamp="2026-04-16 23:56:11 +0000 UTC" firstStartedPulling="2026-04-16 23:56:36.355294066 +0000 UTC m=+41.454986207" lastFinishedPulling="2026-04-16 23:56:50.022564272 +0000 UTC m=+55.122256413" observedRunningTime="2026-04-16 23:56:50.226952789 +0000 UTC m=+55.326644960" watchObservedRunningTime="2026-04-16 23:56:50.228071181 +0000 UTC m=+55.327763352"
Apr 16 23:56:51.083007 kubelet[2787]: I0416 23:56:51.082914 2787 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 16 23:56:51.085230 kubelet[2787]: I0416 23:56:51.085193 2787 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 16 23:57:05.123751 kubelet[2787]: I0416 23:57:05.123556 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:57:12.692260 kubelet[2787]: I0416 23:57:12.691934 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:57:29.606554 systemd[1]: Started sshd@10-77.42.25.117:22-4.175.71.9:42612.service - OpenSSH per-connection server daemon (4.175.71.9:42612).
Apr 16 23:57:29.838895 sshd[5708]: Accepted publickey for core from 4.175.71.9 port 42612 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:29.841973 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:29.851241 systemd-logind[1616]: New session 8 of user core.
Apr 16 23:57:29.858348 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 16 23:57:30.072241 sshd[5711]: Connection closed by 4.175.71.9 port 42612
Apr 16 23:57:30.072775 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:30.080183 systemd-logind[1616]: Session 8 logged out. Waiting for processes to exit.
Apr 16 23:57:30.081592 systemd[1]: sshd@10-77.42.25.117:22-4.175.71.9:42612.service: Deactivated successfully.
Apr 16 23:57:30.086068 systemd[1]: session-8.scope: Deactivated successfully.
Apr 16 23:57:30.090186 systemd-logind[1616]: Removed session 8.
Apr 16 23:57:35.120786 systemd[1]: Started sshd@11-77.42.25.117:22-4.175.71.9:42616.service - OpenSSH per-connection server daemon (4.175.71.9:42616).
Apr 16 23:57:35.340939 sshd[5726]: Accepted publickey for core from 4.175.71.9 port 42616 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:35.344007 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:35.355229 systemd-logind[1616]: New session 9 of user core.
Apr 16 23:57:35.360374 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 16 23:57:35.505413 sshd[5729]: Connection closed by 4.175.71.9 port 42616
Apr 16 23:57:35.506372 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:35.510308 systemd[1]: sshd@11-77.42.25.117:22-4.175.71.9:42616.service: Deactivated successfully.
Apr 16 23:57:35.512033 systemd[1]: session-9.scope: Deactivated successfully.
Apr 16 23:57:35.512662 systemd-logind[1616]: Session 9 logged out. Waiting for processes to exit.
Apr 16 23:57:35.514199 systemd-logind[1616]: Removed session 9.
Apr 16 23:57:40.558752 systemd[1]: Started sshd@12-77.42.25.117:22-4.175.71.9:36900.service - OpenSSH per-connection server daemon (4.175.71.9:36900).
Apr 16 23:57:40.762893 sshd[5768]: Accepted publickey for core from 4.175.71.9 port 36900 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:40.765677 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:40.775796 systemd-logind[1616]: New session 10 of user core.
Apr 16 23:57:40.782411 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 16 23:57:40.929781 sshd[5771]: Connection closed by 4.175.71.9 port 36900
Apr 16 23:57:40.930989 sshd-session[5768]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:40.935688 systemd[1]: sshd@12-77.42.25.117:22-4.175.71.9:36900.service: Deactivated successfully.
Apr 16 23:57:40.938091 systemd[1]: session-10.scope: Deactivated successfully.
Apr 16 23:57:40.939224 systemd-logind[1616]: Session 10 logged out. Waiting for processes to exit.
Apr 16 23:57:40.941023 systemd-logind[1616]: Removed session 10.
Apr 16 23:57:45.972322 systemd[1]: Started sshd@13-77.42.25.117:22-4.175.71.9:53602.service - OpenSSH per-connection server daemon (4.175.71.9:53602).
Apr 16 23:57:46.167330 sshd[5801]: Accepted publickey for core from 4.175.71.9 port 53602 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:46.171733 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:46.185593 systemd-logind[1616]: New session 11 of user core.
Apr 16 23:57:46.193422 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 16 23:57:46.351305 sshd[5804]: Connection closed by 4.175.71.9 port 53602
Apr 16 23:57:46.352957 sshd-session[5801]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:46.357001 systemd[1]: sshd@13-77.42.25.117:22-4.175.71.9:53602.service: Deactivated successfully.
Apr 16 23:57:46.359110 systemd[1]: session-11.scope: Deactivated successfully.
Apr 16 23:57:46.360259 systemd-logind[1616]: Session 11 logged out. Waiting for processes to exit.
Apr 16 23:57:46.361646 systemd-logind[1616]: Removed session 11.
Apr 16 23:57:46.391099 systemd[1]: Started sshd@14-77.42.25.117:22-4.175.71.9:53606.service - OpenSSH per-connection server daemon (4.175.71.9:53606).
Apr 16 23:57:46.578193 sshd[5817]: Accepted publickey for core from 4.175.71.9 port 53606 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:46.580296 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:46.590478 systemd-logind[1616]: New session 12 of user core.
Apr 16 23:57:46.597379 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 16 23:57:46.791894 sshd[5820]: Connection closed by 4.175.71.9 port 53606
Apr 16 23:57:46.792447 sshd-session[5817]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:46.795857 systemd-logind[1616]: Session 12 logged out. Waiting for processes to exit.
Apr 16 23:57:46.796149 systemd[1]: sshd@14-77.42.25.117:22-4.175.71.9:53606.service: Deactivated successfully.
Apr 16 23:57:46.798018 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 23:57:46.799685 systemd-logind[1616]: Removed session 12.
Apr 16 23:57:46.831887 systemd[1]: Started sshd@15-77.42.25.117:22-4.175.71.9:53610.service - OpenSSH per-connection server daemon (4.175.71.9:53610).
Apr 16 23:57:47.020918 sshd[5830]: Accepted publickey for core from 4.175.71.9 port 53610 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:47.023405 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:47.030120 systemd-logind[1616]: New session 13 of user core.
Apr 16 23:57:47.036363 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 23:57:47.245258 sshd[5833]: Connection closed by 4.175.71.9 port 53610
Apr 16 23:57:47.247454 sshd-session[5830]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:47.250880 systemd[1]: sshd@15-77.42.25.117:22-4.175.71.9:53610.service: Deactivated successfully.
Apr 16 23:57:47.253737 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 23:57:47.255461 systemd-logind[1616]: Session 13 logged out. Waiting for processes to exit.
Apr 16 23:57:47.260189 systemd-logind[1616]: Removed session 13.
Apr 16 23:57:50.301587 systemd[1]: sshd@7-77.42.25.117:22-184.181.217.198:40182.service: Deactivated successfully.
Apr 16 23:57:52.294826 systemd[1]: Started sshd@16-77.42.25.117:22-4.175.71.9:53618.service - OpenSSH per-connection server daemon (4.175.71.9:53618).
Apr 16 23:57:52.511862 sshd[5870]: Accepted publickey for core from 4.175.71.9 port 53618 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:52.514865 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:52.525753 systemd-logind[1616]: New session 14 of user core.
Apr 16 23:57:52.531360 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 23:57:52.704323 sshd[5873]: Connection closed by 4.175.71.9 port 53618
Apr 16 23:57:52.705319 sshd-session[5870]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:52.712372 systemd[1]: sshd@16-77.42.25.117:22-4.175.71.9:53618.service: Deactivated successfully.
Apr 16 23:57:52.716771 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 23:57:52.718411 systemd-logind[1616]: Session 14 logged out. Waiting for processes to exit.
Apr 16 23:57:52.722035 systemd-logind[1616]: Removed session 14.
Apr 16 23:57:52.749047 systemd[1]: Started sshd@17-77.42.25.117:22-4.175.71.9:53622.service - OpenSSH per-connection server daemon (4.175.71.9:53622).
Apr 16 23:57:52.961225 sshd[5884]: Accepted publickey for core from 4.175.71.9 port 53622 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:52.963285 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:52.972281 systemd-logind[1616]: New session 15 of user core.
Apr 16 23:57:52.977349 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 23:57:53.370606 sshd[5887]: Connection closed by 4.175.71.9 port 53622
Apr 16 23:57:53.372413 sshd-session[5884]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:53.378211 systemd-logind[1616]: Session 15 logged out. Waiting for processes to exit.
Apr 16 23:57:53.379008 systemd[1]: sshd@17-77.42.25.117:22-4.175.71.9:53622.service: Deactivated successfully.
Apr 16 23:57:53.382274 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 23:57:53.386254 systemd-logind[1616]: Removed session 15.
Apr 16 23:57:53.412320 systemd[1]: Started sshd@18-77.42.25.117:22-4.175.71.9:53624.service - OpenSSH per-connection server daemon (4.175.71.9:53624).
Apr 16 23:57:53.615313 sshd[5897]: Accepted publickey for core from 4.175.71.9 port 53624 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:53.618178 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:53.627277 systemd-logind[1616]: New session 16 of user core.
Apr 16 23:57:53.634447 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 23:57:54.375152 sshd[5900]: Connection closed by 4.175.71.9 port 53624
Apr 16 23:57:54.376664 sshd-session[5897]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:54.380584 systemd-logind[1616]: Session 16 logged out. Waiting for processes to exit.
Apr 16 23:57:54.382221 systemd[1]: sshd@18-77.42.25.117:22-4.175.71.9:53624.service: Deactivated successfully.
Apr 16 23:57:54.384691 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 23:57:54.387707 systemd-logind[1616]: Removed session 16.
Apr 16 23:57:54.414363 systemd[1]: Started sshd@19-77.42.25.117:22-4.175.71.9:53630.service - OpenSSH per-connection server daemon (4.175.71.9:53630).
Apr 16 23:57:54.604203 sshd[5925]: Accepted publickey for core from 4.175.71.9 port 53630 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:54.606569 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:54.615246 systemd-logind[1616]: New session 17 of user core.
Apr 16 23:57:54.625357 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 23:57:54.904013 sshd[5928]: Connection closed by 4.175.71.9 port 53630
Apr 16 23:57:54.905505 sshd-session[5925]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:54.909468 systemd-logind[1616]: Session 17 logged out. Waiting for processes to exit.
Apr 16 23:57:54.909870 systemd[1]: sshd@19-77.42.25.117:22-4.175.71.9:53630.service: Deactivated successfully.
Apr 16 23:57:54.911640 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 23:57:54.912987 systemd-logind[1616]: Removed session 17.
Apr 16 23:57:54.941771 systemd[1]: Started sshd@20-77.42.25.117:22-4.175.71.9:53640.service - OpenSSH per-connection server daemon (4.175.71.9:53640).
Apr 16 23:57:55.132770 sshd[5938]: Accepted publickey for core from 4.175.71.9 port 53640 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:57:55.135396 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:57:55.143996 systemd-logind[1616]: New session 18 of user core.
Apr 16 23:57:55.152371 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 23:57:55.345916 sshd[5943]: Connection closed by 4.175.71.9 port 53640
Apr 16 23:57:55.347069 sshd-session[5938]: pam_unix(sshd:session): session closed for user core
Apr 16 23:57:55.354634 systemd[1]: sshd@20-77.42.25.117:22-4.175.71.9:53640.service: Deactivated successfully.
Apr 16 23:57:55.359360 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 23:57:55.361580 systemd-logind[1616]: Session 18 logged out. Waiting for processes to exit.
Apr 16 23:57:55.365054 systemd-logind[1616]: Removed session 18.
Apr 16 23:58:00.398707 systemd[1]: Started sshd@21-77.42.25.117:22-4.175.71.9:52694.service - OpenSSH per-connection server daemon (4.175.71.9:52694).
Apr 16 23:58:00.604341 sshd[5980]: Accepted publickey for core from 4.175.71.9 port 52694 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:58:00.607492 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:58:00.616045 systemd-logind[1616]: New session 19 of user core.
Apr 16 23:58:00.624348 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 23:58:00.790419 sshd[5983]: Connection closed by 4.175.71.9 port 52694
Apr 16 23:58:00.791491 sshd-session[5980]: pam_unix(sshd:session): session closed for user core
Apr 16 23:58:00.795761 systemd[1]: sshd@21-77.42.25.117:22-4.175.71.9:52694.service: Deactivated successfully.
Apr 16 23:58:00.799393 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 23:58:00.800448 systemd-logind[1616]: Session 19 logged out. Waiting for processes to exit.
Apr 16 23:58:00.801803 systemd-logind[1616]: Removed session 19.
Apr 16 23:58:05.837097 systemd[1]: Started sshd@22-77.42.25.117:22-4.175.71.9:41706.service - OpenSSH per-connection server daemon (4.175.71.9:41706).
Apr 16 23:58:06.049438 sshd[6013]: Accepted publickey for core from 4.175.71.9 port 41706 ssh2: RSA SHA256:s5+cDtbQjwWFdMS63Oi2OpDWd90LKgkj0MOmWTIERLg
Apr 16 23:58:06.051550 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:58:06.058657 systemd-logind[1616]: New session 20 of user core.
Apr 16 23:58:06.066452 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 23:58:06.227775 sshd[6016]: Connection closed by 4.175.71.9 port 41706
Apr 16 23:58:06.229448 sshd-session[6013]: pam_unix(sshd:session): session closed for user core
Apr 16 23:58:06.234537 systemd-logind[1616]: Session 20 logged out. Waiting for processes to exit.
Apr 16 23:58:06.235432 systemd[1]: sshd@22-77.42.25.117:22-4.175.71.9:41706.service: Deactivated successfully.
Apr 16 23:58:06.238088 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 23:58:06.241342 systemd-logind[1616]: Removed session 20.