Oct 13 05:47:08.756345 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025 Oct 13 05:47:08.756367 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039 Oct 13 05:47:08.756375 kernel: BIOS-provided physical RAM map: Oct 13 05:47:08.756382 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 13 05:47:08.756388 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 13 05:47:08.756393 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 13 05:47:08.756402 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Oct 13 05:47:08.756408 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Oct 13 05:47:08.756414 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 13 05:47:08.756421 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 13 05:47:08.756427 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 05:47:08.756433 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 13 05:47:08.756439 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 05:47:08.756444 kernel: NX (Execute Disable) protection: active Oct 13 05:47:08.756453 kernel: APIC: Static calls initialized Oct 13 05:47:08.756459 kernel: SMBIOS 3.0.0 present. 
Oct 13 05:47:08.756466 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Oct 13 05:47:08.756472 kernel: DMI: Memory slots populated: 1/1 Oct 13 05:47:08.756479 kernel: Hypervisor detected: KVM Oct 13 05:47:08.756486 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 05:47:08.756492 kernel: kvm-clock: using sched offset of 3980195483 cycles Oct 13 05:47:08.756499 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 05:47:08.756509 kernel: tsc: Detected 2400.000 MHz processor Oct 13 05:47:08.756516 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 05:47:08.756523 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 05:47:08.756527 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Oct 13 05:47:08.756531 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 13 05:47:08.756536 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 05:47:08.756540 kernel: Using GB pages for direct mapping Oct 13 05:47:08.756544 kernel: ACPI: Early table checksum verification disabled Oct 13 05:47:08.756549 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Oct 13 05:47:08.756555 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756559 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756563 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756568 kernel: ACPI: FACS 0x000000007CFE0000 000040 Oct 13 05:47:08.756572 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756576 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756580 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756584 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:47:08.756589 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Oct 13 05:47:08.756596 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Oct 13 05:47:08.756601 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Oct 13 05:47:08.756605 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Oct 13 05:47:08.756609 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Oct 13 05:47:08.756614 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Oct 13 05:47:08.756619 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Oct 13 05:47:08.756624 kernel: No NUMA configuration found Oct 13 05:47:08.756628 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Oct 13 05:47:08.756633 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff] Oct 13 05:47:08.756637 kernel: Zone ranges: Oct 13 05:47:08.756642 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 05:47:08.756646 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Oct 13 05:47:08.756650 kernel: Normal empty Oct 13 05:47:08.756655 kernel: Device empty Oct 13 05:47:08.756659 kernel: Movable zone start for each node Oct 13 05:47:08.756665 kernel: Early memory node ranges Oct 13 05:47:08.756669 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 13 05:47:08.756673 kernel: node 0: [mem 
0x0000000000100000-0x000000007cfdbfff] Oct 13 05:47:08.756678 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Oct 13 05:47:08.756682 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 05:47:08.756687 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 13 05:47:08.756691 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 13 05:47:08.756695 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 13 05:47:08.756700 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 05:47:08.756705 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 05:47:08.756709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 13 05:47:08.756714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 05:47:08.756718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 05:47:08.756723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 05:47:08.756727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 05:47:08.756732 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 05:47:08.756736 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 13 05:47:08.756741 kernel: CPU topo: Max. logical packages: 1 Oct 13 05:47:08.756746 kernel: CPU topo: Max. logical dies: 1 Oct 13 05:47:08.756750 kernel: CPU topo: Max. dies per package: 1 Oct 13 05:47:08.756755 kernel: CPU topo: Max. threads per core: 1 Oct 13 05:47:08.756759 kernel: CPU topo: Num. cores per package: 2 Oct 13 05:47:08.756763 kernel: CPU topo: Num. threads per package: 2 Oct 13 05:47:08.756768 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 13 05:47:08.756772 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 05:47:08.756776 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 13 05:47:08.756781 kernel: Booting paravirtualized kernel on KVM Oct 13 05:47:08.756785 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 05:47:08.756791 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 13 05:47:08.756795 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 13 05:47:08.756800 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 13 05:47:08.756804 kernel: pcpu-alloc: [0] 0 1 Oct 13 05:47:08.756808 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 13 05:47:08.756813 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039 Oct 13 05:47:08.756818 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 05:47:08.756823 kernel: random: crng init done Oct 13 05:47:08.756828 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 05:47:08.756832 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 13 05:47:08.756838 kernel: Fallback order for Node 0: 0 Oct 13 05:47:08.756846 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 511866 Oct 13 05:47:08.756853 kernel: Policy zone: DMA32 Oct 13 05:47:08.756860 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:47:08.756867 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 13 05:47:08.756874 kernel: ftrace: allocating 40139 entries in 157 pages Oct 13 05:47:08.756881 kernel: ftrace: allocated 157 pages with 5 groups Oct 13 05:47:08.756890 kernel: Dynamic Preempt: voluntary Oct 13 05:47:08.756897 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:47:08.756905 kernel: rcu: RCU event tracing is enabled. Oct 13 05:47:08.756913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 13 05:47:08.756920 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:47:08.756928 kernel: Rude variant of Tasks RCU enabled. Oct 13 05:47:08.756935 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:47:08.756941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:47:08.756949 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 13 05:47:08.756958 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:47:08.756965 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:47:08.756973 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 13 05:47:08.756980 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 13 05:47:08.756987 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:47:08.756994 kernel: Console: colour VGA+ 80x25 Oct 13 05:47:08.757001 kernel: printk: legacy console [tty0] enabled Oct 13 05:47:08.757006 kernel: printk: legacy console [ttyS0] enabled Oct 13 05:47:08.757011 kernel: ACPI: Core revision 20240827 Oct 13 05:47:08.757021 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 13 05:47:08.757025 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 05:47:08.757030 kernel: x2apic enabled Oct 13 05:47:08.757036 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 05:47:08.757040 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 13 05:47:08.757045 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns Oct 13 05:47:08.757050 kernel: Calibrating delay loop (skipped) preset value.. 
4800.00 BogoMIPS (lpj=2400000) Oct 13 05:47:08.757055 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 05:47:08.757059 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 13 05:47:08.757065 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 13 05:47:08.758952 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 05:47:08.758962 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 05:47:08.758970 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 05:47:08.758977 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 13 05:47:08.758985 kernel: active return thunk: retbleed_return_thunk Oct 13 05:47:08.758993 kernel: RETBleed: Mitigation: untrained return thunk Oct 13 05:47:08.759000 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 13 05:47:08.759012 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 13 05:47:08.759020 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 05:47:08.759028 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 05:47:08.759036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 05:47:08.759044 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 05:47:08.759052 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 13 05:47:08.759059 kernel: Freeing SMP alternatives memory: 32K Oct 13 05:47:08.759095 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:47:08.759102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:47:08.759107 kernel: landlock: Up and running. Oct 13 05:47:08.759112 kernel: SELinux: Initializing. Oct 13 05:47:08.759117 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 13 05:47:08.759121 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 13 05:47:08.759126 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 13 05:47:08.759131 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 13 05:47:08.759136 kernel: ... version: 0 Oct 13 05:47:08.759140 kernel: ... bit width: 48 Oct 13 05:47:08.759146 kernel: ... generic registers: 6 Oct 13 05:47:08.759151 kernel: ... value mask: 0000ffffffffffff Oct 13 05:47:08.759156 kernel: ... max period: 00007fffffffffff Oct 13 05:47:08.759160 kernel: ... fixed-purpose events: 0 Oct 13 05:47:08.759165 kernel: ... event mask: 000000000000003f Oct 13 05:47:08.759169 kernel: signal: max sigframe size: 1776 Oct 13 05:47:08.759174 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:47:08.759179 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:47:08.759184 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:47:08.759189 kernel: smp: Bringing up secondary CPUs ... Oct 13 05:47:08.759195 kernel: smpboot: x86: Booting SMP configuration: Oct 13 05:47:08.759200 kernel: .... 
node #0, CPUs: #1 Oct 13 05:47:08.759204 kernel: smp: Brought up 1 node, 2 CPUs Oct 13 05:47:08.759209 kernel: smpboot: Total of 2 processors activated (9600.00 BogoMIPS) Oct 13 05:47:08.759215 kernel: Memory: 1917780K/2047464K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 125140K reserved, 0K cma-reserved) Oct 13 05:47:08.759219 kernel: devtmpfs: initialized Oct 13 05:47:08.759224 kernel: x86/mm: Memory block size: 128MB Oct 13 05:47:08.759229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:47:08.759233 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 13 05:47:08.759239 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:47:08.759244 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:47:08.759249 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:47:08.759253 kernel: audit: type=2000 audit(1760334426.985:1): state=initialized audit_enabled=0 res=1 Oct 13 05:47:08.759258 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:47:08.759263 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 05:47:08.759268 kernel: cpuidle: using governor menu Oct 13 05:47:08.759272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:47:08.759277 kernel: dca service started, version 1.12.1 Oct 13 05:47:08.759293 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 13 05:47:08.759298 kernel: PCI: Using configuration type 1 for base access Oct 13 05:47:08.759302 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 13 05:47:08.759307 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:47:08.759312 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:47:08.759317 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:47:08.759321 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:47:08.759326 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:47:08.759331 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:47:08.759337 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:47:08.759341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 05:47:08.759346 kernel: ACPI: Interpreter enabled Oct 13 05:47:08.759351 kernel: ACPI: PM: (supports S0 S5) Oct 13 05:47:08.759355 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 05:47:08.759360 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 05:47:08.759365 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 05:47:08.759369 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 13 05:47:08.759374 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 05:47:08.759475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 05:47:08.759523 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 13 05:47:08.759564 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 13 05:47:08.759570 kernel: PCI host bridge to bus 0000:00 Oct 13 05:47:08.759618 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 05:47:08.759655 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 05:47:08.759705 kernel: pci_bus 0000:00: root bus 
resource [mem 0x000a0000-0x000bffff window] Oct 13 05:47:08.759741 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Oct 13 05:47:08.759777 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 13 05:47:08.759812 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 13 05:47:08.760152 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 05:47:08.760383 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 13 05:47:08.760439 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Oct 13 05:47:08.760487 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref] Oct 13 05:47:08.760530 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref] Oct 13 05:47:08.760574 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff] Oct 13 05:47:08.760646 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref] Oct 13 05:47:08.760694 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 05:47:08.760763 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.760810 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff] Oct 13 05:47:08.760852 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 13 05:47:08.760893 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 13 05:47:08.760938 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 13 05:47:08.761004 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.761046 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff] Oct 13 05:47:08.761578 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 13 05:47:08.761634 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 13 05:47:08.761692 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 13 05:47:08.761761 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.762885 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff] Oct 13 05:47:08.762950 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 13 05:47:08.762995 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 13 05:47:08.763037 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 13 05:47:08.763121 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.763185 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff] Oct 13 05:47:08.763232 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 13 05:47:08.763337 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 13 05:47:08.763379 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 13 05:47:08.763425 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.763466 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff] Oct 13 05:47:08.763510 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 13 05:47:08.763551 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Oct 13 05:47:08.763591 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 13 05:47:08.763637 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.763679 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff] Oct 13 05:47:08.763719 
kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 13 05:47:08.763759 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 13 05:47:08.763800 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 13 05:47:08.763847 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.763887 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff] Oct 13 05:47:08.763928 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 13 05:47:08.763968 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 13 05:47:08.764009 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 13 05:47:08.764055 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.764972 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff] Oct 13 05:47:08.765024 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 13 05:47:08.765082 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Oct 13 05:47:08.765126 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 13 05:47:08.765173 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 05:47:08.765215 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff] Oct 13 05:47:08.765256 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 13 05:47:08.765312 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 13 05:47:08.765353 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 13 05:47:08.765436 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 13 05:47:08.765513 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 13 05:47:08.765598 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 13 05:47:08.765643 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f] Oct 13 05:47:08.765685 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff] Oct 13 05:47:08.765734 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 13 05:47:08.765775 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 13 05:47:08.765834 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Oct 13 05:47:08.765914 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff] Oct 13 05:47:08.765995 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Oct 13 05:47:08.766043 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref] Oct 13 05:47:08.769879 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 13 05:47:08.769983 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Oct 13 05:47:08.770098 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit] Oct 13 05:47:08.770184 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 13 05:47:08.770278 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Oct 13 05:47:08.770371 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff] Oct 13 05:47:08.770443 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref] Oct 13 05:47:08.770521 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 13 05:47:08.770614 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Oct 13 05:47:08.770699 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Oct 13 05:47:08.770771 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] 
Oct 13 05:47:08.770864 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Oct 13 05:47:08.770943 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff] Oct 13 05:47:08.771033 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref] Oct 13 05:47:08.771129 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 13 05:47:08.771224 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Oct 13 05:47:08.771314 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff] Oct 13 05:47:08.771386 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref] Oct 13 05:47:08.771461 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 13 05:47:08.771473 kernel: acpiphp: Slot [0] registered Oct 13 05:47:08.771563 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Oct 13 05:47:08.771646 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff] Oct 13 05:47:08.771717 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref] Oct 13 05:47:08.771787 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref] Oct 13 05:47:08.771859 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 13 05:47:08.771871 kernel: acpiphp: Slot [0-2] registered Oct 13 05:47:08.771939 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 13 05:47:08.771950 kernel: acpiphp: Slot [0-3] registered Oct 13 05:47:08.772024 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 13 05:47:08.772036 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 05:47:08.772045 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 05:47:08.772052 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:47:08.772060 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 05:47:08.772166 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 13 05:47:08.772180 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 13 05:47:08.772189 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 13 05:47:08.772200 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 13 05:47:08.772208 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 13 05:47:08.772215 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 13 05:47:08.772223 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 13 05:47:08.772229 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 13 05:47:08.772237 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 13 05:47:08.772245 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 13 05:47:08.772253 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 13 05:47:08.772260 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 13 05:47:08.772270 kernel: iommu: Default domain type: Translated Oct 13 05:47:08.772278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:47:08.772298 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:47:08.772306 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:47:08.772314 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 13 05:47:08.772321 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Oct 13 05:47:08.772418 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 13 05:47:08.772502 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 13 05:47:08.772578 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 
05:47:08.772593 kernel: vgaarb: loaded Oct 13 05:47:08.772601 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 13 05:47:08.772609 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 13 05:47:08.772617 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 05:47:08.772625 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:47:08.772634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:47:08.772641 kernel: pnp: PnP ACPI init Oct 13 05:47:08.772733 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 13 05:47:08.772749 kernel: pnp: PnP ACPI: found 5 devices Oct 13 05:47:08.772757 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:47:08.772765 kernel: NET: Registered PF_INET protocol family Oct 13 05:47:08.772773 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:47:08.772780 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 13 05:47:08.772787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:47:08.772794 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 05:47:08.772802 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 13 05:47:08.772809 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 13 05:47:08.772819 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:47:08.772827 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:47:08.772835 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:47:08.772842 kernel: NET: Registered PF_XDP protocol family Oct 13 05:47:08.772917 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 13 05:47:08.772994 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 13 05:47:08.774149 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 13 05:47:08.774245 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned Oct 13 05:47:08.774348 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned Oct 13 05:47:08.774447 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Oct 13 05:47:08.774529 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 13 05:47:08.774606 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 13 05:47:08.774682 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 13 05:47:08.774763 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 13 05:47:08.778137 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 13 05:47:08.778202 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 13 05:47:08.778250 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 13 05:47:08.778306 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 13 05:47:08.778355 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 13 05:47:08.778399 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 13 05:47:08.778441 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 13 05:47:08.778483 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 13 05:47:08.778529 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 13 05:47:08.778593 kernel: pci 0000:00:02.4: 
bridge window [mem 0xfe000000-0xfe1fffff] Oct 13 05:47:08.778645 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 13 05:47:08.778689 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 13 05:47:08.778731 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 13 05:47:08.778773 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 13 05:47:08.778816 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 13 05:47:08.778858 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Oct 13 05:47:08.778899 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 13 05:47:08.778941 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 13 05:47:08.778983 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 13 05:47:08.779040 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Oct 13 05:47:08.779102 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Oct 13 05:47:08.779144 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 13 05:47:08.779185 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 13 05:47:08.779227 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Oct 13 05:47:08.779272 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 13 05:47:08.779358 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 13 05:47:08.779433 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 05:47:08.779492 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 05:47:08.779531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 05:47:08.779568 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Oct 13 05:47:08.779606 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 13 05:47:08.779648 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 13 05:47:08.779694 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 13 05:47:08.779736 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 13 05:47:08.779780 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 13 05:47:08.779820 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 13 05:47:08.779863 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 13 05:47:08.779902 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 13 05:47:08.779947 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 13 05:47:08.779987 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 13 05:47:08.780041 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 13 05:47:08.780932 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 13 05:47:08.780990 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Oct 13 05:47:08.781049 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 13 05:47:08.781197 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Oct 13 05:47:08.781243 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 13 05:47:08.781282 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 13 05:47:08.781340 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Oct 13 05:47:08.781379 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Oct 13 
05:47:08.781416 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 13 05:47:08.781460 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Oct 13 05:47:08.781500 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 13 05:47:08.781538 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 13 05:47:08.781546 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 13 05:47:08.781552 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:47:08.781557 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x22983777dd9, max_idle_ns: 440795300422 ns Oct 13 05:47:08.781562 kernel: Initialise system trusted keyrings Oct 13 05:47:08.781568 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 13 05:47:08.781573 kernel: Key type asymmetric registered Oct 13 05:47:08.781578 kernel: Asymmetric key parser 'x509' registered Oct 13 05:47:08.781584 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:47:08.781589 kernel: io scheduler mq-deadline registered Oct 13 05:47:08.781594 kernel: io scheduler kyber registered Oct 13 05:47:08.781599 kernel: io scheduler bfq registered Oct 13 05:47:08.781652 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 13 05:47:08.781697 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 13 05:47:08.781740 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 13 05:47:08.781783 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 13 05:47:08.781825 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 13 05:47:08.781868 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 13 05:47:08.781910 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 13 05:47:08.781950 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 13 05:47:08.781991 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 13 05:47:08.782043 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 13 05:47:08.782123 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 13 05:47:08.782167 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 13 05:47:08.782208 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 13 05:47:08.782250 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 13 05:47:08.782303 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 13 05:47:08.782345 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 13 05:47:08.782352 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 13 05:47:08.782393 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Oct 13 05:47:08.782433 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Oct 13 05:47:08.782441 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 05:47:08.782447 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Oct 13 05:47:08.782452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:47:08.782457 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:47:08.782462 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 05:47:08.782468 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 05:47:08.782473 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 05:47:08.782519 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 13 05:47:08.782528 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 13 05:47:08.782565 kernel: rtc_cmos 00:03: registered as rtc0 Oct 13 05:47:08.782602 
kernel: rtc_cmos 00:03: setting system clock to 2025-10-13T05:47:08 UTC (1760334428) Oct 13 05:47:08.782639 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 13 05:47:08.782645 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 13 05:47:08.782651 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:47:08.782656 kernel: Segment Routing with IPv6 Oct 13 05:47:08.782661 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:47:08.782667 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:47:08.782672 kernel: Key type dns_resolver registered Oct 13 05:47:08.782677 kernel: IPI shorthand broadcast: enabled Oct 13 05:47:08.782683 kernel: sched_clock: Marking stable (2419100225, 145414508)->(2572741988, -8227255) Oct 13 05:47:08.782688 kernel: registered taskstats version 1 Oct 13 05:47:08.782694 kernel: Loading compiled-in X.509 certificates Oct 13 05:47:08.782699 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd' Oct 13 05:47:08.782704 kernel: Demotion targets for Node 0: null Oct 13 05:47:08.782709 kernel: Key type .fscrypt registered Oct 13 05:47:08.782715 kernel: Key type fscrypt-provisioning registered Oct 13 05:47:08.782720 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 05:47:08.782725 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:47:08.782730 kernel: ima: No architecture policies found Oct 13 05:47:08.782735 kernel: clk: Disabling unused clocks Oct 13 05:47:08.782740 kernel: Warning: unable to open an initial console. Oct 13 05:47:08.782745 kernel: Freeing unused kernel image (initmem) memory: 54096K Oct 13 05:47:08.782750 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:47:08.782756 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K Oct 13 05:47:08.782761 kernel: Run /init as init process Oct 13 05:47:08.782766 kernel: with arguments: Oct 13 05:47:08.782771 kernel: /init Oct 13 05:47:08.782776 kernel: with environment: Oct 13 05:47:08.782781 kernel: HOME=/ Oct 13 05:47:08.782786 kernel: TERM=linux Oct 13 05:47:08.782791 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:47:08.782797 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:47:08.782806 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:47:08.782812 systemd[1]: Detected virtualization kvm. Oct 13 05:47:08.782818 systemd[1]: Detected architecture x86-64. Oct 13 05:47:08.782823 systemd[1]: Running in initrd. Oct 13 05:47:08.782828 systemd[1]: No hostname configured, using default hostname. Oct 13 05:47:08.782834 systemd[1]: Hostname set to <localhost>. Oct 13 05:47:08.782839 systemd[1]: Initializing machine ID from VM UUID. Oct 13 05:47:08.782844 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:47:08.782850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:47:08.782856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:47:08.782861 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 05:47:08.782867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:47:08.782872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:47:08.782878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:47:08.782884 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 13 05:47:08.782891 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 13 05:47:08.782896 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:47:08.782902 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:47:08.782907 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:47:08.782912 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:47:08.782917 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:47:08.782923 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:47:08.782928 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:47:08.782935 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:47:08.782940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:47:08.782945 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:47:08.782950 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:47:08.782956 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:47:08.782961 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:47:08.782967 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:47:08.782972 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:47:08.782977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:47:08.782984 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:47:08.782989 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:47:08.782995 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:47:08.783000 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:47:08.783006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:47:08.783011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:47:08.783018 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:47:08.783048 systemd-journald[216]: Collecting audit messages is disabled. Oct 13 05:47:08.783091 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:47:08.783097 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:47:08.783103 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:47:08.783108 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 13 05:47:08.783115 kernel: Bridge firewalling registered Oct 13 05:47:08.783120 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:47:08.783127 systemd-journald[216]: Journal started Oct 13 05:47:08.783142 systemd-journald[216]: Runtime Journal (/run/log/journal/dca0362ae341427c9ba526d9c6a289df) is 4.8M, max 38.6M, 33.7M free. Oct 13 05:47:08.748140 systemd-modules-load[217]: Inserted module 'overlay' Oct 13 05:47:08.816693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:47:08.774886 systemd-modules-load[217]: Inserted module 'br_netfilter' Oct 13 05:47:08.818801 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:47:08.821363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:47:08.823625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:47:08.827474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:47:08.834218 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:47:08.834912 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:47:08.837963 systemd-tmpfiles[235]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:47:08.838156 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:47:08.841111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:47:08.844655 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:47:08.850381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:47:08.852999 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:47:08.856920 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:47:08.866303 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039 Oct 13 05:47:08.878280 systemd-resolved[246]: Positive Trust Anchors: Oct 13 05:47:08.878304 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:47:08.878324 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:47:08.880843 systemd-resolved[246]: Defaulting to hostname 'linux'. Oct 13 05:47:08.881608 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 13 05:47:08.882760 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:47:08.928110 kernel: SCSI subsystem initialized Oct 13 05:47:08.934093 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:47:08.942095 kernel: iscsi: registered transport (tcp) Oct 13 05:47:08.956100 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:47:08.956159 kernel: QLogic iSCSI HBA Driver Oct 13 05:47:08.972832 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:47:08.992429 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:47:08.994669 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:47:09.031736 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:47:09.034027 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:47:09.084115 kernel: raid6: avx2x4 gen() 35022 MB/s Oct 13 05:47:09.101120 kernel: raid6: avx2x2 gen() 41160 MB/s Oct 13 05:47:09.118277 kernel: raid6: avx2x1 gen() 33247 MB/s Oct 13 05:47:09.118346 kernel: raid6: using algorithm avx2x2 gen() 41160 MB/s Oct 13 05:47:09.137133 kernel: raid6: .... xor() 33453 MB/s, rmw enabled Oct 13 05:47:09.137198 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:47:09.154121 kernel: xor: automatically using best checksumming function avx Oct 13 05:47:09.279128 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:47:09.286684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:47:09.289133 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:47:09.315015 systemd-udevd[463]: Using default interface naming scheme 'v255'. Oct 13 05:47:09.318607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:47:09.320278 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:47:09.335449 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Oct 13 05:47:09.355254 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:47:09.357201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:47:09.386382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:47:09.388738 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:47:09.429134 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Oct 13 05:47:09.435152 kernel: scsi host0: Virtio SCSI HBA Oct 13 05:47:09.437097 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 13 05:47:09.466109 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:47:09.471337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:47:09.502278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:47:09.510893 kernel: ACPI: bus type USB registered Oct 13 05:47:09.510913 kernel: usbcore: registered new interface driver usbfs Oct 13 05:47:09.510920 kernel: usbcore: registered new interface driver hub Oct 13 05:47:09.504787 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:47:09.512701 kernel: libata version 3.00 loaded. 
Oct 13 05:47:09.512719 kernel: AES CTR mode by8 optimization enabled Oct 13 05:47:09.514121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:47:09.519083 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:47:09.519103 kernel: usbcore: registered new device driver usb Oct 13 05:47:09.522618 kernel: sd 0:0:0:0: Power-on or device reset occurred Oct 13 05:47:09.525332 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 13 05:47:09.525429 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 13 05:47:09.525486 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Oct 13 05:47:09.533109 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 13 05:47:09.543194 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:47:09.543242 kernel: GPT:17805311 != 80003071 Oct 13 05:47:09.543255 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:47:09.543270 kernel: GPT:17805311 != 80003071 Oct 13 05:47:09.543280 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:47:09.543299 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:47:09.543309 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 13 05:47:09.565098 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:47:09.565321 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:47:09.570162 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:47:09.570326 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:47:09.570438 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:47:09.574113 kernel: scsi host1: ahci Oct 13 05:47:09.578908 kernel: scsi host2: ahci Oct 13 05:47:09.581090 kernel: scsi host3: ahci Oct 13 05:47:09.599109 kernel: scsi host4: ahci Oct 13 05:47:09.599128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:47:09.605397 kernel: scsi host5: ahci Oct 13 05:47:09.605531 kernel: scsi host6: ahci Oct 13 05:47:09.607151 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 lpm-pol 1 Oct 13 05:47:09.609138 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 lpm-pol 1 Oct 13 05:47:09.612540 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 lpm-pol 1 Oct 13 05:47:09.612562 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 lpm-pol 1 Oct 13 05:47:09.615994 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 lpm-pol 1 Oct 13 05:47:09.616013 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 lpm-pol 1 Oct 13 05:47:09.619396 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 13 05:47:09.627080 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 13 05:47:09.634086 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 13 05:47:09.639124 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 13 05:47:09.646293 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Oct 13 05:47:09.656845 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 13 05:47:09.656961 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 13 05:47:09.657019 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 13 05:47:09.659103 kernel: hub 1-0:1.0: USB hub found Oct 13 05:47:09.659198 kernel: hub 1-0:1.0: 4 ports detected Oct 13 05:47:09.659252 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 13 05:47:09.659327 kernel: hub 2-0:1.0: USB hub found Oct 13 05:47:09.659386 kernel: hub 2-0:1.0: 4 ports detected Oct 13 05:47:09.662945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 13 05:47:09.668357 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 13 05:47:09.668834 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 13 05:47:09.671112 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:47:09.690817 disk-uuid[628]: Primary Header is updated. Oct 13 05:47:09.690817 disk-uuid[628]: Secondary Entries is updated. Oct 13 05:47:09.690817 disk-uuid[628]: Secondary Header is updated. Oct 13 05:47:09.701081 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:47:09.894142 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 13 05:47:09.928085 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:47:09.928159 kernel: ata1.00: LPM support broken, forcing max_power Oct 13 05:47:09.928168 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:47:09.930471 kernel: ata1.00: applying bridge limits Oct 13 05:47:09.933143 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 13 05:47:09.941545 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:47:09.941591 kernel: ata1.00: LPM support broken, forcing max_power Oct 13 05:47:09.941605 kernel: ata1.00: configured for UDMA/100 Oct 13 05:47:09.942086 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:47:09.945131 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:47:09.947111 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:47:09.950828 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:47:10.001505 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:47:10.001764 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:47:10.013282 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:47:10.040108 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 05:47:10.046430 kernel: usbcore: registered new interface driver usbhid Oct 13 05:47:10.046480 kernel: usbhid: USB HID core driver Oct 13 05:47:10.055853 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Oct 13 05:47:10.055898 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 13 05:47:10.326317 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:47:10.327859 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:47:10.329230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:47:10.330032 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Oct 13 05:47:10.333953 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:47:10.360677 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:47:10.720984 disk-uuid[629]: The operation has completed successfully. Oct 13 05:47:10.721666 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 13 05:47:10.760519 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:47:10.760581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:47:10.781761 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 05:47:10.790262 sh[663]: Success Oct 13 05:47:10.802628 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:47:10.802673 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:47:10.802686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:47:10.811096 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Oct 13 05:47:10.845606 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:47:10.847976 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 05:47:10.853900 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 13 05:47:10.866091 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (675) Oct 13 05:47:10.866119 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92 Oct 13 05:47:10.868806 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:47:10.878137 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 13 05:47:10.878161 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:47:10.879219 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:47:10.881577 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 05:47:10.882202 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:47:10.882637 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:47:10.884863 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:47:10.891403 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:47:10.907318 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (702) Oct 13 05:47:10.907348 kernel: BTRFS info (device sda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:47:10.909051 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:47:10.915566 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 13 05:47:10.915587 kernel: BTRFS info (device sda6): turning on async discard Oct 13 05:47:10.915595 kernel: BTRFS info (device sda6): enabling free space tree Oct 13 05:47:10.920090 kernel: BTRFS info (device sda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:47:10.920430 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:47:10.921630 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
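Annotation (not part of the log): the verity-setup step and the read-only /usr mount above are driven by parameters on the kernel command line rather than by anything on disk. A minimal sketch of splitting /proc/cmdline into key/value arguments; the key names queried at the end (mount.usr, verity.usrhash, and so on) are the ones Flatcar's initrd conventionally uses and are listed here as illustrative assumptions.

def parse_cmdline(path="/proc/cmdline"):
    # Split the kernel command line into key=value arguments and bare flags.
    args, flags = {}, []
    with open(path) as f:
        for token in f.read().split():
            if "=" in token:
                key, _, value = token.partition("=")
                args[key] = value
            else:
                flags.append(token)
    return args, flags

if __name__ == "__main__":
    args, flags = parse_cmdline()
    # Illustrative keys; which of them are present depends on how the image was built.
    for key in ("mount.usr", "mount.usrflags", "verity.usrhash", "root"):
        print(key, "=", args.get(key))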
Oct 13 05:47:10.950374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:47:10.954486 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:47:10.988566 systemd-networkd[844]: lo: Link UP Oct 13 05:47:10.988572 systemd-networkd[844]: lo: Gained carrier Oct 13 05:47:10.992377 systemd-networkd[844]: Enumeration completed Oct 13 05:47:10.992456 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:47:10.992929 systemd[1]: Reached target network.target - Network. Oct 13 05:47:10.993494 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:10.993497 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:47:10.996100 systemd-networkd[844]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:10.996102 systemd-networkd[844]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:47:10.996279 systemd-networkd[844]: eth0: Link UP Oct 13 05:47:10.996366 systemd-networkd[844]: eth1: Link UP Oct 13 05:47:10.996435 systemd-networkd[844]: eth0: Gained carrier Oct 13 05:47:10.996440 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:11.002914 systemd-networkd[844]: eth1: Gained carrier Oct 13 05:47:11.002922 systemd-networkd[844]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:11.014766 ignition[779]: Ignition 2.22.0 Oct 13 05:47:11.014775 ignition[779]: Stage: fetch-offline Oct 13 05:47:11.015731 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:47:11.014793 ignition[779]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:11.017223 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 13 05:47:11.014798 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:11.014846 ignition[779]: parsed url from cmdline: "" Oct 13 05:47:11.014848 ignition[779]: no config URL provided Oct 13 05:47:11.014851 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:47:11.014854 ignition[779]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:47:11.014857 ignition[779]: failed to fetch config: resource requires networking Oct 13 05:47:11.014943 ignition[779]: Ignition finished successfully Oct 13 05:47:11.029122 systemd-networkd[844]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Oct 13 05:47:11.035440 ignition[854]: Ignition 2.22.0 Oct 13 05:47:11.035449 ignition[854]: Stage: fetch Oct 13 05:47:11.035526 ignition[854]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:11.035530 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:11.035575 ignition[854]: parsed url from cmdline: "" Oct 13 05:47:11.035576 ignition[854]: no config URL provided Oct 13 05:47:11.035579 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:47:11.035582 ignition[854]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:47:11.035753 ignition[854]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 13 05:47:11.035876 ignition[854]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 13 05:47:11.062120 systemd-networkd[844]: eth0: DHCPv4 address 65.108.221.100/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 13 05:47:11.236646 ignition[854]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 13 05:47:11.243245 ignition[854]: GET result: OK Oct 13 05:47:11.243342 ignition[854]: parsing config with SHA512: 49c8348dce8f3b64d152d4374251c2f6f59d57f12ca89c4300d85e3521d1d6ed669404b3e938c506cdb5fab2f6f680330fcc5b3ea63fc3bf245126e39ba7d251 Oct 13 05:47:11.247904 unknown[854]: fetched base config from "system" Oct 13 05:47:11.247927 unknown[854]: fetched base config from "system" Oct 13 05:47:11.248592 ignition[854]: fetch: fetch complete Oct 13 05:47:11.247937 unknown[854]: fetched user config from "hetzner" Oct 13 05:47:11.248602 ignition[854]: fetch: fetch passed Oct 13 05:47:11.251374 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 13 05:47:11.248672 ignition[854]: Ignition finished successfully Oct 13 05:47:11.255221 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:47:11.281362 ignition[863]: Ignition 2.22.0 Oct 13 05:47:11.281374 ignition[863]: Stage: kargs Oct 13 05:47:11.281503 ignition[863]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:11.281512 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:11.283519 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:47:11.282228 ignition[863]: kargs: kargs passed Oct 13 05:47:11.282278 ignition[863]: Ignition finished successfully Oct 13 05:47:11.286171 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
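Annotation (not part of the log): the fetch stage above fails once because the link-local route to the metadata service does not exist until DHCP completes, then succeeds on the second attempt and logs the SHA-512 of the payload it parsed. A minimal standard-library sketch of the same fetch-and-hash loop; the URL is the one shown in the log, while the retry count and delay are arbitrary assumptions, not Ignition's actual backoff policy.

import hashlib, time, urllib.error, urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint seen in the log

def fetch_userdata(retries=5, delay=2.0):
    # Retry until the link-local metadata service becomes reachable.
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                return resp.read()
        except urllib.error.URLError as err:
            print(f"attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")

if __name__ == "__main__":
    data = fetch_userdata()
    print("userdata SHA512:", hashlib.sha512(data).hexdigest())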
Oct 13 05:47:11.302207 ignition[869]: Ignition 2.22.0 Oct 13 05:47:11.302220 ignition[869]: Stage: disks Oct 13 05:47:11.302344 ignition[869]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:11.302351 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:11.302817 ignition[869]: disks: disks passed Oct 13 05:47:11.303938 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:47:11.302849 ignition[869]: Ignition finished successfully Oct 13 05:47:11.304954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:47:11.305548 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:47:11.306477 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:47:11.307268 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:47:11.308320 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:47:11.310057 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:47:11.327208 systemd-fsck[878]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Oct 13 05:47:11.331986 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:47:11.333972 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:47:11.455100 kernel: EXT4-fs (sda9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none. Oct 13 05:47:11.455396 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:47:11.456349 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:47:11.459004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:47:11.462143 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:47:11.465214 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 13 05:47:11.467589 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:47:11.467622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:47:11.469942 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:47:11.473112 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:47:11.485458 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (886) Oct 13 05:47:11.489632 kernel: BTRFS info (device sda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:47:11.489687 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:47:11.500790 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 13 05:47:11.500865 kernel: BTRFS info (device sda6): turning on async discard Oct 13 05:47:11.500877 kernel: BTRFS info (device sda6): enabling free space tree Oct 13 05:47:11.507196 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:47:11.516085 coreos-metadata[888]: Oct 13 05:47:11.515 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 13 05:47:11.516085 coreos-metadata[888]: Oct 13 05:47:11.515 INFO Fetch successful Oct 13 05:47:11.518120 coreos-metadata[888]: Oct 13 05:47:11.516 INFO wrote hostname ci-4459-1-0-c-7af444862e to /sysroot/etc/hostname Oct 13 05:47:11.519132 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 13 05:47:11.521259 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:47:11.524099 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:47:11.526862 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:47:11.529400 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:47:11.593445 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:47:11.595297 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:47:11.598173 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:47:11.614129 kernel: BTRFS info (device sda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:47:11.629260 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:47:11.635326 ignition[1003]: INFO : Ignition 2.22.0 Oct 13 05:47:11.635326 ignition[1003]: INFO : Stage: mount Oct 13 05:47:11.637540 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:11.637540 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:11.637540 ignition[1003]: INFO : mount: mount passed Oct 13 05:47:11.637540 ignition[1003]: INFO : Ignition finished successfully Oct 13 05:47:11.637308 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:47:11.638530 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:47:11.864824 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:47:11.866123 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:47:11.889097 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1014) Oct 13 05:47:11.891413 kernel: BTRFS info (device sda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:47:11.891443 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:47:11.896790 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 13 05:47:11.896819 kernel: BTRFS info (device sda6): turning on async discard Oct 13 05:47:11.896830 kernel: BTRFS info (device sda6): enabling free space tree Oct 13 05:47:11.899309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:47:11.924908 ignition[1031]: INFO : Ignition 2.22.0 Oct 13 05:47:11.924908 ignition[1031]: INFO : Stage: files Oct 13 05:47:11.926220 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:11.926220 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:11.926220 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:47:11.926220 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:47:11.926220 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:47:11.929865 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:47:11.929865 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:47:11.929865 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:47:11.929865 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 13 05:47:11.929865 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 13 05:47:11.928086 unknown[1031]: wrote ssh authorized keys file for user: core Oct 13 05:47:12.179894 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:47:12.419264 systemd-networkd[844]: eth1: Gained IPv6LL Oct 13 05:47:12.492954 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:47:12.493935 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:47:12.509157 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:47:12.509157 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:47:12.509157 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:47:12.509157 
ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:47:12.509157 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:47:12.509157 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 13 05:47:12.846622 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:47:12.931632 systemd-networkd[844]: eth0: Gained IPv6LL Oct 13 05:47:13.102344 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:47:13.102344 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:47:13.105866 ignition[1031]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:47:13.118364 ignition[1031]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:47:13.118364 ignition[1031]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:47:13.118364 ignition[1031]: INFO : files: files passed Oct 13 05:47:13.118364 ignition[1031]: INFO : Ignition finished successfully Oct 13 05:47:13.110034 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:47:13.115210 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:47:13.119211 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:47:13.124823 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:47:13.124966 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
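Annotation (not part of the log): the files stage above is driven by the Ignition config fetched as userdata; each op corresponds to an entry in that config. A sketch of a config with the same general shape (one remote file, one symlink, one enabled unit), assembled as JSON in the Ignition v3 field layout as I understand it. The paths and URL mirror what the log reports, but the unit contents are elided and this is not the actual config used on this host.

import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            # Unit body elided; a real config carries the full [Unit]/[Service] text.
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
        ]
    },
}

print(json.dumps(config, indent=2))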
Oct 13 05:47:13.130724 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:47:13.130724 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:47:13.133021 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:47:13.134388 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:47:13.135726 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:47:13.137789 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:47:13.181799 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:47:13.181900 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:47:13.182916 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:47:13.183697 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:47:13.184795 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:47:13.185424 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:47:13.205319 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:47:13.207324 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:47:13.229765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:47:13.230329 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:47:13.231518 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:47:13.232617 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:47:13.232710 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:47:13.234022 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:47:13.234742 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:47:13.235920 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:47:13.236962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:47:13.237948 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:47:13.239129 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:47:13.240333 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:47:13.241466 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:47:13.242689 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:47:13.243829 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:47:13.245002 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:47:13.246082 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:47:13.246181 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:47:13.247532 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:47:13.248252 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:47:13.249272 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Oct 13 05:47:13.249736 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:47:13.250430 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:47:13.250495 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:47:13.252215 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:47:13.252299 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:47:13.256213 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:47:13.256306 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:47:13.257185 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 13 05:47:13.257259 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 13 05:47:13.260135 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:47:13.260929 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:47:13.261022 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:47:13.265170 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:47:13.265783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:47:13.265884 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:47:13.267868 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:47:13.267929 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:47:13.271885 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:47:13.271937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:47:13.285952 ignition[1085]: INFO : Ignition 2.22.0 Oct 13 05:47:13.285952 ignition[1085]: INFO : Stage: umount Oct 13 05:47:13.288049 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:47:13.288049 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 13 05:47:13.288049 ignition[1085]: INFO : umount: umount passed Oct 13 05:47:13.288049 ignition[1085]: INFO : Ignition finished successfully Oct 13 05:47:13.288474 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:47:13.289429 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:47:13.289508 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:47:13.291688 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:47:13.291740 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:47:13.297954 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:47:13.298017 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:47:13.298899 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 13 05:47:13.298926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 13 05:47:13.300052 systemd[1]: Stopped target network.target - Network. Oct 13 05:47:13.301061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:47:13.301105 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:47:13.302173 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:47:13.302507 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Oct 13 05:47:13.308545 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:47:13.314941 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:47:13.315994 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:47:13.317170 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:47:13.317197 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:47:13.318095 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:47:13.318114 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:47:13.320083 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:47:13.320125 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:47:13.321192 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:47:13.321219 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:47:13.322317 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:47:13.322743 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:47:13.329662 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:47:13.329723 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:47:13.331570 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:47:13.331632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:47:13.332841 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:47:13.332899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:47:13.335786 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 13 05:47:13.335964 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:47:13.336046 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:47:13.337626 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 13 05:47:13.337974 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:47:13.338823 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:47:13.338844 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:47:13.341119 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:47:13.342206 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:47:13.342238 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:47:13.343496 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:47:13.343527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:47:13.345184 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:47:13.345210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:47:13.345848 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:47:13.345876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:47:13.347871 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:47:13.349628 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Oct 13 05:47:13.349666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:47:13.369671 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:47:13.370114 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:47:13.370858 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:47:13.370909 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:47:13.371757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:47:13.371790 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:47:13.372588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:47:13.372606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:47:13.373421 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:47:13.373450 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:47:13.374726 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:47:13.374752 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:47:13.375677 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:47:13.375704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:47:13.378163 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:47:13.378642 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:47:13.378674 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:47:13.380246 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:47:13.380273 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:47:13.384722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:47:13.384750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:47:13.389809 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 13 05:47:13.389841 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 13 05:47:13.389863 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:47:13.390068 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:47:13.390960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:47:13.391644 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:47:13.393693 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:47:13.405849 systemd[1]: Switching root. Oct 13 05:47:13.431811 systemd-journald[216]: Journal stopped Oct 13 05:47:14.178654 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). 
Oct 13 05:47:14.178690 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:47:14.178699 kernel: SELinux: policy capability open_perms=1 Oct 13 05:47:14.178705 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:47:14.178712 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:47:14.178720 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:47:14.178728 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:47:14.178734 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:47:14.178740 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:47:14.178746 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:47:14.178755 kernel: audit: type=1403 audit(1760334433.565:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:47:14.178762 systemd[1]: Successfully loaded SELinux policy in 49.971ms. Oct 13 05:47:14.178772 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.546ms. Oct 13 05:47:14.178779 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:47:14.178787 systemd[1]: Detected virtualization kvm. Oct 13 05:47:14.178794 systemd[1]: Detected architecture x86-64. Oct 13 05:47:14.178800 systemd[1]: Detected first boot. Oct 13 05:47:14.178807 systemd[1]: Hostname set to . Oct 13 05:47:14.178814 systemd[1]: Initializing machine ID from VM UUID. Oct 13 05:47:14.178820 zram_generator::config[1129]: No configuration found. Oct 13 05:47:14.178828 kernel: Guest personality initialized and is inactive Oct 13 05:47:14.178834 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:47:14.178842 kernel: Initialized host personality Oct 13 05:47:14.178849 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:47:14.178857 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:47:14.178865 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 13 05:47:14.178871 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:47:14.178878 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:47:14.178884 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:47:14.178891 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:47:14.178900 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:47:14.178907 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:47:14.178914 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:47:14.178920 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:47:14.178927 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:47:14.178934 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:47:14.178942 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:47:14.178948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
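Annotation (not part of the log): "Initializing machine ID from VM UUID" above refers to deriving /etc/machine-id from the SMBIOS product UUID the hypervisor exposes. A sketch that reads the same source and normalizes it to machine-id form; systemd's real derivation applies additional validity checks and only trusts the UUID on certain virtualization types, so treat this as an approximation of where the value comes from, not of the full logic.

def product_uuid_as_machine_id(path="/sys/class/dmi/id/product_uuid"):
    # The SMBIOS product UUID exposed by the hypervisor (reading it needs root).
    with open(path) as f:
        uuid = f.read().strip()
    # A machine ID is 32 lowercase hex digits; the UUID matches once dashes are dropped.
    candidate = uuid.replace("-", "").lower()
    if len(candidate) != 32 or any(c not in "0123456789abcdef" for c in candidate):
        raise ValueError(f"unexpected product_uuid format: {uuid!r}")
    return candidate

if __name__ == "__main__":
    print(product_uuid_as_machine_id())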
Oct 13 05:47:14.178955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:47:14.178962 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:47:14.178969 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:47:14.178976 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:47:14.178984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:47:14.178991 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:47:14.178998 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:47:14.179004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:47:14.179011 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:47:14.179018 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:47:14.179024 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:47:14.179031 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:47:14.179040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:47:14.179048 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:47:14.179055 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:47:14.179062 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:47:14.179095 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:47:14.179103 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:47:14.179110 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:47:14.179116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:47:14.179123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:47:14.179129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:47:14.179136 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:47:14.179144 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:47:14.179150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:47:14.179157 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:47:14.179164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:47:14.179170 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:47:14.179177 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:47:14.179184 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:47:14.179191 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:47:14.179198 systemd[1]: Reached target machines.target - Containers. Oct 13 05:47:14.179205 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Oct 13 05:47:14.179212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:47:14.179219 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:47:14.179225 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:47:14.179231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:47:14.179238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:47:14.179246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:47:14.179253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:47:14.179259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:47:14.179266 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:47:14.179273 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:47:14.179286 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:47:14.179293 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:47:14.179300 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:47:14.179307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:47:14.179315 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:47:14.179323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:47:14.179329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:47:14.179336 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:47:14.179343 kernel: ACPI: bus type drm_connector registered Oct 13 05:47:14.179350 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:47:14.179357 kernel: fuse: init (API version 7.41) Oct 13 05:47:14.179364 kernel: loop: module loaded Oct 13 05:47:14.179371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:47:14.179379 systemd[1]: verity-setup.service: Deactivated successfully. Oct 13 05:47:14.179387 systemd[1]: Stopped verity-setup.service. Oct 13 05:47:14.179393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:47:14.179400 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:47:14.179407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:47:14.179413 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:47:14.179420 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:47:14.179427 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:47:14.179446 systemd-journald[1212]: Collecting audit messages is disabled. 
Oct 13 05:47:14.179462 systemd-journald[1212]: Journal started Oct 13 05:47:14.179478 systemd-journald[1212]: Runtime Journal (/run/log/journal/dca0362ae341427c9ba526d9c6a289df) is 4.8M, max 38.6M, 33.7M free. Oct 13 05:47:13.939277 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:47:13.955191 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 13 05:47:13.955633 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:47:14.183734 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:47:14.183359 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:47:14.184012 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:47:14.184791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:47:14.185440 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:47:14.185601 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:47:14.186344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:47:14.186494 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:47:14.187105 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:47:14.187248 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:47:14.187828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:47:14.187959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:47:14.188570 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:47:14.188705 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:47:14.189455 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:47:14.189585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:47:14.190278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:47:14.190907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:47:14.191545 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:47:14.192344 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:47:14.199551 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:47:14.204124 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:47:14.205744 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:47:14.206188 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:47:14.206206 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:47:14.207431 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:47:14.212143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:47:14.213137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:47:14.213794 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:47:14.215826 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Oct 13 05:47:14.219655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:47:14.222264 systemd-journald[1212]: Time spent on flushing to /var/log/journal/dca0362ae341427c9ba526d9c6a289df is 23.727ms for 1163 entries. Oct 13 05:47:14.222264 systemd-journald[1212]: System Journal (/var/log/journal/dca0362ae341427c9ba526d9c6a289df) is 8M, max 584.8M, 576.8M free. Oct 13 05:47:14.253166 systemd-journald[1212]: Received client request to flush runtime journal. Oct 13 05:47:14.222297 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:47:14.224793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:47:14.225715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:47:14.227187 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:47:14.229140 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:47:14.230730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:47:14.231319 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:47:14.232219 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:47:14.257818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:47:14.269056 kernel: loop0: detected capacity change from 0 to 224512 Oct 13 05:47:14.265278 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:47:14.268321 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:47:14.272398 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:47:14.276717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:47:14.297088 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:47:14.299997 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:47:14.309872 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:47:14.316215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:47:14.321086 kernel: loop1: detected capacity change from 0 to 128016 Oct 13 05:47:14.338085 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Oct 13 05:47:14.338452 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Oct 13 05:47:14.343029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:47:14.358100 kernel: loop2: detected capacity change from 0 to 8 Oct 13 05:47:14.371245 kernel: loop3: detected capacity change from 0 to 110984 Oct 13 05:47:14.411106 kernel: loop4: detected capacity change from 0 to 224512 Oct 13 05:47:14.436167 kernel: loop5: detected capacity change from 0 to 128016 Oct 13 05:47:14.458324 kernel: loop6: detected capacity change from 0 to 8 Oct 13 05:47:14.461170 kernel: loop7: detected capacity change from 0 to 110984 Oct 13 05:47:14.479969 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 13 05:47:14.480774 (sd-merge)[1277]: Merged extensions into '/usr'. 
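Annotation (not part of the log): the loop-device capacity changes and the sd-merge lines above come from systemd-sysext attaching the listed extension images and merging them over /usr and /opt. A sketch that lists candidate images in the search directories I believe systemd-sysext scans by default (/etc/extensions, /run/extensions, /var/lib/extensions); those paths are stated from memory and the real precedence and merge behavior belong to systemd-sysext itself.

from pathlib import Path

# Assumed default search directories for extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    found = []
    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            # Extensions may be raw disk images (*.raw, possibly symlinks) or plain directories.
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(str(entry))
    return found

if __name__ == "__main__":
    for image in list_extension_images():
        print(image)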
Oct 13 05:47:14.488100 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:47:14.488404 systemd[1]: Reloading... Oct 13 05:47:14.568154 zram_generator::config[1303]: No configuration found. Oct 13 05:47:14.640092 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:47:14.729884 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:47:14.730233 systemd[1]: Reloading finished in 241 ms. Oct 13 05:47:14.752440 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:47:14.753407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:47:14.754118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:47:14.762050 systemd[1]: Starting ensure-sysext.service... Oct 13 05:47:14.765154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:47:14.767195 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:47:14.782240 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:47:14.782378 systemd[1]: Reloading... Oct 13 05:47:14.792212 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:47:14.792583 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:47:14.792841 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:47:14.793718 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:47:14.794640 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:47:14.796001 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Oct 13 05:47:14.797317 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Oct 13 05:47:14.803225 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:47:14.803237 systemd-tmpfiles[1348]: Skipping /boot Oct 13 05:47:14.807892 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:47:14.807916 systemd-tmpfiles[1348]: Skipping /boot Oct 13 05:47:14.817449 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Oct 13 05:47:14.838095 zram_generator::config[1379]: No configuration found. Oct 13 05:47:14.997089 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:47:15.009120 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Oct 13 05:47:15.025088 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:47:15.041502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:47:15.041666 systemd[1]: Reloading finished in 259 ms. Oct 13 05:47:15.047629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:47:15.054974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:47:15.085410 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. 
Oct 13 05:47:15.090477 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:47:15.090706 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:47:15.090267 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 13 05:47:15.097675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:47:15.098627 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:47:15.100493 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:47:15.101179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:47:15.102369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:47:15.103426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:47:15.106130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:47:15.106629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:47:15.111375 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:47:15.111937 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:47:15.112610 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:47:15.114619 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:47:15.121230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:47:15.128368 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:47:15.131129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:47:15.132244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:47:15.132375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:47:15.133182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:47:15.133278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:47:15.134379 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:47:15.135104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:47:15.145332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:47:15.147648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:47:15.147803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:47:15.154257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:47:15.158573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:47:15.160205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Oct 13 05:47:15.163404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:47:15.164373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:47:15.164408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:47:15.171664 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:47:15.172108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:47:15.174102 systemd[1]: Finished ensure-sysext.service. Oct 13 05:47:15.177670 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:47:15.178457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:47:15.182057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:47:15.192289 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:47:15.193098 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:47:15.194152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:47:15.195536 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:47:15.196029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:47:15.197316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:47:15.197454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:47:15.198444 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:47:15.198676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:47:15.204298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:47:15.204473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:47:15.207005 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:47:15.222431 augenrules[1525]: No rules Oct 13 05:47:15.224696 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:47:15.225756 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:47:15.226109 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:47:15.227085 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Oct 13 05:47:15.227838 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:47:15.228873 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:47:15.229105 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Oct 13 05:47:15.237262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 05:47:15.241100 kernel: Console: switching to colour dummy device 80x25 Oct 13 05:47:15.243295 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 13 05:47:15.243327 kernel: [drm] features: -context_init Oct 13 05:47:15.246113 kernel: [drm] number of scanouts: 1 Oct 13 05:47:15.246134 kernel: [drm] number of cap sets: 0 Oct 13 05:47:15.252105 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Oct 13 05:47:15.258690 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 13 05:47:15.258768 kernel: Console: switching to colour frame buffer device 160x50 Oct 13 05:47:15.258356 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:47:15.261089 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 13 05:47:15.293702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:47:15.293937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:47:15.298836 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:47:15.303740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:47:15.363421 systemd-resolved[1481]: Positive Trust Anchors: Oct 13 05:47:15.363733 systemd-resolved[1481]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:47:15.363787 systemd-resolved[1481]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:47:15.367813 systemd-resolved[1481]: Using system hostname 'ci-4459-1-0-c-7af444862e'. Oct 13 05:47:15.368273 systemd-networkd[1480]: lo: Link UP Oct 13 05:47:15.368292 systemd-networkd[1480]: lo: Gained carrier Oct 13 05:47:15.369445 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:47:15.369555 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:47:15.369985 systemd-networkd[1480]: Enumeration completed Oct 13 05:47:15.370052 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:47:15.370974 systemd[1]: Reached target network.target - Network. Oct 13 05:47:15.371533 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:15.371579 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:47:15.373242 systemd-networkd[1480]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:15.373305 systemd-networkd[1480]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:47:15.373962 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Oct 13 05:47:15.374270 systemd-networkd[1480]: eth0: Link UP Oct 13 05:47:15.374367 systemd-networkd[1480]: eth0: Gained carrier Oct 13 05:47:15.374377 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:15.377160 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:47:15.377425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:47:15.378726 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:47:15.378818 systemd-networkd[1480]: eth1: Link UP Oct 13 05:47:15.378836 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:47:15.378940 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:47:15.379003 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:47:15.379828 systemd-networkd[1480]: eth1: Gained carrier Oct 13 05:47:15.380109 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:47:15.380171 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:47:15.380221 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:47:15.380224 systemd-networkd[1480]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:47:15.380245 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:47:15.380290 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:47:15.380431 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:47:15.380528 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:47:15.380585 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:47:15.382045 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:47:15.383655 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:47:15.387264 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:47:15.393214 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:47:15.393633 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:47:15.395375 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:47:15.398428 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:47:15.401245 systemd-networkd[1480]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Oct 13 05:47:15.401873 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:47:15.403478 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:47:15.407125 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:47:15.407566 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:47:15.407590 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Oct 13 05:47:15.409383 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:47:15.410174 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Oct 13 05:47:15.411806 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 05:47:15.419218 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:47:15.421873 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:47:15.427945 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:47:15.431176 systemd-networkd[1480]: eth0: DHCPv4 address 65.108.221.100/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 13 05:47:15.432194 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:47:15.432966 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Oct 13 05:47:15.436712 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:47:15.438142 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:47:15.439407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:47:15.440334 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:47:15.445703 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Oct 13 05:47:15.454240 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:47:15.460458 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:47:15.463844 jq[1564]: false Oct 13 05:47:15.464737 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:47:15.465575 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:47:15.465851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:47:15.468531 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:47:15.469237 extend-filesystems[1565]: Found /dev/sda6 Oct 13 05:47:15.476218 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:47:15.479317 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:47:15.480731 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:47:15.481244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:47:15.481356 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Oct 13 05:47:15.483643 jq[1586]: true Oct 13 05:47:15.483837 coreos-metadata[1561]: Oct 13 05:47:15.483 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 13 05:47:15.487579 coreos-metadata[1561]: Oct 13 05:47:15.487 INFO Fetch successful Oct 13 05:47:15.487579 coreos-metadata[1561]: Oct 13 05:47:15.487 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 13 05:47:15.489125 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Oct 13 05:47:15.490446 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Oct 13 05:47:15.493304 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Oct 13 05:47:15.493304 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:47:15.493290 oslogin_cache_refresh[1566]: Failure getting users, quitting Oct 13 05:47:15.493379 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Oct 13 05:47:15.493307 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:47:15.493348 oslogin_cache_refresh[1566]: Refreshing group entry cache Oct 13 05:47:15.493957 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Oct 13 05:47:15.493957 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:47:15.493948 oslogin_cache_refresh[1566]: Failure getting groups, quitting Oct 13 05:47:15.493954 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:47:15.495423 extend-filesystems[1565]: Found /dev/sda9 Oct 13 05:47:15.503885 coreos-metadata[1561]: Oct 13 05:47:15.499 INFO Fetch successful Oct 13 05:47:15.497288 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:47:15.497410 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:47:15.504447 extend-filesystems[1565]: Checking size of /dev/sda9 Oct 13 05:47:15.505015 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:47:15.505179 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:47:15.516262 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:47:15.518648 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:47:15.529895 tar[1589]: linux-amd64/LICENSE Oct 13 05:47:15.529895 tar[1589]: linux-amd64/helm Oct 13 05:47:15.530420 extend-filesystems[1565]: Resized partition /dev/sda9 Oct 13 05:47:15.540462 update_engine[1581]: I20251013 05:47:15.527463 1581 main.cc:92] Flatcar Update Engine starting Oct 13 05:47:15.540608 jq[1592]: true Oct 13 05:47:15.540650 extend-filesystems[1615]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:47:15.553260 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Oct 13 05:47:15.543389 (ntainerd)[1605]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:47:15.554517 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 13 05:47:15.554423 dbus-daemon[1562]: [system] SELinux support is enabled Oct 13 05:47:15.557480 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:47:15.557501 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:47:15.559549 update_engine[1581]: I20251013 05:47:15.559440 1581 update_check_scheduler.cc:74] Next update check in 11m22s Oct 13 05:47:15.561577 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:47:15.561592 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:47:15.566463 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:47:15.574319 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:47:15.610210 systemd-logind[1580]: New seat seat0. Oct 13 05:47:15.618854 systemd-logind[1580]: Watching system buttons on /dev/input/event3 (Power Button) Oct 13 05:47:15.618867 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:47:15.619516 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:47:15.665329 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:47:15.668061 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:47:15.672305 systemd[1]: Starting sshkeys.service... Oct 13 05:47:15.680644 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 05:47:15.683208 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:47:15.702666 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 13 05:47:15.710200 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 13 05:47:15.719087 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 13 05:47:15.747730 extend-filesystems[1615]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 13 05:47:15.747730 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 13 05:47:15.747730 extend-filesystems[1615]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 13 05:47:15.747473 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:47:15.767984 extend-filesystems[1565]: Resized filesystem in /dev/sda9 Oct 13 05:47:15.747615 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:47:15.795678 coreos-metadata[1648]: Oct 13 05:47:15.795 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 13 05:47:15.796507 coreos-metadata[1648]: Oct 13 05:47:15.796 INFO Fetch successful Oct 13 05:47:15.799774 unknown[1648]: wrote ssh authorized keys file for user: core Oct 13 05:47:15.826829 update-ssh-keys[1657]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:47:15.826811 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:47:15.827541 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Oct 13 05:47:15.835714 systemd[1]: Finished sshkeys.service. Oct 13 05:47:15.852445 containerd[1605]: time="2025-10-13T05:47:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:47:15.853880 containerd[1605]: time="2025-10-13T05:47:15.853729496Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:47:15.866005 containerd[1605]: time="2025-10-13T05:47:15.865959891Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.08µs" Oct 13 05:47:15.866005 containerd[1605]: time="2025-10-13T05:47:15.865995391Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:47:15.866005 containerd[1605]: time="2025-10-13T05:47:15.866009831Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:47:15.872219 containerd[1605]: time="2025-10-13T05:47:15.872179279Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:47:15.872219 containerd[1605]: time="2025-10-13T05:47:15.872200729Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:47:15.872298 containerd[1605]: time="2025-10-13T05:47:15.872226609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872298 containerd[1605]: time="2025-10-13T05:47:15.872269189Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872298 containerd[1605]: time="2025-10-13T05:47:15.872275659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872477 containerd[1605]: time="2025-10-13T05:47:15.872457128Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872477 containerd[1605]: time="2025-10-13T05:47:15.872472768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872506 containerd[1605]: time="2025-10-13T05:47:15.872481838Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872506 containerd[1605]: time="2025-10-13T05:47:15.872486998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872537 containerd[1605]: time="2025-10-13T05:47:15.872530748Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872664 containerd[1605]: time="2025-10-13T05:47:15.872648158Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872683 containerd[1605]: time="2025-10-13T05:47:15.872668268Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or 
directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:47:15.872683 containerd[1605]: time="2025-10-13T05:47:15.872675008Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:47:15.872708 containerd[1605]: time="2025-10-13T05:47:15.872694658Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:47:15.872849 containerd[1605]: time="2025-10-13T05:47:15.872833498Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:47:15.872970 containerd[1605]: time="2025-10-13T05:47:15.872871788Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:47:15.885373 containerd[1605]: time="2025-10-13T05:47:15.885342893Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885383003Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885393243Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885400653Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885409333Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885415933Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885425233Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885432053Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885438483Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885444813Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885450653Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885460383Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885536853Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885548083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:47:15.885555 containerd[1605]: time="2025-10-13T05:47:15.885557553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885567553Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885575073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885581603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885588623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885595273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885601703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885608003Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885614523Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885658993Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885667473Z" level=info msg="Start snapshots syncer" Oct 13 05:47:15.885731 containerd[1605]: time="2025-10-13T05:47:15.885684963Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:47:15.886366 containerd[1605]: time="2025-10-13T05:47:15.885839503Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:47:15.886366 containerd[1605]: time="2025-10-13T05:47:15.885872873Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.885917473Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.885976463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.885987883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.885995793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886003203Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886010713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886017203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886024113Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886037493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: 
time="2025-10-13T05:47:15.886043983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886056883Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886099833Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886108843Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:47:15.886655 containerd[1605]: time="2025-10-13T05:47:15.886115173Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886121263Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886126593Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886132903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886141183Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886151563Z" level=info msg="runtime interface created" Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886155763Z" level=info msg="created NRI interface" Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886168173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886177153Z" level=info msg="Connect containerd service" Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886190753Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:47:15.886812 containerd[1605]: time="2025-10-13T05:47:15.886639643Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:47:15.990154 containerd[1605]: time="2025-10-13T05:47:15.990122169Z" level=info msg="Start subscribing containerd event" Oct 13 05:47:15.990154 containerd[1605]: time="2025-10-13T05:47:15.990202319Z" level=info msg="Start recovering state" Oct 13 05:47:15.990943 containerd[1605]: time="2025-10-13T05:47:15.990302649Z" level=info msg="Start event monitor" Oct 13 05:47:15.990943 containerd[1605]: time="2025-10-13T05:47:15.990686309Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:47:15.990943 containerd[1605]: time="2025-10-13T05:47:15.990693709Z" level=info msg="Start streaming server" Oct 13 05:47:15.990943 containerd[1605]: time="2025-10-13T05:47:15.990708149Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:47:15.990943 containerd[1605]: 
time="2025-10-13T05:47:15.990719089Z" level=info msg="runtime interface starting up..." Oct 13 05:47:15.990943 containerd[1605]: time="2025-10-13T05:47:15.990724709Z" level=info msg="starting plugins..." Oct 13 05:47:15.991173 containerd[1605]: time="2025-10-13T05:47:15.990562419Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:47:15.991173 containerd[1605]: time="2025-10-13T05:47:15.991115549Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:47:15.992466 containerd[1605]: time="2025-10-13T05:47:15.992397008Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:47:15.992930 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:47:15.993159 containerd[1605]: time="2025-10-13T05:47:15.992958438Z" level=info msg="containerd successfully booted in 0.141069s" Oct 13 05:47:16.022865 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:47:16.037127 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:47:16.041355 tar[1589]: linux-amd64/README.md Oct 13 05:47:16.041988 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:47:16.051707 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:47:16.054735 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:47:16.054930 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:47:16.057308 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:47:16.073764 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:47:16.078975 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:47:16.081228 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:47:16.082638 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:47:17.219243 systemd-networkd[1480]: eth1: Gained IPv6LL Oct 13 05:47:17.219647 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Oct 13 05:47:17.221428 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:47:17.225507 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:47:17.231035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:17.236184 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:47:17.270153 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:47:17.283380 systemd-networkd[1480]: eth0: Gained IPv6LL Oct 13 05:47:17.283769 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Oct 13 05:47:18.004873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:18.005856 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:47:18.010297 systemd[1]: Startup finished in 2.474s (kernel) + 4.949s (initrd) + 4.494s (userspace) = 11.919s. 
Oct 13 05:47:18.015460 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:47:18.487023 kubelet[1712]: E1013 05:47:18.486874 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:47:18.490099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:47:18.490253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:47:18.490822 systemd[1]: kubelet.service: Consumed 764ms CPU time, 264.8M memory peak. Oct 13 05:47:18.809092 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:47:18.809859 systemd[1]: Started sshd@0-65.108.221.100:22-147.75.109.163:60924.service - OpenSSH per-connection server daemon (147.75.109.163:60924). Oct 13 05:47:19.829113 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 60924 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:19.830543 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:19.844268 systemd-logind[1580]: New session 1 of user core. Oct 13 05:47:19.845723 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:47:19.847346 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:47:19.864855 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:47:19.866222 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:47:19.875828 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:47:19.877396 systemd-logind[1580]: New session c1 of user core. Oct 13 05:47:19.995288 systemd[1730]: Queued start job for default target default.target. Oct 13 05:47:20.007211 systemd[1730]: Created slice app.slice - User Application Slice. Oct 13 05:47:20.007241 systemd[1730]: Reached target paths.target - Paths. Oct 13 05:47:20.007289 systemd[1730]: Reached target timers.target - Timers. Oct 13 05:47:20.008401 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:47:20.017717 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:47:20.017771 systemd[1730]: Reached target sockets.target - Sockets. Oct 13 05:47:20.017808 systemd[1730]: Reached target basic.target - Basic System. Oct 13 05:47:20.017832 systemd[1730]: Reached target default.target - Main User Target. Oct 13 05:47:20.017852 systemd[1730]: Startup finished in 136ms. Oct 13 05:47:20.017915 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:47:20.019367 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:47:20.754126 systemd[1]: Started sshd@1-65.108.221.100:22-147.75.109.163:60940.service - OpenSSH per-connection server daemon (147.75.109.163:60940). Oct 13 05:47:21.872909 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 60940 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:21.873994 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:21.877988 systemd-logind[1580]: New session 2 of user core. 
Oct 13 05:47:21.882179 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:47:22.634090 sshd[1744]: Connection closed by 147.75.109.163 port 60940 Oct 13 05:47:22.634527 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Oct 13 05:47:22.637039 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:47:22.637249 systemd[1]: sshd@1-65.108.221.100:22-147.75.109.163:60940.service: Deactivated successfully. Oct 13 05:47:22.638085 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:47:22.638990 systemd-logind[1580]: Removed session 2. Oct 13 05:47:22.824139 systemd[1]: Started sshd@2-65.108.221.100:22-147.75.109.163:54174.service - OpenSSH per-connection server daemon (147.75.109.163:54174). Oct 13 05:47:23.963547 sshd[1750]: Accepted publickey for core from 147.75.109.163 port 54174 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:23.965256 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:23.970642 systemd-logind[1580]: New session 3 of user core. Oct 13 05:47:23.979414 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:47:24.720056 sshd[1753]: Connection closed by 147.75.109.163 port 54174 Oct 13 05:47:24.720916 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Oct 13 05:47:24.726151 systemd[1]: sshd@2-65.108.221.100:22-147.75.109.163:54174.service: Deactivated successfully. Oct 13 05:47:24.728579 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:47:24.729734 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:47:24.732428 systemd-logind[1580]: Removed session 3. Oct 13 05:47:24.879895 systemd[1]: Started sshd@3-65.108.221.100:22-147.75.109.163:54182.service - OpenSSH per-connection server daemon (147.75.109.163:54182). Oct 13 05:47:25.921699 sshd[1759]: Accepted publickey for core from 147.75.109.163 port 54182 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:25.922751 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:25.929035 systemd-logind[1580]: New session 4 of user core. Oct 13 05:47:25.935293 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:47:26.624602 sshd[1762]: Connection closed by 147.75.109.163 port 54182 Oct 13 05:47:26.625605 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Oct 13 05:47:26.630272 systemd[1]: sshd@3-65.108.221.100:22-147.75.109.163:54182.service: Deactivated successfully. Oct 13 05:47:26.633637 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:47:26.634956 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:47:26.637191 systemd-logind[1580]: Removed session 4. Oct 13 05:47:26.844692 systemd[1]: Started sshd@4-65.108.221.100:22-147.75.109.163:54194.service - OpenSSH per-connection server daemon (147.75.109.163:54194). Oct 13 05:47:27.966800 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 54194 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:27.968686 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:27.975980 systemd-logind[1580]: New session 5 of user core. Oct 13 05:47:27.982288 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:47:28.566423 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:47:28.566739 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:47:28.568161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:47:28.571246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:28.584983 sudo[1772]: pam_unix(sudo:session): session closed for user root Oct 13 05:47:28.689402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:28.691564 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:47:28.732356 kubelet[1781]: E1013 05:47:28.732253 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:47:28.734845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:47:28.735041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:47:28.735758 systemd[1]: kubelet.service: Consumed 123ms CPU time, 108.8M memory peak. Oct 13 05:47:28.766570 sshd[1771]: Connection closed by 147.75.109.163 port 54194 Oct 13 05:47:28.767407 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Oct 13 05:47:28.770382 systemd[1]: sshd@4-65.108.221.100:22-147.75.109.163:54194.service: Deactivated successfully. Oct 13 05:47:28.772258 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:47:28.773513 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:47:28.775011 systemd-logind[1580]: Removed session 5. Oct 13 05:47:28.928354 systemd[1]: Started sshd@5-65.108.221.100:22-147.75.109.163:54202.service - OpenSSH per-connection server daemon (147.75.109.163:54202). Oct 13 05:47:29.964330 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 54202 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:29.965914 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:29.971430 systemd-logind[1580]: New session 6 of user core. Oct 13 05:47:29.976199 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:47:30.499303 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:47:30.499611 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:47:30.505581 sudo[1799]: pam_unix(sudo:session): session closed for user root Oct 13 05:47:30.512690 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:47:30.513044 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:47:30.526250 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:47:30.571884 augenrules[1821]: No rules Oct 13 05:47:30.572770 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:47:30.573009 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 13 05:47:30.574002 sudo[1798]: pam_unix(sudo:session): session closed for user root Oct 13 05:47:30.737087 sshd[1797]: Connection closed by 147.75.109.163 port 54202 Oct 13 05:47:30.737577 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Oct 13 05:47:30.740113 systemd[1]: sshd@5-65.108.221.100:22-147.75.109.163:54202.service: Deactivated successfully. Oct 13 05:47:30.741388 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:47:30.742587 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:47:30.743387 systemd-logind[1580]: Removed session 6. Oct 13 05:47:30.923963 systemd[1]: Started sshd@6-65.108.221.100:22-147.75.109.163:54204.service - OpenSSH per-connection server daemon (147.75.109.163:54204). Oct 13 05:47:31.942843 sshd[1830]: Accepted publickey for core from 147.75.109.163 port 54204 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:47:31.944055 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:47:31.949837 systemd-logind[1580]: New session 7 of user core. Oct 13 05:47:31.959241 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:47:32.479513 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:47:32.479994 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:47:32.866394 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:47:32.875254 (dockerd)[1852]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:47:33.089015 dockerd[1852]: time="2025-10-13T05:47:33.088760005Z" level=info msg="Starting up" Oct 13 05:47:33.089730 dockerd[1852]: time="2025-10-13T05:47:33.089701754Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:47:33.097740 dockerd[1852]: time="2025-10-13T05:47:33.097709751Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:47:33.110404 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1319770978-merged.mount: Deactivated successfully. Oct 13 05:47:33.126318 systemd[1]: var-lib-docker-metacopy\x2dcheck874128955-merged.mount: Deactivated successfully. Oct 13 05:47:33.138086 dockerd[1852]: time="2025-10-13T05:47:33.137937444Z" level=info msg="Loading containers: start." Oct 13 05:47:33.146092 kernel: Initializing XFRM netlink socket Oct 13 05:47:33.291571 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Oct 13 05:47:33.332108 systemd-networkd[1480]: docker0: Link UP Oct 13 05:47:33.340451 dockerd[1852]: time="2025-10-13T05:47:33.340390950Z" level=info msg="Loading containers: done." 
Oct 13 05:47:33.352936 dockerd[1852]: time="2025-10-13T05:47:33.352894145Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:47:33.353023 dockerd[1852]: time="2025-10-13T05:47:33.352980965Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:47:33.353111 dockerd[1852]: time="2025-10-13T05:47:33.353066615Z" level=info msg="Initializing buildkit" Oct 13 05:47:33.373989 dockerd[1852]: time="2025-10-13T05:47:33.373952956Z" level=info msg="Completed buildkit initialization" Oct 13 05:47:33.380770 dockerd[1852]: time="2025-10-13T05:47:33.380580933Z" level=info msg="Daemon has completed initialization" Oct 13 05:47:33.380770 dockerd[1852]: time="2025-10-13T05:47:33.380670403Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:47:33.381151 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:47:34.328871 systemd-timesyncd[1516]: Contacted time server 85.216.100.221:123 (2.flatcar.pool.ntp.org). Oct 13 05:47:34.329382 systemd-resolved[1481]: Clock change detected. Flushing caches. Oct 13 05:47:34.330468 systemd-timesyncd[1516]: Initial clock synchronization to Mon 2025-10-13 05:47:34.328497 UTC. Oct 13 05:47:35.346985 containerd[1605]: time="2025-10-13T05:47:35.346894834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 13 05:47:35.909408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371979973.mount: Deactivated successfully. Oct 13 05:47:37.019099 containerd[1605]: time="2025-10-13T05:47:37.019050818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:37.019864 containerd[1605]: time="2025-10-13T05:47:37.019770607Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016" Oct 13 05:47:37.020876 containerd[1605]: time="2025-10-13T05:47:37.020854687Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:37.022874 containerd[1605]: time="2025-10-13T05:47:37.022857796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:37.023654 containerd[1605]: time="2025-10-13T05:47:37.023346846Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.676401252s" Oct 13 05:47:37.023654 containerd[1605]: time="2025-10-13T05:47:37.023389256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 13 05:47:37.023880 containerd[1605]: time="2025-10-13T05:47:37.023867696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 13 05:47:38.245732 containerd[1605]: 
time="2025-10-13T05:47:38.245681467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:38.246896 containerd[1605]: time="2025-10-13T05:47:38.246669656Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049" Oct 13 05:47:38.247837 containerd[1605]: time="2025-10-13T05:47:38.247818126Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:38.249939 containerd[1605]: time="2025-10-13T05:47:38.249913185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:38.250692 containerd[1605]: time="2025-10-13T05:47:38.250666484Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.226730958s" Oct 13 05:47:38.250728 containerd[1605]: time="2025-10-13T05:47:38.250696754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 13 05:47:38.251358 containerd[1605]: time="2025-10-13T05:47:38.251332034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 13 05:47:39.317613 containerd[1605]: time="2025-10-13T05:47:39.317541510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:39.318765 containerd[1605]: time="2025-10-13T05:47:39.318615549Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311" Oct 13 05:47:39.319552 containerd[1605]: time="2025-10-13T05:47:39.319519859Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:39.321852 containerd[1605]: time="2025-10-13T05:47:39.321820328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:39.322597 containerd[1605]: time="2025-10-13T05:47:39.322565298Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.071210954s" Oct 13 05:47:39.322637 containerd[1605]: time="2025-10-13T05:47:39.322601228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 13 05:47:39.323263 containerd[1605]: 
time="2025-10-13T05:47:39.323170458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 13 05:47:39.826949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:47:39.829409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:39.937883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:39.947492 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:47:40.001746 kubelet[2135]: E1013 05:47:40.001681 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:47:40.003634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:47:40.003752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:47:40.004452 systemd[1]: kubelet.service: Consumed 134ms CPU time, 109M memory peak. Oct 13 05:47:40.296170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931881527.mount: Deactivated successfully. Oct 13 05:47:40.557950 containerd[1605]: time="2025-10-13T05:47:40.557823303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:40.559094 containerd[1605]: time="2025-10-13T05:47:40.558982323Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234" Oct 13 05:47:40.560148 containerd[1605]: time="2025-10-13T05:47:40.560125802Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:40.561529 containerd[1605]: time="2025-10-13T05:47:40.561505892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:40.561979 containerd[1605]: time="2025-10-13T05:47:40.561954181Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.238687513s" Oct 13 05:47:40.562029 containerd[1605]: time="2025-10-13T05:47:40.562020761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 13 05:47:40.562492 containerd[1605]: time="2025-10-13T05:47:40.562475891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 13 05:47:41.099866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885060468.mount: Deactivated successfully. 
Oct 13 05:47:42.304025 containerd[1605]: time="2025-10-13T05:47:42.303975776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:42.304950 containerd[1605]: time="2025-10-13T05:47:42.304789495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" Oct 13 05:47:42.305738 containerd[1605]: time="2025-10-13T05:47:42.305722175Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:42.307446 containerd[1605]: time="2025-10-13T05:47:42.307428154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:42.307956 containerd[1605]: time="2025-10-13T05:47:42.307939644Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.745398023s" Oct 13 05:47:42.308003 containerd[1605]: time="2025-10-13T05:47:42.307996374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 13 05:47:42.308428 containerd[1605]: time="2025-10-13T05:47:42.308401184Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 05:47:42.779299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636897814.mount: Deactivated successfully. 
Oct 13 05:47:42.788429 containerd[1605]: time="2025-10-13T05:47:42.788262004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:47:42.789582 containerd[1605]: time="2025-10-13T05:47:42.789233393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Oct 13 05:47:42.790872 containerd[1605]: time="2025-10-13T05:47:42.790807663Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:47:42.793769 containerd[1605]: time="2025-10-13T05:47:42.793720951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:47:42.794841 containerd[1605]: time="2025-10-13T05:47:42.794801111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.374067ms" Oct 13 05:47:42.795028 containerd[1605]: time="2025-10-13T05:47:42.794932301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:47:42.795962 containerd[1605]: time="2025-10-13T05:47:42.795462701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 13 05:47:43.379650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164801023.mount: Deactivated successfully. 
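The images pulled in this stretch (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.9, coredns v1.11.3, pause 3.10, and etcd 3.5.16-0, which finishes pulling next) are the control-plane image set that kubeadm pre-pulls for a v1.32 cluster. If these versions had to be pinned explicitly, a kubeadm ClusterConfiguration along the following lines would do it; this snippet is a hypothetical example assembled from the tags in the log, not a file read from the host.

# Hypothetical kubeadm ClusterConfiguration pinning the image versions seen in this log.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.32.9
imageRepository: registry.k8s.io
dns:
  imageTag: v1.11.3        # coredns/coredns tag pulled above
etcd:
  local:
    imageTag: 3.5.16-0     # etcd tag pulled below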
Oct 13 05:47:45.358928 containerd[1605]: time="2025-10-13T05:47:45.358882223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:45.359839 containerd[1605]: time="2025-10-13T05:47:45.359727232Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132" Oct 13 05:47:45.360815 containerd[1605]: time="2025-10-13T05:47:45.360795702Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:45.367348 containerd[1605]: time="2025-10-13T05:47:45.367317189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:47:45.368184 containerd[1605]: time="2025-10-13T05:47:45.368057779Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.572570118s" Oct 13 05:47:45.368184 containerd[1605]: time="2025-10-13T05:47:45.368097539Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 13 05:47:47.621123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:47.621366 systemd[1]: kubelet.service: Consumed 134ms CPU time, 109M memory peak. Oct 13 05:47:47.624348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:47.663727 systemd[1]: Reload requested from client PID 2287 ('systemctl') (unit session-7.scope)... Oct 13 05:47:47.663925 systemd[1]: Reloading... Oct 13 05:47:47.742107 zram_generator::config[2334]: No configuration found. Oct 13 05:47:47.867223 systemd[1]: Reloading finished in 202 ms. Oct 13 05:47:47.912117 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:47:47.912171 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:47:47.912358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:47.912504 systemd[1]: kubelet.service: Consumed 65ms CPU time, 98M memory peak. Oct 13 05:47:47.913967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:48.040684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:48.047269 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:47:48.075170 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:47:48.075599 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:47:48.075599 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:47:48.079087 kubelet[2385]: I1013 05:47:48.078207 2385 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:47:48.333763 kubelet[2385]: I1013 05:47:48.333307 2385 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 05:47:48.333763 kubelet[2385]: I1013 05:47:48.333329 2385 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:47:48.333763 kubelet[2385]: I1013 05:47:48.333511 2385 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 05:47:48.372540 kubelet[2385]: E1013 05:47:48.372479 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.108.221.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.108.221.100:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:47:48.373214 kubelet[2385]: I1013 05:47:48.373185 2385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:47:48.384186 kubelet[2385]: I1013 05:47:48.384132 2385 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:47:48.387697 kubelet[2385]: I1013 05:47:48.387667 2385 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:47:48.389006 kubelet[2385]: I1013 05:47:48.388947 2385 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:47:48.389161 kubelet[2385]: I1013 05:47:48.388972 2385 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-c-7af444862e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:47:48.390407 kubelet[2385]: 
I1013 05:47:48.390343 2385 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:47:48.390407 kubelet[2385]: I1013 05:47:48.390357 2385 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 05:47:48.391404 kubelet[2385]: I1013 05:47:48.391362 2385 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:47:48.394396 kubelet[2385]: I1013 05:47:48.394354 2385 kubelet.go:446] "Attempting to sync node with API server" Oct 13 05:47:48.394396 kubelet[2385]: I1013 05:47:48.394375 2385 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:47:48.394396 kubelet[2385]: I1013 05:47:48.394390 2385 kubelet.go:352] "Adding apiserver pod source" Oct 13 05:47:48.394396 kubelet[2385]: I1013 05:47:48.394398 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:47:48.404696 kubelet[2385]: W1013 05:47:48.404637 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.108.221.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.108.221.100:6443: connect: connection refused Oct 13 05:47:48.404899 kubelet[2385]: E1013 05:47:48.404877 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.108.221.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.221.100:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:47:48.405108 kubelet[2385]: W1013 05:47:48.405044 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.108.221.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-c-7af444862e&limit=500&resourceVersion=0": dial tcp 65.108.221.100:6443: connect: connection refused Oct 13 05:47:48.405216 kubelet[2385]: E1013 05:47:48.405198 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.108.221.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-c-7af444862e&limit=500&resourceVersion=0\": dial tcp 65.108.221.100:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:47:48.406237 kubelet[2385]: I1013 05:47:48.406205 2385 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:47:48.408636 kubelet[2385]: I1013 05:47:48.408620 2385 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 05:47:48.408694 kubelet[2385]: W1013 05:47:48.408662 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
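The long nodeConfig dump above is the kubelet's effective container-manager configuration, and the repeated "connection refused" errors against 65.108.221.100:6443 around this point are expected, since the kube-apiserver static pod is not serving yet. The HardEvictionThresholds in that dump are the kubelet defaults; expressed as a KubeletConfiguration evictionHard map they read as follows, restating only values already printed in the log.

# Equivalent evictionHard settings for the thresholds logged above (defaults, shown for readability).
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"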
Oct 13 05:47:48.409053 kubelet[2385]: I1013 05:47:48.409034 2385 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:47:48.409053 kubelet[2385]: I1013 05:47:48.409056 2385 server.go:1287] "Started kubelet" Oct 13 05:47:48.410090 kubelet[2385]: I1013 05:47:48.409794 2385 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:47:48.410814 kubelet[2385]: I1013 05:47:48.410394 2385 server.go:479] "Adding debug handlers to kubelet server" Oct 13 05:47:48.412703 kubelet[2385]: I1013 05:47:48.412693 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:47:48.415617 kubelet[2385]: I1013 05:47:48.415575 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:47:48.415730 kubelet[2385]: I1013 05:47:48.415717 2385 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:47:48.417970 kubelet[2385]: E1013 05:47:48.416561 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.108.221.100:6443/api/v1/namespaces/default/events\": dial tcp 65.108.221.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-c-7af444862e.186df6df5365ab1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-c-7af444862e,UID:ci-4459-1-0-c-7af444862e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-c-7af444862e,},FirstTimestamp:2025-10-13 05:47:48.409043742 +0000 UTC m=+0.359251592,LastTimestamp:2025-10-13 05:47:48.409043742 +0000 UTC m=+0.359251592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-c-7af444862e,}" Oct 13 05:47:48.418781 kubelet[2385]: I1013 05:47:48.418217 2385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:47:48.419624 kubelet[2385]: I1013 05:47:48.419611 2385 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:47:48.420442 kubelet[2385]: E1013 05:47:48.419778 2385 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-c-7af444862e\" not found" Oct 13 05:47:48.420442 kubelet[2385]: I1013 05:47:48.419833 2385 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:47:48.420442 kubelet[2385]: I1013 05:47:48.419861 2385 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:47:48.420442 kubelet[2385]: W1013 05:47:48.420136 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.108.221.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.108.221.100:6443: connect: connection refused Oct 13 05:47:48.420442 kubelet[2385]: E1013 05:47:48.420172 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.108.221.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.108.221.100:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:47:48.420442 kubelet[2385]: E1013 05:47:48.420230 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://65.108.221.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-7af444862e?timeout=10s\": dial tcp 65.108.221.100:6443: connect: connection refused" interval="200ms" Oct 13 05:47:48.428476 kubelet[2385]: I1013 05:47:48.428442 2385 factory.go:221] Registration of the systemd container factory successfully Oct 13 05:47:48.428584 kubelet[2385]: I1013 05:47:48.428531 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:47:48.430183 kubelet[2385]: E1013 05:47:48.430162 2385 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:47:48.432535 kubelet[2385]: I1013 05:47:48.432516 2385 factory.go:221] Registration of the containerd container factory successfully Oct 13 05:47:48.442103 kubelet[2385]: I1013 05:47:48.442049 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 05:47:48.443504 kubelet[2385]: I1013 05:47:48.443212 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 05:47:48.443504 kubelet[2385]: I1013 05:47:48.443231 2385 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 05:47:48.443504 kubelet[2385]: I1013 05:47:48.443249 2385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:47:48.443504 kubelet[2385]: I1013 05:47:48.443255 2385 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 05:47:48.443504 kubelet[2385]: E1013 05:47:48.443310 2385 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:47:48.448874 kubelet[2385]: W1013 05:47:48.448848 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.108.221.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.108.221.100:6443: connect: connection refused Oct 13 05:47:48.448942 kubelet[2385]: E1013 05:47:48.448888 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.108.221.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.221.100:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:47:48.449566 kubelet[2385]: I1013 05:47:48.449554 2385 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:47:48.449626 kubelet[2385]: I1013 05:47:48.449620 2385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:47:48.449723 kubelet[2385]: I1013 05:47:48.449716 2385 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:47:48.451537 kubelet[2385]: I1013 05:47:48.451528 2385 policy_none.go:49] "None policy: Start" Oct 13 05:47:48.451586 kubelet[2385]: I1013 05:47:48.451581 2385 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:47:48.451617 kubelet[2385]: I1013 05:47:48.451613 2385 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:47:48.455762 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 13 05:47:48.466919 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:47:48.469431 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:47:48.479850 kubelet[2385]: I1013 05:47:48.479217 2385 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 05:47:48.479850 kubelet[2385]: I1013 05:47:48.479440 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:47:48.479850 kubelet[2385]: I1013 05:47:48.479452 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:47:48.481683 kubelet[2385]: E1013 05:47:48.481650 2385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:47:48.481835 kubelet[2385]: E1013 05:47:48.481829 2385 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-0-c-7af444862e\" not found" Oct 13 05:47:48.481969 kubelet[2385]: I1013 05:47:48.481877 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:47:48.558339 systemd[1]: Created slice kubepods-burstable-podf6678e7e2584f996563ab0ae51547246.slice - libcontainer container kubepods-burstable-podf6678e7e2584f996563ab0ae51547246.slice. Oct 13 05:47:48.576702 kubelet[2385]: E1013 05:47:48.576608 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.582852 kubelet[2385]: I1013 05:47:48.582785 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.584534 kubelet[2385]: E1013 05:47:48.584201 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.221.100:6443/api/v1/nodes\": dial tcp 65.108.221.100:6443: connect: connection refused" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.586530 systemd[1]: Created slice kubepods-burstable-pod814cc7080cab0ecf7271f88b8d6ce053.slice - libcontainer container kubepods-burstable-pod814cc7080cab0ecf7271f88b8d6ce053.slice. Oct 13 05:47:48.598871 kubelet[2385]: E1013 05:47:48.598822 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.602731 systemd[1]: Created slice kubepods-burstable-pod2d0045259c8c18ddb7d803b33e96c3cd.slice - libcontainer container kubepods-burstable-pod2d0045259c8c18ddb7d803b33e96c3cd.slice. 
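The kubepods-burstable-pod<hash>.slice units created above are the cgroups for the three control-plane static pods; the hash in each name is the pod UID, which the kubelet derives from the manifest under /etc/kubernetes/manifests. A skeleton of such a manifest is sketched below for the kube-apiserver pod, using the image tag pulled earlier and the hostPath volume names the reconciler attaches just after this point (ca-certs, k8s-certs, usr-share-ca-certificates); the mount and host paths themselves are illustrative assumptions, since the log does not print them.

# Skeleton of a kubeadm-style static pod manifest (illustrative; not copied from this host).
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  priorityClassName: system-node-critical     # the class the mirror-pod errors below complain about
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.32.9
    volumeMounts:
    - name: ca-certs
      mountPath: /etc/ssl/certs               # assumed path
      readOnly: true
    - name: k8s-certs
      mountPath: /etc/kubernetes/pki          # assumed path
      readOnly: true
    - name: usr-share-ca-certificates
      mountPath: /usr/share/ca-certificates   # assumed path
      readOnly: true
  volumes:
  - name: ca-certs
    hostPath: {path: /etc/ssl/certs, type: DirectoryOrCreate}
  - name: k8s-certs
    hostPath: {path: /etc/kubernetes/pki, type: DirectoryOrCreate}
  - name: usr-share-ca-certificates
    hostPath: {path: /usr/share/ca-certificates, type: DirectoryOrCreate}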
Oct 13 05:47:48.606007 kubelet[2385]: E1013 05:47:48.605948 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.620977 kubelet[2385]: E1013 05:47:48.620925 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.221.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-7af444862e?timeout=10s\": dial tcp 65.108.221.100:6443: connect: connection refused" interval="400ms" Oct 13 05:47:48.720577 kubelet[2385]: I1013 05:47:48.720495 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6678e7e2584f996563ab0ae51547246-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-c-7af444862e\" (UID: \"f6678e7e2584f996563ab0ae51547246\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.720577 kubelet[2385]: I1013 05:47:48.720569 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6678e7e2584f996563ab0ae51547246-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-c-7af444862e\" (UID: \"f6678e7e2584f996563ab0ae51547246\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.720842 kubelet[2385]: I1013 05:47:48.720607 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6678e7e2584f996563ab0ae51547246-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-c-7af444862e\" (UID: \"f6678e7e2584f996563ab0ae51547246\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.720842 kubelet[2385]: I1013 05:47:48.720630 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.720842 kubelet[2385]: I1013 05:47:48.720651 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.720842 kubelet[2385]: I1013 05:47:48.720704 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.720842 kubelet[2385]: I1013 05:47:48.720727 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d0045259c8c18ddb7d803b33e96c3cd-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-c-7af444862e\" (UID: \"2d0045259c8c18ddb7d803b33e96c3cd\") " 
pod="kube-system/kube-scheduler-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.721007 kubelet[2385]: I1013 05:47:48.720764 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.721007 kubelet[2385]: I1013 05:47:48.720787 2385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.787110 kubelet[2385]: I1013 05:47:48.786791 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.787504 kubelet[2385]: E1013 05:47:48.787425 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.221.100:6443/api/v1/nodes\": dial tcp 65.108.221.100:6443: connect: connection refused" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:48.879934 containerd[1605]: time="2025-10-13T05:47:48.879789856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-c-7af444862e,Uid:f6678e7e2584f996563ab0ae51547246,Namespace:kube-system,Attempt:0,}" Oct 13 05:47:48.900564 containerd[1605]: time="2025-10-13T05:47:48.900348157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-c-7af444862e,Uid:814cc7080cab0ecf7271f88b8d6ce053,Namespace:kube-system,Attempt:0,}" Oct 13 05:47:48.912004 containerd[1605]: time="2025-10-13T05:47:48.911942252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-c-7af444862e,Uid:2d0045259c8c18ddb7d803b33e96c3cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:47:49.007378 containerd[1605]: time="2025-10-13T05:47:49.006390693Z" level=info msg="connecting to shim 3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee" address="unix:///run/containerd/s/50a224a316bd19690934ca8e2697fde4a5e359290a52b7120e8afa6a017c8332" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:47:49.009150 containerd[1605]: time="2025-10-13T05:47:49.006518673Z" level=info msg="connecting to shim 6653de33ff19b6c131838cd666e31199b511f96f84923a350004bd4ec8f2f835" address="unix:///run/containerd/s/3be7e827506c4dddb25c048be8aa93ea047c7dfcebc715249834246dea245df1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:47:49.015672 containerd[1605]: time="2025-10-13T05:47:49.015648759Z" level=info msg="connecting to shim 35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11" address="unix:///run/containerd/s/eee079ae0adfb975cd1419b6b9317db904ea26e10692549ad0dc28198b4b4de9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:47:49.022859 kubelet[2385]: E1013 05:47:49.022831 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.221.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-7af444862e?timeout=10s\": dial tcp 65.108.221.100:6443: connect: connection refused" interval="800ms" Oct 13 05:47:49.098187 systemd[1]: Started cri-containerd-35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11.scope - 
libcontainer container 35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11. Oct 13 05:47:49.101852 systemd[1]: Started cri-containerd-3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee.scope - libcontainer container 3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee. Oct 13 05:47:49.103742 systemd[1]: Started cri-containerd-6653de33ff19b6c131838cd666e31199b511f96f84923a350004bd4ec8f2f835.scope - libcontainer container 6653de33ff19b6c131838cd666e31199b511f96f84923a350004bd4ec8f2f835. Oct 13 05:47:49.147829 containerd[1605]: time="2025-10-13T05:47:49.147738264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-c-7af444862e,Uid:f6678e7e2584f996563ab0ae51547246,Namespace:kube-system,Attempt:0,} returns sandbox id \"6653de33ff19b6c131838cd666e31199b511f96f84923a350004bd4ec8f2f835\"" Oct 13 05:47:49.148473 containerd[1605]: time="2025-10-13T05:47:49.148452984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-c-7af444862e,Uid:2d0045259c8c18ddb7d803b33e96c3cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee\"" Oct 13 05:47:49.151704 containerd[1605]: time="2025-10-13T05:47:49.151690762Z" level=info msg="CreateContainer within sandbox \"6653de33ff19b6c131838cd666e31199b511f96f84923a350004bd4ec8f2f835\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:47:49.152046 containerd[1605]: time="2025-10-13T05:47:49.151902942Z" level=info msg="CreateContainer within sandbox \"3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:47:49.160647 containerd[1605]: time="2025-10-13T05:47:49.160493129Z" level=info msg="Container 2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:47:49.164920 containerd[1605]: time="2025-10-13T05:47:49.164907327Z" level=info msg="Container a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:47:49.168939 containerd[1605]: time="2025-10-13T05:47:49.168925675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-c-7af444862e,Uid:814cc7080cab0ecf7271f88b8d6ce053,Namespace:kube-system,Attempt:0,} returns sandbox id \"35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11\"" Oct 13 05:47:49.170549 containerd[1605]: time="2025-10-13T05:47:49.170526024Z" level=info msg="CreateContainer within sandbox \"35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:47:49.174552 containerd[1605]: time="2025-10-13T05:47:49.174532003Z" level=info msg="CreateContainer within sandbox \"6653de33ff19b6c131838cd666e31199b511f96f84923a350004bd4ec8f2f835\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf\"" Oct 13 05:47:49.175558 containerd[1605]: time="2025-10-13T05:47:49.175386592Z" level=info msg="CreateContainer within sandbox \"3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\"" Oct 13 05:47:49.175678 containerd[1605]: time="2025-10-13T05:47:49.175650892Z" level=info 
msg="StartContainer for \"a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf\"" Oct 13 05:47:49.176090 containerd[1605]: time="2025-10-13T05:47:49.175868732Z" level=info msg="StartContainer for \"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\"" Oct 13 05:47:49.176435 containerd[1605]: time="2025-10-13T05:47:49.176413992Z" level=info msg="connecting to shim a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf" address="unix:///run/containerd/s/3be7e827506c4dddb25c048be8aa93ea047c7dfcebc715249834246dea245df1" protocol=ttrpc version=3 Oct 13 05:47:49.177115 containerd[1605]: time="2025-10-13T05:47:49.177091502Z" level=info msg="connecting to shim 2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3" address="unix:///run/containerd/s/50a224a316bd19690934ca8e2697fde4a5e359290a52b7120e8afa6a017c8332" protocol=ttrpc version=3 Oct 13 05:47:49.181575 containerd[1605]: time="2025-10-13T05:47:49.181548830Z" level=info msg="Container 41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:47:49.189485 kubelet[2385]: I1013 05:47:49.189470 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:49.190522 kubelet[2385]: E1013 05:47:49.190389 2385 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.221.100:6443/api/v1/nodes\": dial tcp 65.108.221.100:6443: connect: connection refused" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:49.192287 containerd[1605]: time="2025-10-13T05:47:49.192247385Z" level=info msg="CreateContainer within sandbox \"35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\"" Oct 13 05:47:49.193084 containerd[1605]: time="2025-10-13T05:47:49.192583685Z" level=info msg="StartContainer for \"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\"" Oct 13 05:47:49.193313 containerd[1605]: time="2025-10-13T05:47:49.193291795Z" level=info msg="connecting to shim 41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727" address="unix:///run/containerd/s/eee079ae0adfb975cd1419b6b9317db904ea26e10692549ad0dc28198b4b4de9" protocol=ttrpc version=3 Oct 13 05:47:49.198278 systemd[1]: Started cri-containerd-2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3.scope - libcontainer container 2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3. Oct 13 05:47:49.199326 systemd[1]: Started cri-containerd-a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf.scope - libcontainer container a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf. Oct 13 05:47:49.207418 systemd[1]: Started cri-containerd-41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727.scope - libcontainer container 41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727. 
Oct 13 05:47:49.247682 containerd[1605]: time="2025-10-13T05:47:49.247645902Z" level=info msg="StartContainer for \"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\" returns successfully" Oct 13 05:47:49.276277 containerd[1605]: time="2025-10-13T05:47:49.275920311Z" level=info msg="StartContainer for \"a0b611a13def1bbda95ad7140a0c762c4f88becb8a4a6a1d80b4f633db1237bf\" returns successfully" Oct 13 05:47:49.277284 containerd[1605]: time="2025-10-13T05:47:49.277248780Z" level=info msg="StartContainer for \"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\" returns successfully" Oct 13 05:47:49.292090 kubelet[2385]: W1013 05:47:49.291989 2385 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.108.221.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.108.221.100:6443: connect: connection refused Oct 13 05:47:49.292295 kubelet[2385]: E1013 05:47:49.292256 2385 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.108.221.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.108.221.100:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:47:49.456380 kubelet[2385]: E1013 05:47:49.456018 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:49.459013 kubelet[2385]: E1013 05:47:49.458988 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:49.460417 kubelet[2385]: E1013 05:47:49.460402 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:49.992689 kubelet[2385]: I1013 05:47:49.992465 2385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.471890 kubelet[2385]: E1013 05:47:50.471410 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.471890 kubelet[2385]: E1013 05:47:50.471810 2385 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.602875 kubelet[2385]: E1013 05:47:50.602832 2385 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-1-0-c-7af444862e\" not found" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.662953 kubelet[2385]: I1013 05:47:50.662762 2385 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.721155 kubelet[2385]: I1013 05:47:50.721121 2385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.727968 kubelet[2385]: E1013 05:47:50.727491 2385 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.727968 kubelet[2385]: I1013 05:47:50.727529 2385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.729454 kubelet[2385]: E1013 05:47:50.729341 2385 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-c-7af444862e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.729454 kubelet[2385]: I1013 05:47:50.729352 2385 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:50.730153 kubelet[2385]: E1013 05:47:50.730132 2385 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-c-7af444862e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:51.397996 kubelet[2385]: I1013 05:47:51.397952 2385 apiserver.go:52] "Watching apiserver" Oct 13 05:47:51.420807 kubelet[2385]: I1013 05:47:51.420748 2385 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:47:53.024402 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-7.scope)... Oct 13 05:47:53.024425 systemd[1]: Reloading... Oct 13 05:47:53.110101 zram_generator::config[2690]: No configuration found. Oct 13 05:47:53.262383 systemd[1]: Reloading finished in 237 ms. Oct 13 05:47:53.279290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:53.295963 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:47:53.296134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:53.296172 systemd[1]: kubelet.service: Consumed 648ms CPU time, 125.4M memory peak. Oct 13 05:47:53.297493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:47:53.446596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:47:53.453467 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:47:53.482710 kubelet[2744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:47:53.483118 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:47:53.483118 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:47:53.483981 kubelet[2744]: I1013 05:47:53.483952 2744 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:47:53.490241 kubelet[2744]: I1013 05:47:53.490225 2744 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 05:47:53.490337 kubelet[2744]: I1013 05:47:53.490331 2744 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:47:53.490646 kubelet[2744]: I1013 05:47:53.490634 2744 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 05:47:53.491527 kubelet[2744]: I1013 05:47:53.491518 2744 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 13 05:47:53.492955 kubelet[2744]: I1013 05:47:53.492946 2744 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:47:53.495597 kubelet[2744]: I1013 05:47:53.495549 2744 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:47:53.497825 kubelet[2744]: I1013 05:47:53.497816 2744 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:47:53.498054 kubelet[2744]: I1013 05:47:53.498028 2744 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:47:53.498296 kubelet[2744]: I1013 05:47:53.498108 2744 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-c-7af444862e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:47:53.498409 kubelet[2744]: I1013 05:47:53.498400 2744 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:47:53.498453 kubelet[2744]: I1013 05:47:53.498448 2744 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 05:47:53.498504 kubelet[2744]: I1013 05:47:53.498501 2744 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:47:53.498630 
kubelet[2744]: I1013 05:47:53.498625 2744 kubelet.go:446] "Attempting to sync node with API server" Oct 13 05:47:53.498678 kubelet[2744]: I1013 05:47:53.498664 2744 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:47:53.499185 kubelet[2744]: I1013 05:47:53.499178 2744 kubelet.go:352] "Adding apiserver pod source" Oct 13 05:47:53.504000 kubelet[2744]: I1013 05:47:53.503990 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:47:53.510744 kubelet[2744]: I1013 05:47:53.510728 2744 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:47:53.512089 kubelet[2744]: I1013 05:47:53.511712 2744 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 05:47:53.512089 kubelet[2744]: I1013 05:47:53.512055 2744 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:47:53.515043 kubelet[2744]: I1013 05:47:53.515033 2744 server.go:1287] "Started kubelet" Oct 13 05:47:53.518473 kubelet[2744]: I1013 05:47:53.518459 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:47:53.520203 kubelet[2744]: I1013 05:47:53.520185 2744 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:47:53.521352 kubelet[2744]: I1013 05:47:53.521341 2744 server.go:479] "Adding debug handlers to kubelet server" Oct 13 05:47:53.522316 kubelet[2744]: I1013 05:47:53.522281 2744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:47:53.522613 kubelet[2744]: I1013 05:47:53.522463 2744 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:47:53.522753 kubelet[2744]: I1013 05:47:53.522742 2744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:47:53.524613 kubelet[2744]: I1013 05:47:53.524339 2744 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:47:53.524842 kubelet[2744]: I1013 05:47:53.524835 2744 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:47:53.524901 kubelet[2744]: I1013 05:47:53.524897 2744 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:47:53.526730 kubelet[2744]: I1013 05:47:53.526704 2744 factory.go:221] Registration of the systemd container factory successfully Oct 13 05:47:53.527043 kubelet[2744]: I1013 05:47:53.527031 2744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:47:53.527885 kubelet[2744]: E1013 05:47:53.527875 2744 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:47:53.528883 kubelet[2744]: I1013 05:47:53.528874 2744 factory.go:221] Registration of the containerd container factory successfully Oct 13 05:47:53.531214 kubelet[2744]: I1013 05:47:53.531160 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 05:47:53.532456 kubelet[2744]: I1013 05:47:53.532446 2744 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:47:53.532517 kubelet[2744]: I1013 05:47:53.532512 2744 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 05:47:53.532557 kubelet[2744]: I1013 05:47:53.532552 2744 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:47:53.532579 kubelet[2744]: I1013 05:47:53.532576 2744 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 05:47:53.532626 kubelet[2744]: E1013 05:47:53.532618 2744 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:47:53.574664 kubelet[2744]: I1013 05:47:53.574648 2744 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:47:53.574831 kubelet[2744]: I1013 05:47:53.574825 2744 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:47:53.574882 kubelet[2744]: I1013 05:47:53.574878 2744 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:47:53.575010 kubelet[2744]: I1013 05:47:53.575002 2744 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:47:53.575061 kubelet[2744]: I1013 05:47:53.575038 2744 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:47:53.575155 kubelet[2744]: I1013 05:47:53.575146 2744 policy_none.go:49] "None policy: Start" Oct 13 05:47:53.575194 kubelet[2744]: I1013 05:47:53.575189 2744 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:47:53.575238 kubelet[2744]: I1013 05:47:53.575234 2744 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:47:53.575395 kubelet[2744]: I1013 05:47:53.575383 2744 state_mem.go:75] "Updated machine memory state" Oct 13 05:47:53.579179 kubelet[2744]: I1013 05:47:53.579167 2744 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 05:47:53.579370 kubelet[2744]: I1013 05:47:53.579362 2744 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:47:53.579511 kubelet[2744]: I1013 05:47:53.579490 2744 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:47:53.579757 kubelet[2744]: I1013 05:47:53.579748 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:47:53.582696 kubelet[2744]: E1013 05:47:53.582677 2744 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:47:53.634198 kubelet[2744]: I1013 05:47:53.634166 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.635233 kubelet[2744]: I1013 05:47:53.635222 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.635460 kubelet[2744]: I1013 05:47:53.635327 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.689110 kubelet[2744]: I1013 05:47:53.688980 2744 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.700461 kubelet[2744]: I1013 05:47:53.700409 2744 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.700611 kubelet[2744]: I1013 05:47:53.700511 2744 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.826994 kubelet[2744]: I1013 05:47:53.826841 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.826994 kubelet[2744]: I1013 05:47:53.826888 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d0045259c8c18ddb7d803b33e96c3cd-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-c-7af444862e\" (UID: \"2d0045259c8c18ddb7d803b33e96c3cd\") " pod="kube-system/kube-scheduler-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.826994 kubelet[2744]: I1013 05:47:53.826915 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6678e7e2584f996563ab0ae51547246-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-c-7af444862e\" (UID: \"f6678e7e2584f996563ab0ae51547246\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.826994 kubelet[2744]: I1013 05:47:53.826938 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6678e7e2584f996563ab0ae51547246-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-c-7af444862e\" (UID: \"f6678e7e2584f996563ab0ae51547246\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.826994 kubelet[2744]: I1013 05:47:53.826960 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.827330 kubelet[2744]: I1013 05:47:53.826980 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") 
" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.827330 kubelet[2744]: I1013 05:47:53.826998 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6678e7e2584f996563ab0ae51547246-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-c-7af444862e\" (UID: \"f6678e7e2584f996563ab0ae51547246\") " pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.827330 kubelet[2744]: I1013 05:47:53.827019 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:53.827330 kubelet[2744]: I1013 05:47:53.827040 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/814cc7080cab0ecf7271f88b8d6ce053-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-c-7af444862e\" (UID: \"814cc7080cab0ecf7271f88b8d6ce053\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" Oct 13 05:47:54.072669 systemd[1]: Started sshd@7-65.108.221.100:22-121.41.236.216:49978.service - OpenSSH per-connection server daemon (121.41.236.216:49978). Oct 13 05:47:54.506029 kubelet[2744]: I1013 05:47:54.505972 2744 apiserver.go:52] "Watching apiserver" Oct 13 05:47:54.525890 kubelet[2744]: I1013 05:47:54.525844 2744 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:47:54.565495 kubelet[2744]: I1013 05:47:54.564605 2744 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:54.580665 kubelet[2744]: E1013 05:47:54.580616 2744 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-c-7af444862e\" already exists" pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" Oct 13 05:47:54.607577 kubelet[2744]: I1013 05:47:54.607525 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-1-0-c-7af444862e" podStartSLOduration=1.6075117589999999 podStartE2EDuration="1.607511759s" podCreationTimestamp="2025-10-13 05:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:47:54.598913103 +0000 UTC m=+1.141816936" watchObservedRunningTime="2025-10-13 05:47:54.607511759 +0000 UTC m=+1.150415602" Oct 13 05:47:54.617541 kubelet[2744]: I1013 05:47:54.617510 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-1-0-c-7af444862e" podStartSLOduration=1.617499295 podStartE2EDuration="1.617499295s" podCreationTimestamp="2025-10-13 05:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:47:54.607869619 +0000 UTC m=+1.150773452" watchObservedRunningTime="2025-10-13 05:47:54.617499295 +0000 UTC m=+1.160403138" Oct 13 05:47:54.627622 kubelet[2744]: I1013 05:47:54.627540 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" podStartSLOduration=1.627526151 podStartE2EDuration="1.627526151s" podCreationTimestamp="2025-10-13 05:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:47:54.617619365 +0000 UTC m=+1.160523198" watchObservedRunningTime="2025-10-13 05:47:54.627526151 +0000 UTC m=+1.170429984" Oct 13 05:47:54.963055 sshd[2781]: Invalid user from 121.41.236.216 port 49978 Oct 13 05:47:57.647936 kubelet[2744]: I1013 05:47:57.647836 2744 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:47:57.648797 containerd[1605]: time="2025-10-13T05:47:57.648311072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:47:57.649281 kubelet[2744]: I1013 05:47:57.649229 2744 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:47:58.280576 systemd[1]: Created slice kubepods-besteffort-pod487107ab_bf60_4020_96d1_8eb86052c28f.slice - libcontainer container kubepods-besteffort-pod487107ab_bf60_4020_96d1_8eb86052c28f.slice. Oct 13 05:47:58.356153 kubelet[2744]: I1013 05:47:58.356038 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/487107ab-bf60-4020-96d1-8eb86052c28f-kube-proxy\") pod \"kube-proxy-skqms\" (UID: \"487107ab-bf60-4020-96d1-8eb86052c28f\") " pod="kube-system/kube-proxy-skqms" Oct 13 05:47:58.356153 kubelet[2744]: I1013 05:47:58.356133 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/487107ab-bf60-4020-96d1-8eb86052c28f-xtables-lock\") pod \"kube-proxy-skqms\" (UID: \"487107ab-bf60-4020-96d1-8eb86052c28f\") " pod="kube-system/kube-proxy-skqms" Oct 13 05:47:58.356153 kubelet[2744]: I1013 05:47:58.356159 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/487107ab-bf60-4020-96d1-8eb86052c28f-lib-modules\") pod \"kube-proxy-skqms\" (UID: \"487107ab-bf60-4020-96d1-8eb86052c28f\") " pod="kube-system/kube-proxy-skqms" Oct 13 05:47:58.356488 kubelet[2744]: I1013 05:47:58.356182 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48jl\" (UniqueName: \"kubernetes.io/projected/487107ab-bf60-4020-96d1-8eb86052c28f-kube-api-access-l48jl\") pod \"kube-proxy-skqms\" (UID: \"487107ab-bf60-4020-96d1-8eb86052c28f\") " pod="kube-system/kube-proxy-skqms" Oct 13 05:47:58.473150 kubelet[2744]: E1013 05:47:58.473113 2744 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:47:58.473150 kubelet[2744]: E1013 05:47:58.473150 2744 projected.go:194] Error preparing data for projected volume kube-api-access-l48jl for pod kube-system/kube-proxy-skqms: configmap "kube-root-ca.crt" not found Oct 13 05:47:58.473308 kubelet[2744]: E1013 05:47:58.473218 2744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/487107ab-bf60-4020-96d1-8eb86052c28f-kube-api-access-l48jl podName:487107ab-bf60-4020-96d1-8eb86052c28f nodeName:}" failed. No retries permitted until 2025-10-13 05:47:58.973195478 +0000 UTC m=+5.516099341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l48jl" (UniqueName: "kubernetes.io/projected/487107ab-bf60-4020-96d1-8eb86052c28f-kube-api-access-l48jl") pod "kube-proxy-skqms" (UID: "487107ab-bf60-4020-96d1-8eb86052c28f") : configmap "kube-root-ca.crt" not found Oct 13 05:47:58.735153 systemd[1]: Created slice kubepods-besteffort-podbda66ed4_dddd_4f76_bc7d_65e7b18a5dd6.slice - libcontainer container kubepods-besteffort-podbda66ed4_dddd_4f76_bc7d_65e7b18a5dd6.slice. Oct 13 05:47:58.759848 kubelet[2744]: I1013 05:47:58.759796 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9mv\" (UniqueName: \"kubernetes.io/projected/bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6-kube-api-access-4t9mv\") pod \"tigera-operator-755d956888-jp2rf\" (UID: \"bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6\") " pod="tigera-operator/tigera-operator-755d956888-jp2rf" Oct 13 05:47:58.760445 kubelet[2744]: I1013 05:47:58.760349 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6-var-lib-calico\") pod \"tigera-operator-755d956888-jp2rf\" (UID: \"bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6\") " pod="tigera-operator/tigera-operator-755d956888-jp2rf" Oct 13 05:47:59.041007 containerd[1605]: time="2025-10-13T05:47:59.040872432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jp2rf,Uid:bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:47:59.062894 containerd[1605]: time="2025-10-13T05:47:59.061389733Z" level=info msg="connecting to shim 3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0" address="unix:///run/containerd/s/c1f960cdfa871ee5ddfc6018ed166fd6c3379dbe44b4b37e1b1ab899619e54fa" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:47:59.112301 systemd[1]: Started cri-containerd-3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0.scope - libcontainer container 3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0. Oct 13 05:47:59.148881 containerd[1605]: time="2025-10-13T05:47:59.148803687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jp2rf,Uid:bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0\"" Oct 13 05:47:59.150521 containerd[1605]: time="2025-10-13T05:47:59.150482856Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:47:59.195268 containerd[1605]: time="2025-10-13T05:47:59.195215007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skqms,Uid:487107ab-bf60-4020-96d1-8eb86052c28f,Namespace:kube-system,Attempt:0,}" Oct 13 05:47:59.227595 containerd[1605]: time="2025-10-13T05:47:59.227457244Z" level=info msg="connecting to shim accde835c3a567141c874d980f45b2cf829356f96b36626d4f4d596f1f97f3d8" address="unix:///run/containerd/s/f60088a025e385bd5fee1b4325a664e434fc36be91a960687c2aa1160964388d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:47:59.262329 systemd[1]: Started cri-containerd-accde835c3a567141c874d980f45b2cf829356f96b36626d4f4d596f1f97f3d8.scope - libcontainer container accde835c3a567141c874d980f45b2cf829356f96b36626d4f4d596f1f97f3d8. 
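The MountVolume.SetUp failure recorded just above is the kubelet waiting on the namespace's root-CA ConfigMap: a kube-api-access-* volume is a projected volume that combines the bound service-account token, the kube-root-ca.crt ConfigMap, and the pod namespace from the downward API, and the controller-manager's root-CA publisher had presumably not yet written that ConfigMap into kube-system, hence the 500ms retry. A minimal sketch of such a projection, assuming the k8s.io/api/core/v1 types (field values illustrative, not read from this node's pod spec):

```go
// Sketch only: the approximate shape of a kube-api-access-* projected volume,
// built with the k8s.io/api/core/v1 types. Values are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // typical kubelet-managed token lifetime; assumption

	vol := corev1.Volume{
		Name: "kube-api-access-l48jl",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// 1. bound service-account token
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// 2. cluster root CA -- the ConfigMap the kubelet could not
					//    find above ("kube-root-ca.crt" not found)
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// 3. pod namespace via the downward API
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```

Once the ConfigMap exists the scheduled retry (05:47:58.973) can succeed, which is consistent with the kube-proxy sandbox being created at 05:47:59 in the entries around here.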
Oct 13 05:47:59.281658 containerd[1605]: time="2025-10-13T05:47:59.281624951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skqms,Uid:487107ab-bf60-4020-96d1-8eb86052c28f,Namespace:kube-system,Attempt:0,} returns sandbox id \"accde835c3a567141c874d980f45b2cf829356f96b36626d4f4d596f1f97f3d8\"" Oct 13 05:47:59.284178 containerd[1605]: time="2025-10-13T05:47:59.284153760Z" level=info msg="CreateContainer within sandbox \"accde835c3a567141c874d980f45b2cf829356f96b36626d4f4d596f1f97f3d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:47:59.291933 containerd[1605]: time="2025-10-13T05:47:59.291854207Z" level=info msg="Container 384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:47:59.297962 containerd[1605]: time="2025-10-13T05:47:59.297924585Z" level=info msg="CreateContainer within sandbox \"accde835c3a567141c874d980f45b2cf829356f96b36626d4f4d596f1f97f3d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd\"" Oct 13 05:47:59.299497 containerd[1605]: time="2025-10-13T05:47:59.299473324Z" level=info msg="StartContainer for \"384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd\"" Oct 13 05:47:59.300452 containerd[1605]: time="2025-10-13T05:47:59.300423914Z" level=info msg="connecting to shim 384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd" address="unix:///run/containerd/s/f60088a025e385bd5fee1b4325a664e434fc36be91a960687c2aa1160964388d" protocol=ttrpc version=3 Oct 13 05:47:59.312265 systemd[1]: Started cri-containerd-384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd.scope - libcontainer container 384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd. Oct 13 05:47:59.341618 containerd[1605]: time="2025-10-13T05:47:59.341565886Z" level=info msg="StartContainer for \"384a2fa35e7020a94d2940e6c282dfd447ba7da7597ae95ae3e4895a0592c6cd\" returns successfully" Oct 13 05:48:01.934819 update_engine[1581]: I20251013 05:48:01.934720 1581 update_attempter.cc:509] Updating boot flags... Oct 13 05:48:02.054384 sshd[2781]: Connection closed by invalid user 121.41.236.216 port 49978 [preauth] Oct 13 05:48:02.055659 systemd[1]: sshd@7-65.108.221.100:22-121.41.236.216:49978.service: Deactivated successfully. Oct 13 05:48:02.661844 kubelet[2744]: I1013 05:48:02.661329 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-skqms" podStartSLOduration=4.661309253 podStartE2EDuration="4.661309253s" podCreationTimestamp="2025-10-13 05:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:47:59.602951638 +0000 UTC m=+6.145855511" watchObservedRunningTime="2025-10-13 05:48:02.661309253 +0000 UTC m=+9.204213116" Oct 13 05:48:03.055147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116384082.mount: Deactivated successfully. 
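Several entries in this log (the control-plane static pods and kube-proxy above, tigera-operator below) come from the kubelet's pod_startup_latency_tracker and carry a podStartSLOduration field. A throwaway helper for tabulating those values from a dump like this one, assuming the line format shown here (standard library only; reads the journal text from stdin):

```go
// Extract "Observed pod startup duration" measurements from a kubelet journal dump.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.: "Observed pod startup duration" pod="kube-system/kube-proxy-skqms" podStartSLOduration=4.661309253
var startupRe = regexp.MustCompile(`Observed pod startup duration.*?pod="([^"]+)" podStartSLOduration=([0-9.]+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := startupRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-60s %ss\n", m[1], m[2])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}
```

One way to feed it would be `journalctl -u kubelet --no-pager | go run extract_startup.go` (file name hypothetical).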
Oct 13 05:48:03.410638 containerd[1605]: time="2025-10-13T05:48:03.410539311Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:03.411395 containerd[1605]: time="2025-10-13T05:48:03.411373411Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:48:03.413565 containerd[1605]: time="2025-10-13T05:48:03.413520510Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:03.415152 containerd[1605]: time="2025-10-13T05:48:03.415131459Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:03.415852 containerd[1605]: time="2025-10-13T05:48:03.415466529Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.264963413s" Oct 13 05:48:03.415852 containerd[1605]: time="2025-10-13T05:48:03.415489679Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:48:03.417276 containerd[1605]: time="2025-10-13T05:48:03.417250188Z" level=info msg="CreateContainer within sandbox \"3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:48:03.423528 containerd[1605]: time="2025-10-13T05:48:03.423504156Z" level=info msg="Container 284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:03.445812 containerd[1605]: time="2025-10-13T05:48:03.445747776Z" level=info msg="CreateContainer within sandbox \"3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\"" Oct 13 05:48:03.446217 containerd[1605]: time="2025-10-13T05:48:03.446205456Z" level=info msg="StartContainer for \"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\"" Oct 13 05:48:03.446915 containerd[1605]: time="2025-10-13T05:48:03.446776666Z" level=info msg="connecting to shim 284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b" address="unix:///run/containerd/s/c1f960cdfa871ee5ddfc6018ed166fd6c3379dbe44b4b37e1b1ab899619e54fa" protocol=ttrpc version=3 Oct 13 05:48:03.469209 systemd[1]: Started cri-containerd-284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b.scope - libcontainer container 284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b. 
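For a rough sense of scale, the Pulled event above reports 25,058,604 bytes for the operator image over ≈4.26 s, i.e. on the order of 5.9 MB/s (≈5.6 MiB/s) from quay.io, assuming the reported interval covers the whole transfer rather than only the final layer.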
Oct 13 05:48:03.499050 containerd[1605]: time="2025-10-13T05:48:03.499005684Z" level=info msg="StartContainer for \"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\" returns successfully" Oct 13 05:48:03.618987 kubelet[2744]: I1013 05:48:03.618455 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-jp2rf" podStartSLOduration=1.352403631 podStartE2EDuration="5.618438724s" podCreationTimestamp="2025-10-13 05:47:58 +0000 UTC" firstStartedPulling="2025-10-13 05:47:59.150113636 +0000 UTC m=+5.693017469" lastFinishedPulling="2025-10-13 05:48:03.416148719 +0000 UTC m=+9.959052562" observedRunningTime="2025-10-13 05:48:03.608184589 +0000 UTC m=+10.151088422" watchObservedRunningTime="2025-10-13 05:48:03.618438724 +0000 UTC m=+10.161342567" Oct 13 05:48:08.955616 sudo[1834]: pam_unix(sudo:session): session closed for user root Oct 13 05:48:09.119096 sshd[1833]: Connection closed by 147.75.109.163 port 54204 Oct 13 05:48:09.119611 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Oct 13 05:48:09.122243 systemd[1]: sshd@6-65.108.221.100:22-147.75.109.163:54204.service: Deactivated successfully. Oct 13 05:48:09.125308 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:48:09.125596 systemd[1]: session-7.scope: Consumed 3.540s CPU time, 158.7M memory peak. Oct 13 05:48:09.127575 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:48:09.128527 systemd-logind[1580]: Removed session 7. Oct 13 05:48:09.544835 systemd[1]: Started sshd@8-65.108.221.100:22-47.76.180.25:59972.service - OpenSSH per-connection server daemon (47.76.180.25:59972). Oct 13 05:48:09.588087 sshd[3155]: Connection closed by 47.76.180.25 port 59972 Oct 13 05:48:09.589691 systemd[1]: sshd@8-65.108.221.100:22-47.76.180.25:59972.service: Deactivated successfully. Oct 13 05:48:11.601720 systemd[1]: Created slice kubepods-besteffort-pod3f14f93a_f830_4fe3_9142_3c2271a7a9ef.slice - libcontainer container kubepods-besteffort-pod3f14f93a_f830_4fe3_9142_3c2271a7a9ef.slice. 
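The kubepods-besteffort-pod….slice units systemd reports in this log encode the pod UID with its dashes swapped for underscores (a dash is systemd's slice-hierarchy separator), so the slice created just above maps back to pod UID 3f14f93a-f830-4fe3-9142-3c2271a7a9ef in the calico-typha volume entries below. A small sketch of that mapping for the BestEffort pods seen here (illustrative only):

```go
// Sketch: reconstruct the transient slice name used by the kubelet's systemd
// cgroup driver for a BestEffort pod, matching the "Created slice
// kubepods-besteffort-pod<uid>.slice" entries in this log. Other QoS classes
// sit under different parent slices.
package main

import (
	"fmt"
	"strings"
)

func besteffortSlice(podUID string) string {
	// Dashes inside a slice name mark parent/child boundaries for systemd,
	// so the dashes within the UID are replaced with underscores.
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSlice("3f14f93a-f830-4fe3-9142-3c2271a7a9ef"))
	// -> kubepods-besteffort-pod3f14f93a_f830_4fe3_9142_3c2271a7a9ef.slice
}
```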
Oct 13 05:48:11.648348 kubelet[2744]: I1013 05:48:11.648285 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqw7\" (UniqueName: \"kubernetes.io/projected/3f14f93a-f830-4fe3-9142-3c2271a7a9ef-kube-api-access-8qqw7\") pod \"calico-typha-5949c844f7-qwphd\" (UID: \"3f14f93a-f830-4fe3-9142-3c2271a7a9ef\") " pod="calico-system/calico-typha-5949c844f7-qwphd" Oct 13 05:48:11.648348 kubelet[2744]: I1013 05:48:11.648334 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f14f93a-f830-4fe3-9142-3c2271a7a9ef-tigera-ca-bundle\") pod \"calico-typha-5949c844f7-qwphd\" (UID: \"3f14f93a-f830-4fe3-9142-3c2271a7a9ef\") " pod="calico-system/calico-typha-5949c844f7-qwphd" Oct 13 05:48:11.648348 kubelet[2744]: I1013 05:48:11.648350 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3f14f93a-f830-4fe3-9142-3c2271a7a9ef-typha-certs\") pod \"calico-typha-5949c844f7-qwphd\" (UID: \"3f14f93a-f830-4fe3-9142-3c2271a7a9ef\") " pod="calico-system/calico-typha-5949c844f7-qwphd" Oct 13 05:48:11.908386 containerd[1605]: time="2025-10-13T05:48:11.908132851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5949c844f7-qwphd,Uid:3f14f93a-f830-4fe3-9142-3c2271a7a9ef,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:11.932031 systemd[1]: Created slice kubepods-besteffort-pod5cdf7586_b49c_4630_ba96_40cb37ffe869.slice - libcontainer container kubepods-besteffort-pod5cdf7586_b49c_4630_ba96_40cb37ffe869.slice. Oct 13 05:48:11.939685 containerd[1605]: time="2025-10-13T05:48:11.939612327Z" level=info msg="connecting to shim 86038b993e0202f511a437091fd7e40cb47eefc38679bb774dab931f3b32c0e0" address="unix:///run/containerd/s/ecdd4eb29e8a8d0ac3dfe0ce1921c56329cdc685818bd053d68386c0bdebfd8c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:11.951974 kubelet[2744]: I1013 05:48:11.951937 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-cni-net-dir\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.951974 kubelet[2744]: I1013 05:48:11.951969 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-policysync\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.951974 kubelet[2744]: I1013 05:48:11.951979 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-var-lib-calico\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952288 kubelet[2744]: I1013 05:48:11.951991 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-var-run-calico\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 
13 05:48:11.952288 kubelet[2744]: I1013 05:48:11.952001 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5cdf7586-b49c-4630-ba96-40cb37ffe869-node-certs\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952288 kubelet[2744]: I1013 05:48:11.952034 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cdf7586-b49c-4630-ba96-40cb37ffe869-tigera-ca-bundle\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952288 kubelet[2744]: I1013 05:48:11.952043 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-xtables-lock\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952288 kubelet[2744]: I1013 05:48:11.952060 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-lib-modules\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952413 kubelet[2744]: I1013 05:48:11.952094 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msrl\" (UniqueName: \"kubernetes.io/projected/5cdf7586-b49c-4630-ba96-40cb37ffe869-kube-api-access-9msrl\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952413 kubelet[2744]: I1013 05:48:11.952119 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-flexvol-driver-host\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952627 kubelet[2744]: I1013 05:48:11.952514 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-cni-bin-dir\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.952627 kubelet[2744]: I1013 05:48:11.952561 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5cdf7586-b49c-4630-ba96-40cb37ffe869-cni-log-dir\") pod \"calico-node-dvwd7\" (UID: \"5cdf7586-b49c-4630-ba96-40cb37ffe869\") " pod="calico-system/calico-node-dvwd7" Oct 13 05:48:11.969270 systemd[1]: Started cri-containerd-86038b993e0202f511a437091fd7e40cb47eefc38679bb774dab931f3b32c0e0.scope - libcontainer container 86038b993e0202f511a437091fd7e40cb47eefc38679bb774dab931f3b32c0e0. 
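The repeated driver-call failures that follow come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds directory is present but its uds executable is not (presumably because Calico's flexvol driver has not yet been copied into the flexvol-driver-host volume mounted for calico-node-dvwd7 above), so the init call produces no output and unmarshalling the empty string fails exactly as logged. A minimal reproduction of that error string, assuming the driver's stdout is parsed with encoding/json (a FlexVolume driver is expected to print JSON such as {"status":"Success","capabilities":{"attach":false}} for init):

```go
// Reproduce "unexpected end of JSON input" from an empty driver response.
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus mirrors the JSON a FlexVolume driver prints for "init".
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st DriverStatus
	err := json.Unmarshal([]byte(""), &st) // empty output from the missing driver binary
	fmt.Println(err)                       // unexpected end of JSON input
}
```

The probe appears to be retried on each change under the plugin directory, which is why the same three-line failure repeats throughout the entries below until the driver binary shows up.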
Oct 13 05:48:12.016600 containerd[1605]: time="2025-10-13T05:48:12.016567869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5949c844f7-qwphd,Uid:3f14f93a-f830-4fe3-9142-3c2271a7a9ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"86038b993e0202f511a437091fd7e40cb47eefc38679bb774dab931f3b32c0e0\"" Oct 13 05:48:12.019201 containerd[1605]: time="2025-10-13T05:48:12.019176830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:48:12.064646 kubelet[2744]: E1013 05:48:12.064560 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.064646 kubelet[2744]: W1013 05:48:12.064585 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.065281 kubelet[2744]: E1013 05:48:12.065244 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.206339 kubelet[2744]: E1013 05:48:12.205351 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v722c" podUID="49cbfab4-e583-4ac8-8a3f-5886ff5b0027" Oct 13 05:48:12.235433 containerd[1605]: time="2025-10-13T05:48:12.235405464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dvwd7,Uid:5cdf7586-b49c-4630-ba96-40cb37ffe869,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:12.237965 kubelet[2744]: E1013 05:48:12.237899 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.237965 kubelet[2744]: W1013 05:48:12.237916 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.237965 kubelet[2744]: E1013 05:48:12.237934 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.238315 kubelet[2744]: E1013 05:48:12.238198 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.238315 kubelet[2744]: W1013 05:48:12.238206 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.238315 kubelet[2744]: E1013 05:48:12.238214 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.238597 kubelet[2744]: E1013 05:48:12.238435 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.238597 kubelet[2744]: W1013 05:48:12.238447 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.238597 kubelet[2744]: E1013 05:48:12.238454 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.239017 kubelet[2744]: E1013 05:48:12.238984 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.239017 kubelet[2744]: W1013 05:48:12.238994 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.239017 kubelet[2744]: E1013 05:48:12.239001 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.239715 kubelet[2744]: E1013 05:48:12.239696 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.239715 kubelet[2744]: W1013 05:48:12.239708 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.239715 kubelet[2744]: E1013 05:48:12.239717 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.240303 kubelet[2744]: E1013 05:48:12.240278 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.240303 kubelet[2744]: W1013 05:48:12.240300 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.240365 kubelet[2744]: E1013 05:48:12.240309 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.240697 kubelet[2744]: E1013 05:48:12.240649 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.240697 kubelet[2744]: W1013 05:48:12.240667 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.240755 kubelet[2744]: E1013 05:48:12.240746 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.241591 kubelet[2744]: E1013 05:48:12.241557 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.241591 kubelet[2744]: W1013 05:48:12.241570 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.241591 kubelet[2744]: E1013 05:48:12.241581 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.242042 kubelet[2744]: E1013 05:48:12.242012 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.242042 kubelet[2744]: W1013 05:48:12.242026 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.242042 kubelet[2744]: E1013 05:48:12.242037 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.242308 kubelet[2744]: E1013 05:48:12.242242 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.242308 kubelet[2744]: W1013 05:48:12.242248 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.242308 kubelet[2744]: E1013 05:48:12.242255 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.242406 kubelet[2744]: E1013 05:48:12.242388 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.242406 kubelet[2744]: W1013 05:48:12.242398 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.242406 kubelet[2744]: E1013 05:48:12.242404 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.242597 kubelet[2744]: E1013 05:48:12.242576 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.242597 kubelet[2744]: W1013 05:48:12.242591 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.242597 kubelet[2744]: E1013 05:48:12.242597 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.242784 kubelet[2744]: E1013 05:48:12.242771 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.242784 kubelet[2744]: W1013 05:48:12.242780 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.242863 kubelet[2744]: E1013 05:48:12.242787 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.242990 kubelet[2744]: E1013 05:48:12.242973 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.242990 kubelet[2744]: W1013 05:48:12.242991 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.243047 kubelet[2744]: E1013 05:48:12.242998 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.243159 kubelet[2744]: E1013 05:48:12.243102 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.243159 kubelet[2744]: W1013 05:48:12.243107 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.243159 kubelet[2744]: E1013 05:48:12.243112 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.243390 kubelet[2744]: E1013 05:48:12.243362 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.243390 kubelet[2744]: W1013 05:48:12.243387 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.243430 kubelet[2744]: E1013 05:48:12.243395 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.243555 kubelet[2744]: E1013 05:48:12.243519 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.243555 kubelet[2744]: W1013 05:48:12.243548 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.243555 kubelet[2744]: E1013 05:48:12.243554 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.243698 kubelet[2744]: E1013 05:48:12.243681 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.243698 kubelet[2744]: W1013 05:48:12.243692 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.243730 kubelet[2744]: E1013 05:48:12.243699 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.243861 kubelet[2744]: E1013 05:48:12.243850 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.243861 kubelet[2744]: W1013 05:48:12.243859 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.244030 kubelet[2744]: E1013 05:48:12.243864 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.244030 kubelet[2744]: E1013 05:48:12.243960 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.244030 kubelet[2744]: W1013 05:48:12.243965 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.244030 kubelet[2744]: E1013 05:48:12.243988 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.255812 kubelet[2744]: E1013 05:48:12.255655 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.255812 kubelet[2744]: W1013 05:48:12.255678 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.255812 kubelet[2744]: E1013 05:48:12.255699 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.255812 kubelet[2744]: I1013 05:48:12.255730 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49cbfab4-e583-4ac8-8a3f-5886ff5b0027-registration-dir\") pod \"csi-node-driver-v722c\" (UID: \"49cbfab4-e583-4ac8-8a3f-5886ff5b0027\") " pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:12.256001 kubelet[2744]: E1013 05:48:12.255992 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.256106 kubelet[2744]: W1013 05:48:12.256027 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.256106 kubelet[2744]: E1013 05:48:12.256047 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.256106 kubelet[2744]: I1013 05:48:12.256065 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nd4f\" (UniqueName: \"kubernetes.io/projected/49cbfab4-e583-4ac8-8a3f-5886ff5b0027-kube-api-access-5nd4f\") pod \"csi-node-driver-v722c\" (UID: \"49cbfab4-e583-4ac8-8a3f-5886ff5b0027\") " pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:12.256271 kubelet[2744]: E1013 05:48:12.256224 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.256271 kubelet[2744]: W1013 05:48:12.256242 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.256358 kubelet[2744]: E1013 05:48:12.256283 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.256400 kubelet[2744]: E1013 05:48:12.256387 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.256400 kubelet[2744]: W1013 05:48:12.256397 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.256515 kubelet[2744]: E1013 05:48:12.256433 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.257042 kubelet[2744]: E1013 05:48:12.256953 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.257042 kubelet[2744]: W1013 05:48:12.256967 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.257042 kubelet[2744]: E1013 05:48:12.256986 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.257042 kubelet[2744]: I1013 05:48:12.256999 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49cbfab4-e583-4ac8-8a3f-5886ff5b0027-socket-dir\") pod \"csi-node-driver-v722c\" (UID: \"49cbfab4-e583-4ac8-8a3f-5886ff5b0027\") " pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:12.257158 kubelet[2744]: E1013 05:48:12.257107 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.257158 kubelet[2744]: W1013 05:48:12.257114 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.257158 kubelet[2744]: E1013 05:48:12.257129 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.257241 kubelet[2744]: E1013 05:48:12.257227 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.257241 kubelet[2744]: W1013 05:48:12.257237 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.257272 kubelet[2744]: E1013 05:48:12.257252 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.258091 kubelet[2744]: E1013 05:48:12.257374 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.258091 kubelet[2744]: W1013 05:48:12.257382 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.258091 kubelet[2744]: E1013 05:48:12.257391 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.258091 kubelet[2744]: I1013 05:48:12.257443 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/49cbfab4-e583-4ac8-8a3f-5886ff5b0027-varrun\") pod \"csi-node-driver-v722c\" (UID: \"49cbfab4-e583-4ac8-8a3f-5886ff5b0027\") " pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:12.258091 kubelet[2744]: E1013 05:48:12.257564 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.258091 kubelet[2744]: W1013 05:48:12.257584 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.258091 kubelet[2744]: E1013 05:48:12.257598 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.258091 kubelet[2744]: E1013 05:48:12.257686 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.258091 kubelet[2744]: W1013 05:48:12.257691 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.258091 kubelet[2744]: E1013 05:48:12.257830 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.258262 kubelet[2744]: W1013 05:48:12.257836 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.258262 kubelet[2744]: E1013 05:48:12.257841 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.258262 kubelet[2744]: I1013 05:48:12.257852 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49cbfab4-e583-4ac8-8a3f-5886ff5b0027-kubelet-dir\") pod \"csi-node-driver-v722c\" (UID: \"49cbfab4-e583-4ac8-8a3f-5886ff5b0027\") " pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:12.258262 kubelet[2744]: E1013 05:48:12.257913 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.258262 kubelet[2744]: W1013 05:48:12.257917 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.258262 kubelet[2744]: E1013 05:48:12.257921 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.258761 containerd[1605]: time="2025-10-13T05:48:12.258726419Z" level=info msg="connecting to shim 03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb" address="unix:///run/containerd/s/913149de4effe95e77bb57e4c062764e30dcc12a346dd96404656f0dbb43bd26" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:12.260321 kubelet[2744]: E1013 05:48:12.260299 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.260467 kubelet[2744]: E1013 05:48:12.260444 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.260467 kubelet[2744]: W1013 05:48:12.260456 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.260524 kubelet[2744]: E1013 05:48:12.260470 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.260572 kubelet[2744]: E1013 05:48:12.260554 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.260572 kubelet[2744]: W1013 05:48:12.260564 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.260572 kubelet[2744]: E1013 05:48:12.260569 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.260680 kubelet[2744]: E1013 05:48:12.260667 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.260680 kubelet[2744]: W1013 05:48:12.260676 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.260712 kubelet[2744]: E1013 05:48:12.260682 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.276218 systemd[1]: Started cri-containerd-03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb.scope - libcontainer container 03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb. Oct 13 05:48:12.311418 containerd[1605]: time="2025-10-13T05:48:12.311233061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dvwd7,Uid:5cdf7586-b49c-4630-ba96-40cb37ffe869,Namespace:calico-system,Attempt:0,} returns sandbox id \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\"" Oct 13 05:48:12.361408 kubelet[2744]: E1013 05:48:12.361374 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.361408 kubelet[2744]: W1013 05:48:12.361391 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.361408 kubelet[2744]: E1013 05:48:12.361407 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.361605 kubelet[2744]: E1013 05:48:12.361587 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.361605 kubelet[2744]: W1013 05:48:12.361598 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.361727 kubelet[2744]: E1013 05:48:12.361709 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.361858 kubelet[2744]: E1013 05:48:12.361836 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.361858 kubelet[2744]: W1013 05:48:12.361845 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.361858 kubelet[2744]: E1013 05:48:12.361855 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.361968 kubelet[2744]: E1013 05:48:12.361952 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.361968 kubelet[2744]: W1013 05:48:12.361960 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.361968 kubelet[2744]: E1013 05:48:12.361966 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.362183 kubelet[2744]: E1013 05:48:12.362144 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.362183 kubelet[2744]: W1013 05:48:12.362165 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.362308 kubelet[2744]: E1013 05:48:12.362190 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.362388 kubelet[2744]: E1013 05:48:12.362371 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.362388 kubelet[2744]: W1013 05:48:12.362380 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.362388 kubelet[2744]: E1013 05:48:12.362386 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.362535 kubelet[2744]: E1013 05:48:12.362509 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.362535 kubelet[2744]: W1013 05:48:12.362517 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.362535 kubelet[2744]: E1013 05:48:12.362533 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.362702 kubelet[2744]: E1013 05:48:12.362688 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.362702 kubelet[2744]: W1013 05:48:12.362698 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.362742 kubelet[2744]: E1013 05:48:12.362705 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.362821 kubelet[2744]: E1013 05:48:12.362809 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.362821 kubelet[2744]: W1013 05:48:12.362818 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.362905 kubelet[2744]: E1013 05:48:12.362831 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.363096 kubelet[2744]: E1013 05:48:12.363052 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.363096 kubelet[2744]: W1013 05:48:12.363064 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.363178 kubelet[2744]: E1013 05:48:12.363090 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.363293 kubelet[2744]: E1013 05:48:12.363276 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.363293 kubelet[2744]: W1013 05:48:12.363285 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.363329 kubelet[2744]: E1013 05:48:12.363298 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.363436 kubelet[2744]: E1013 05:48:12.363418 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.363436 kubelet[2744]: W1013 05:48:12.363428 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.363535 kubelet[2744]: E1013 05:48:12.363501 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.363586 kubelet[2744]: E1013 05:48:12.363570 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.363586 kubelet[2744]: W1013 05:48:12.363581 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.363619 kubelet[2744]: E1013 05:48:12.363589 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.363755 kubelet[2744]: E1013 05:48:12.363736 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.363755 kubelet[2744]: W1013 05:48:12.363748 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.363790 kubelet[2744]: E1013 05:48:12.363759 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.363914 kubelet[2744]: E1013 05:48:12.363898 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.363914 kubelet[2744]: W1013 05:48:12.363907 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.364034 kubelet[2744]: E1013 05:48:12.364006 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.364059 kubelet[2744]: E1013 05:48:12.364053 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.364059 kubelet[2744]: W1013 05:48:12.364057 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.364194 kubelet[2744]: E1013 05:48:12.364156 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.364238 kubelet[2744]: E1013 05:48:12.364222 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.364238 kubelet[2744]: W1013 05:48:12.364231 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.364282 kubelet[2744]: E1013 05:48:12.364268 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.364419 kubelet[2744]: E1013 05:48:12.364399 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.364438 kubelet[2744]: W1013 05:48:12.364411 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.364456 kubelet[2744]: E1013 05:48:12.364443 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.364725 kubelet[2744]: E1013 05:48:12.364708 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.364725 kubelet[2744]: W1013 05:48:12.364718 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.364761 kubelet[2744]: E1013 05:48:12.364734 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.365110 kubelet[2744]: E1013 05:48:12.365092 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.365110 kubelet[2744]: W1013 05:48:12.365104 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.365176 kubelet[2744]: E1013 05:48:12.365127 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.365283 kubelet[2744]: E1013 05:48:12.365256 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.365283 kubelet[2744]: W1013 05:48:12.365267 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.365283 kubelet[2744]: E1013 05:48:12.365278 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.365475 kubelet[2744]: E1013 05:48:12.365458 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.365475 kubelet[2744]: W1013 05:48:12.365469 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.365475 kubelet[2744]: E1013 05:48:12.365475 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:12.365664 kubelet[2744]: E1013 05:48:12.365566 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.365664 kubelet[2744]: W1013 05:48:12.365573 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.365664 kubelet[2744]: E1013 05:48:12.365579 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.365713 kubelet[2744]: E1013 05:48:12.365694 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.365713 kubelet[2744]: W1013 05:48:12.365699 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.365713 kubelet[2744]: E1013 05:48:12.365705 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.366045 kubelet[2744]: E1013 05:48:12.366026 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.366045 kubelet[2744]: W1013 05:48:12.366038 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.366045 kubelet[2744]: E1013 05:48:12.366044 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:12.376426 kubelet[2744]: E1013 05:48:12.376365 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:12.376426 kubelet[2744]: W1013 05:48:12.376381 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:12.376426 kubelet[2744]: E1013 05:48:12.376395 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:13.534867 kubelet[2744]: E1013 05:48:13.533873 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v722c" podUID="49cbfab4-e583-4ac8-8a3f-5886ff5b0027" Oct 13 05:48:13.810896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058767820.mount: Deactivated successfully. 
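The repeated kubelet errors above all come from one probe: the FlexVolume plugin prober executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal its stdout as a JSON driver status. The uds binary has not been installed yet, so the call produces no output, and unmarshalling an empty byte slice is what yields Go's "unexpected end of JSON input". The sketch below is illustrative only (it mimics, not reproduces, the kubelet's driver-call logic) and shows both failure modes with a plain exec-and-unmarshal:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Driver path taken from the log entries above; the binary does not exist yet.
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

        // The prober effectively runs `<driver> init` and reads stdout.
        out, err := exec.Command(driver, "init").Output()
        if err != nil {
            // Analogous to the "FlexVolume: driver call failed: executable ..." warning.
            fmt.Println("driver call failed:", err)
        }

        var status map[string]interface{}
        if err := json.Unmarshal(out, &status); err != nil {
            // With empty output this is exactly "unexpected end of JSON input".
            fmt.Println("failed to unmarshal output for command init:", err)
            return
        }
        fmt.Println("driver init status:", status)
    }

The errors are noisy but benign at this stage; they should stop once a driver binary that answers init with valid JSON exists at that path, which is what the flexvol-driver container at the end of this excerpt is for.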
Oct 13 05:48:14.193661 containerd[1605]: time="2025-10-13T05:48:14.193496891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:14.194426 containerd[1605]: time="2025-10-13T05:48:14.194401576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:48:14.195442 containerd[1605]: time="2025-10-13T05:48:14.195404819Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:14.200922 containerd[1605]: time="2025-10-13T05:48:14.200884760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:14.201263 containerd[1605]: time="2025-10-13T05:48:14.201139596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.181936406s" Oct 13 05:48:14.201263 containerd[1605]: time="2025-10-13T05:48:14.201164695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:48:14.201997 containerd[1605]: time="2025-10-13T05:48:14.201983551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:48:14.214376 containerd[1605]: time="2025-10-13T05:48:14.214327039Z" level=info msg="CreateContainer within sandbox \"86038b993e0202f511a437091fd7e40cb47eefc38679bb774dab931f3b32c0e0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:48:14.221321 containerd[1605]: time="2025-10-13T05:48:14.221241565Z" level=info msg="Container 25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:14.230874 containerd[1605]: time="2025-10-13T05:48:14.230839107Z" level=info msg="CreateContainer within sandbox \"86038b993e0202f511a437091fd7e40cb47eefc38679bb774dab931f3b32c0e0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be\"" Oct 13 05:48:14.231373 containerd[1605]: time="2025-10-13T05:48:14.231349459Z" level=info msg="StartContainer for \"25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be\"" Oct 13 05:48:14.232031 containerd[1605]: time="2025-10-13T05:48:14.232010609Z" level=info msg="connecting to shim 25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be" address="unix:///run/containerd/s/ecdd4eb29e8a8d0ac3dfe0ce1921c56329cdc685818bd053d68386c0bdebfd8c" protocol=ttrpc version=3 Oct 13 05:48:14.255341 systemd[1]: Started cri-containerd-25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be.scope - libcontainer container 25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be. 
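For orientation, the containerd entries above are the CRI-driven sequence for the calico-typha container: PullImage, CreateContainer inside the previously created pod sandbox, then StartContainer, which containerd services by connecting a ttrpc shim and letting systemd track the runtime scope. A rough equivalent of that sequence against containerd directly, using the Go client (pre-2.0 github.com/containerd/containerd module path assumed; the container and snapshot IDs here are illustrative), would look like:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The kubelet's containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // PullImage
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer (the CRI plugin additionally wires the container into its pod sandbox)
        container, err := client.NewContainer(ctx, "typha-demo",
            containerd.WithNewSnapshot("typha-demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // StartContainer: a task is created against the shim and then started.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        log.Println("started task with pid", task.Pid())
    }

This is a sketch of the client API, not the path the kubelet actually takes (it speaks CRI to containerd's CRI plugin); it is only meant to make the PullImage/CreateContainer/StartContainer entries above and below easier to read.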
Oct 13 05:48:14.297412 containerd[1605]: time="2025-10-13T05:48:14.297379686Z" level=info msg="StartContainer for \"25746f5b9b85cff71d9c37e8a2d1a709f5cc05c360c11c4edba5d42cc1f935be\" returns successfully" Oct 13 05:48:14.633571 kubelet[2744]: I1013 05:48:14.633438 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5949c844f7-qwphd" podStartSLOduration=1.449314013 podStartE2EDuration="3.633423869s" podCreationTimestamp="2025-10-13 05:48:11 +0000 UTC" firstStartedPulling="2025-10-13 05:48:12.017771067 +0000 UTC m=+18.560674910" lastFinishedPulling="2025-10-13 05:48:14.201880923 +0000 UTC m=+20.744784766" observedRunningTime="2025-10-13 05:48:14.632419695 +0000 UTC m=+21.175323538" watchObservedRunningTime="2025-10-13 05:48:14.633423869 +0000 UTC m=+21.176327722" Oct 13 05:48:14.659999 kubelet[2744]: E1013 05:48:14.659951 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.659999 kubelet[2744]: W1013 05:48:14.659982 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.659999 kubelet[2744]: E1013 05:48:14.660004 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660198 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.660933 kubelet[2744]: W1013 05:48:14.660206 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660214 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660414 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.660933 kubelet[2744]: W1013 05:48:14.660422 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660431 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660566 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.660933 kubelet[2744]: W1013 05:48:14.660572 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660577 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:14.660933 kubelet[2744]: E1013 05:48:14.660732 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.661211 kubelet[2744]: W1013 05:48:14.660740 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.661211 kubelet[2744]: E1013 05:48:14.660748 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.661211 kubelet[2744]: E1013 05:48:14.660921 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.661211 kubelet[2744]: W1013 05:48:14.660928 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.661211 kubelet[2744]: E1013 05:48:14.660934 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.661211 kubelet[2744]: E1013 05:48:14.661026 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.661211 kubelet[2744]: W1013 05:48:14.661030 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.661211 kubelet[2744]: E1013 05:48:14.661034 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.661722 kubelet[2744]: E1013 05:48:14.661678 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.661722 kubelet[2744]: W1013 05:48:14.661689 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.661722 kubelet[2744]: E1013 05:48:14.661698 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.661877 kubelet[2744]: E1013 05:48:14.661805 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.661877 kubelet[2744]: W1013 05:48:14.661809 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.661877 kubelet[2744]: E1013 05:48:14.661816 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:14.662237 kubelet[2744]: E1013 05:48:14.661957 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.662237 kubelet[2744]: W1013 05:48:14.661965 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.662237 kubelet[2744]: E1013 05:48:14.661972 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.662237 kubelet[2744]: E1013 05:48:14.662167 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.662237 kubelet[2744]: W1013 05:48:14.662175 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.662237 kubelet[2744]: E1013 05:48:14.662198 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.662340 kubelet[2744]: E1013 05:48:14.662315 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.662340 kubelet[2744]: W1013 05:48:14.662329 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.662340 kubelet[2744]: E1013 05:48:14.662336 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.662547 kubelet[2744]: E1013 05:48:14.662519 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.662547 kubelet[2744]: W1013 05:48:14.662529 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.662547 kubelet[2744]: E1013 05:48:14.662538 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.662724 kubelet[2744]: E1013 05:48:14.662647 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.662724 kubelet[2744]: W1013 05:48:14.662653 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.662724 kubelet[2744]: E1013 05:48:14.662658 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:14.662847 kubelet[2744]: E1013 05:48:14.662799 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.662847 kubelet[2744]: W1013 05:48:14.662809 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.662847 kubelet[2744]: E1013 05:48:14.662815 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.691755 kubelet[2744]: E1013 05:48:14.691725 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.691755 kubelet[2744]: W1013 05:48:14.691744 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.691755 kubelet[2744]: E1013 05:48:14.691764 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692057 kubelet[2744]: E1013 05:48:14.691894 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692057 kubelet[2744]: W1013 05:48:14.691898 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692057 kubelet[2744]: E1013 05:48:14.691909 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692253 kubelet[2744]: E1013 05:48:14.692203 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692253 kubelet[2744]: W1013 05:48:14.692215 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692253 kubelet[2744]: E1013 05:48:14.692227 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692370 kubelet[2744]: E1013 05:48:14.692344 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692370 kubelet[2744]: W1013 05:48:14.692367 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692563 kubelet[2744]: E1013 05:48:14.692377 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:14.692563 kubelet[2744]: E1013 05:48:14.692465 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692563 kubelet[2744]: W1013 05:48:14.692469 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692563 kubelet[2744]: E1013 05:48:14.692480 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692643 kubelet[2744]: E1013 05:48:14.692574 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692643 kubelet[2744]: W1013 05:48:14.692578 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692643 kubelet[2744]: E1013 05:48:14.692586 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692721 kubelet[2744]: E1013 05:48:14.692710 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692721 kubelet[2744]: W1013 05:48:14.692718 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692752 kubelet[2744]: E1013 05:48:14.692727 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692823 kubelet[2744]: E1013 05:48:14.692813 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692823 kubelet[2744]: W1013 05:48:14.692821 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692859 kubelet[2744]: E1013 05:48:14.692827 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.692965 kubelet[2744]: E1013 05:48:14.692906 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.692965 kubelet[2744]: W1013 05:48:14.692914 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.692965 kubelet[2744]: E1013 05:48:14.692919 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:14.693107 kubelet[2744]: E1013 05:48:14.693081 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.693107 kubelet[2744]: W1013 05:48:14.693098 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.693203 kubelet[2744]: E1013 05:48:14.693176 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.693309 kubelet[2744]: E1013 05:48:14.693299 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.693309 kubelet[2744]: W1013 05:48:14.693304 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.693421 kubelet[2744]: E1013 05:48:14.693379 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.693639 kubelet[2744]: E1013 05:48:14.693613 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.693639 kubelet[2744]: W1013 05:48:14.693621 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.693680 kubelet[2744]: E1013 05:48:14.693648 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.693807 kubelet[2744]: E1013 05:48:14.693795 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.693807 kubelet[2744]: W1013 05:48:14.693803 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.693842 kubelet[2744]: E1013 05:48:14.693809 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.693968 kubelet[2744]: E1013 05:48:14.693956 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.693968 kubelet[2744]: W1013 05:48:14.693964 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.694030 kubelet[2744]: E1013 05:48:14.693972 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:14.694084 kubelet[2744]: E1013 05:48:14.694036 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.694084 kubelet[2744]: W1013 05:48:14.694040 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.694084 kubelet[2744]: E1013 05:48:14.694044 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.694175 kubelet[2744]: E1013 05:48:14.694153 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.694175 kubelet[2744]: W1013 05:48:14.694157 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.694175 kubelet[2744]: E1013 05:48:14.694165 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.694396 kubelet[2744]: E1013 05:48:14.694327 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.694396 kubelet[2744]: W1013 05:48:14.694336 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.694396 kubelet[2744]: E1013 05:48:14.694349 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:14.694492 kubelet[2744]: E1013 05:48:14.694479 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:14.694492 kubelet[2744]: W1013 05:48:14.694487 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:14.694492 kubelet[2744]: E1013 05:48:14.694493 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.534116 kubelet[2744]: E1013 05:48:15.533532 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v722c" podUID="49cbfab4-e583-4ac8-8a3f-5886ff5b0027" Oct 13 05:48:15.623202 kubelet[2744]: I1013 05:48:15.623171 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:48:15.669950 kubelet[2744]: E1013 05:48:15.669710 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.669950 kubelet[2744]: W1013 05:48:15.669753 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.669950 kubelet[2744]: E1013 05:48:15.669770 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.669950 kubelet[2744]: E1013 05:48:15.669879 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.669950 kubelet[2744]: W1013 05:48:15.669884 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.669950 kubelet[2744]: E1013 05:48:15.669889 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.670552 kubelet[2744]: E1013 05:48:15.670048 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.670552 kubelet[2744]: W1013 05:48:15.670055 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.670552 kubelet[2744]: E1013 05:48:15.670062 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.670552 kubelet[2744]: E1013 05:48:15.670397 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.670552 kubelet[2744]: W1013 05:48:15.670419 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.670552 kubelet[2744]: E1013 05:48:15.670426 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.670695 kubelet[2744]: E1013 05:48:15.670682 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.670695 kubelet[2744]: W1013 05:48:15.670691 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.670756 kubelet[2744]: E1013 05:48:15.670696 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.670797 kubelet[2744]: E1013 05:48:15.670790 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.670813 kubelet[2744]: W1013 05:48:15.670796 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.670813 kubelet[2744]: E1013 05:48:15.670801 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.670938 kubelet[2744]: E1013 05:48:15.670926 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.670938 kubelet[2744]: W1013 05:48:15.670934 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.670977 kubelet[2744]: E1013 05:48:15.670939 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.671065 kubelet[2744]: E1013 05:48:15.671050 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.671065 kubelet[2744]: W1013 05:48:15.671060 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.671188 kubelet[2744]: E1013 05:48:15.671067 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.671216 kubelet[2744]: E1013 05:48:15.671208 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.671216 kubelet[2744]: W1013 05:48:15.671213 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.671255 kubelet[2744]: E1013 05:48:15.671218 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.671307 kubelet[2744]: E1013 05:48:15.671302 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.671322 kubelet[2744]: W1013 05:48:15.671306 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.671336 kubelet[2744]: E1013 05:48:15.671323 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.671452 kubelet[2744]: E1013 05:48:15.671444 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.671533 kubelet[2744]: W1013 05:48:15.671485 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.671533 kubelet[2744]: E1013 05:48:15.671504 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.671796 kubelet[2744]: E1013 05:48:15.671757 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.671796 kubelet[2744]: W1013 05:48:15.671764 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.671796 kubelet[2744]: E1013 05:48:15.671771 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.671960 kubelet[2744]: E1013 05:48:15.671914 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.671960 kubelet[2744]: W1013 05:48:15.671920 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.671960 kubelet[2744]: E1013 05:48:15.671926 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.672100 kubelet[2744]: E1013 05:48:15.672056 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.672100 kubelet[2744]: W1013 05:48:15.672062 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.672172 kubelet[2744]: E1013 05:48:15.672143 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.672333 kubelet[2744]: E1013 05:48:15.672293 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.672333 kubelet[2744]: W1013 05:48:15.672300 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.672333 kubelet[2744]: E1013 05:48:15.672306 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.699721 kubelet[2744]: E1013 05:48:15.699698 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.699721 kubelet[2744]: W1013 05:48:15.699713 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.699894 kubelet[2744]: E1013 05:48:15.699727 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.699894 kubelet[2744]: E1013 05:48:15.699862 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.699894 kubelet[2744]: W1013 05:48:15.699870 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.699894 kubelet[2744]: E1013 05:48:15.699877 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.700063 kubelet[2744]: E1013 05:48:15.700044 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.700063 kubelet[2744]: W1013 05:48:15.700054 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.700213 kubelet[2744]: E1013 05:48:15.700194 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.700213 kubelet[2744]: W1013 05:48:15.700207 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.700283 kubelet[2744]: E1013 05:48:15.700216 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.700333 kubelet[2744]: E1013 05:48:15.700316 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.700402 kubelet[2744]: E1013 05:48:15.700367 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.700402 kubelet[2744]: W1013 05:48:15.700378 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.700402 kubelet[2744]: E1013 05:48:15.700395 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.700550 kubelet[2744]: E1013 05:48:15.700533 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.700550 kubelet[2744]: W1013 05:48:15.700544 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.700633 kubelet[2744]: E1013 05:48:15.700565 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.700680 kubelet[2744]: E1013 05:48:15.700674 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.700716 kubelet[2744]: W1013 05:48:15.700681 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.700716 kubelet[2744]: E1013 05:48:15.700694 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.700870 kubelet[2744]: E1013 05:48:15.700854 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.700870 kubelet[2744]: W1013 05:48:15.700865 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.700934 kubelet[2744]: E1013 05:48:15.700876 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.701376 kubelet[2744]: E1013 05:48:15.701347 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.701376 kubelet[2744]: W1013 05:48:15.701374 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.701476 kubelet[2744]: E1013 05:48:15.701458 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.701540 kubelet[2744]: E1013 05:48:15.701530 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.701575 kubelet[2744]: W1013 05:48:15.701539 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.701646 kubelet[2744]: E1013 05:48:15.701634 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.701703 kubelet[2744]: E1013 05:48:15.701681 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.701703 kubelet[2744]: W1013 05:48:15.701701 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.701751 kubelet[2744]: E1013 05:48:15.701711 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.701857 kubelet[2744]: E1013 05:48:15.701840 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.701857 kubelet[2744]: W1013 05:48:15.701851 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.701940 kubelet[2744]: E1013 05:48:15.701886 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.702006 kubelet[2744]: E1013 05:48:15.701986 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.702006 kubelet[2744]: W1013 05:48:15.702003 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.702121 kubelet[2744]: E1013 05:48:15.702016 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.702183 kubelet[2744]: E1013 05:48:15.702171 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.702183 kubelet[2744]: W1013 05:48:15.702181 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.702239 kubelet[2744]: E1013 05:48:15.702191 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.702462 kubelet[2744]: E1013 05:48:15.702375 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.702462 kubelet[2744]: W1013 05:48:15.702384 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.702462 kubelet[2744]: E1013 05:48:15.702396 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.702583 kubelet[2744]: E1013 05:48:15.702564 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.702583 kubelet[2744]: W1013 05:48:15.702574 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.702583 kubelet[2744]: E1013 05:48:15.702582 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.702743 kubelet[2744]: E1013 05:48:15.702709 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.702743 kubelet[2744]: W1013 05:48:15.702718 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.702743 kubelet[2744]: E1013 05:48:15.702726 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:48:15.702991 kubelet[2744]: E1013 05:48:15.702975 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:48:15.702991 kubelet[2744]: W1013 05:48:15.702985 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:48:15.703042 kubelet[2744]: E1013 05:48:15.702993 2744 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:48:15.905314 containerd[1605]: time="2025-10-13T05:48:15.905230983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:15.906392 containerd[1605]: time="2025-10-13T05:48:15.906286436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:48:15.907959 containerd[1605]: time="2025-10-13T05:48:15.907940241Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:15.919499 containerd[1605]: time="2025-10-13T05:48:15.919481023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:15.919953 containerd[1605]: time="2025-10-13T05:48:15.919740599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.717692979s" Oct 13 05:48:15.919953 containerd[1605]: time="2025-10-13T05:48:15.919761459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:48:15.921473 containerd[1605]: time="2025-10-13T05:48:15.921452123Z" level=info msg="CreateContainer within sandbox \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:48:15.941403 containerd[1605]: time="2025-10-13T05:48:15.941157839Z" level=info msg="Container cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:15.950022 containerd[1605]: time="2025-10-13T05:48:15.949975903Z" level=info msg="CreateContainer within sandbox \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\"" Oct 13 05:48:15.950725 containerd[1605]: time="2025-10-13T05:48:15.950623853Z" level=info msg="StartContainer for \"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\"" Oct 13 05:48:15.951818 containerd[1605]: time="2025-10-13T05:48:15.951798134Z" level=info msg="connecting to shim cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071" address="unix:///run/containerd/s/913149de4effe95e77bb57e4c062764e30dcc12a346dd96404656f0dbb43bd26" protocol=ttrpc version=3 Oct 13 05:48:15.969179 systemd[1]: Started cri-containerd-cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071.scope - libcontainer container cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071. 
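The repeated FlexVolume errors above all describe one condition: the kubelet's plugin prober finds a driver directory named nodeagent~uds, but the uds binary it expects there cannot be executed ("executable file not found in $PATH"), so the init call returns no output at all, and decoding that empty output is what yields "unexpected end of JSON input". The sketch below reproduces only the decode failure; the DriverStatus struct is an illustrative stand-in, not the kubelet's actual type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative stand-in for the structure a FlexVolume
// caller decodes the driver's JSON reply into; it is not the kubelet's type.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// A driver binary that could not be executed produces no output at all.
	var output []byte

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: decoding driver output: unexpected end of JSON input
		fmt.Println("decoding driver output:", err)
	}
}
```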
Oct 13 05:48:15.999001 containerd[1605]: time="2025-10-13T05:48:15.998962808Z" level=info msg="StartContainer for \"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\" returns successfully" Oct 13 05:48:16.003459 systemd[1]: cri-containerd-cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071.scope: Deactivated successfully. Oct 13 05:48:16.011150 containerd[1605]: time="2025-10-13T05:48:16.011115970Z" level=info msg="received exit event container_id:\"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\" id:\"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\" pid:3469 exited_at:{seconds:1760334496 nanos:4841320}" Oct 13 05:48:16.021574 containerd[1605]: time="2025-10-13T05:48:16.021458640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\" id:\"cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071\" pid:3469 exited_at:{seconds:1760334496 nanos:4841320}" Oct 13 05:48:16.044205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbbf33cf9cdf58cd434c71ffe45139a2e1cde6f1e5921dcf6ccbc2da7bc70071-rootfs.mount: Deactivated successfully. Oct 13 05:48:16.636525 containerd[1605]: time="2025-10-13T05:48:16.636354637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:48:17.534476 kubelet[2744]: E1013 05:48:17.533730 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v722c" podUID="49cbfab4-e583-4ac8-8a3f-5886ff5b0027" Oct 13 05:48:19.533588 kubelet[2744]: E1013 05:48:19.533546 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v722c" podUID="49cbfab4-e583-4ac8-8a3f-5886ff5b0027" Oct 13 05:48:20.155802 containerd[1605]: time="2025-10-13T05:48:20.155753446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:20.157098 containerd[1605]: time="2025-10-13T05:48:20.156975062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:48:20.157777 containerd[1605]: time="2025-10-13T05:48:20.157747034Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:20.169147 containerd[1605]: time="2025-10-13T05:48:20.169128575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:20.169628 containerd[1605]: time="2025-10-13T05:48:20.169415882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.532920097s" Oct 13 05:48:20.169628 containerd[1605]: 
time="2025-10-13T05:48:20.169436682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:48:20.171455 containerd[1605]: time="2025-10-13T05:48:20.171433819Z" level=info msg="CreateContainer within sandbox \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:48:20.179151 containerd[1605]: time="2025-10-13T05:48:20.179120682Z" level=info msg="Container 19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:20.199931 containerd[1605]: time="2025-10-13T05:48:20.199889918Z" level=info msg="CreateContainer within sandbox \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\"" Oct 13 05:48:20.200589 containerd[1605]: time="2025-10-13T05:48:20.200534081Z" level=info msg="StartContainer for \"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\"" Oct 13 05:48:20.206881 containerd[1605]: time="2025-10-13T05:48:20.206848349Z" level=info msg="connecting to shim 19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb" address="unix:///run/containerd/s/913149de4effe95e77bb57e4c062764e30dcc12a346dd96404656f0dbb43bd26" protocol=ttrpc version=3 Oct 13 05:48:20.227199 systemd[1]: Started cri-containerd-19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb.scope - libcontainer container 19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb. Oct 13 05:48:20.255351 containerd[1605]: time="2025-10-13T05:48:20.255307983Z" level=info msg="StartContainer for \"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\" returns successfully" Oct 13 05:48:20.562652 systemd[1]: cri-containerd-19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb.scope: Deactivated successfully. Oct 13 05:48:20.563252 systemd[1]: cri-containerd-19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb.scope: Consumed 286ms CPU time, 168.6M memory peak, 11.1M read from disk, 171.3M written to disk. Oct 13 05:48:20.577102 containerd[1605]: time="2025-10-13T05:48:20.576790686Z" level=info msg="received exit event container_id:\"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\" id:\"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\" pid:3526 exited_at:{seconds:1760334500 nanos:564458535}" Oct 13 05:48:20.578628 containerd[1605]: time="2025-10-13T05:48:20.578611276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\" id:\"19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb\" pid:3526 exited_at:{seconds:1760334500 nanos:564458535}" Oct 13 05:48:20.593483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19bcbb00f21a8d5c12bbb569f6020f4c24848142fa60441fa107ba00967856fb-rootfs.mount: Deactivated successfully. 
Oct 13 05:48:20.645726 containerd[1605]: time="2025-10-13T05:48:20.645692439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:48:20.649933 kubelet[2744]: I1013 05:48:20.649905 2744 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:48:20.697792 systemd[1]: Created slice kubepods-burstable-pod5f0d0215_8783_49e1_90bb_729548648d44.slice - libcontainer container kubepods-burstable-pod5f0d0215_8783_49e1_90bb_729548648d44.slice. Oct 13 05:48:20.708968 kubelet[2744]: W1013 05:48:20.708927 2744 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4459-1-0-c-7af444862e" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-1-0-c-7af444862e' and this object Oct 13 05:48:20.709116 kubelet[2744]: E1013 05:48:20.708987 2744 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4459-1-0-c-7af444862e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-1-0-c-7af444862e' and this object" logger="UnhandledError" Oct 13 05:48:20.709116 kubelet[2744]: W1013 05:48:20.709051 2744 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4459-1-0-c-7af444862e" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-1-0-c-7af444862e' and this object Oct 13 05:48:20.709116 kubelet[2744]: E1013 05:48:20.709061 2744 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4459-1-0-c-7af444862e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-1-0-c-7af444862e' and this object" logger="UnhandledError" Oct 13 05:48:20.716107 systemd[1]: Created slice kubepods-burstable-podae76f9e1_2f01_4f66_93c3_1fecf29937c6.slice - libcontainer container kubepods-burstable-podae76f9e1_2f01_4f66_93c3_1fecf29937c6.slice. Oct 13 05:48:20.722114 systemd[1]: Created slice kubepods-besteffort-podde0c52e7_0c91_4560_b5b9_88c23e350fe3.slice - libcontainer container kubepods-besteffort-podde0c52e7_0c91_4560_b5b9_88c23e350fe3.slice. Oct 13 05:48:20.728694 systemd[1]: Created slice kubepods-besteffort-pod77a2a9a8_17d7_4cd5_8f84_507376b334a2.slice - libcontainer container kubepods-besteffort-pod77a2a9a8_17d7_4cd5_8f84_507376b334a2.slice. 
Oct 13 05:48:20.739028 kubelet[2744]: I1013 05:48:20.738569 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8gj\" (UniqueName: \"kubernetes.io/projected/91163fe1-984e-4665-afab-2c238d953fe5-kube-api-access-np8gj\") pod \"calico-kube-controllers-84877b6f9c-pmn7k\" (UID: \"91163fe1-984e-4665-afab-2c238d953fe5\") " pod="calico-system/calico-kube-controllers-84877b6f9c-pmn7k" Oct 13 05:48:20.739199 systemd[1]: Created slice kubepods-besteffort-pode15a9cb7_86a1_4119_ba3d_9eb41ba0200c.slice - libcontainer container kubepods-besteffort-pode15a9cb7_86a1_4119_ba3d_9eb41ba0200c.slice. Oct 13 05:48:20.741114 kubelet[2744]: I1013 05:48:20.740957 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f0d0215-8783-49e1-90bb-729548648d44-config-volume\") pod \"coredns-668d6bf9bc-4dfnd\" (UID: \"5f0d0215-8783-49e1-90bb-729548648d44\") " pod="kube-system/coredns-668d6bf9bc-4dfnd" Oct 13 05:48:20.741114 kubelet[2744]: I1013 05:48:20.740977 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktrpd\" (UniqueName: \"kubernetes.io/projected/e15a9cb7-86a1-4119-ba3d-9eb41ba0200c-kube-api-access-ktrpd\") pod \"goldmane-54d579b49d-7bmmr\" (UID: \"e15a9cb7-86a1-4119-ba3d-9eb41ba0200c\") " pod="calico-system/goldmane-54d579b49d-7bmmr" Oct 13 05:48:20.741114 kubelet[2744]: I1013 05:48:20.740993 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae76f9e1-2f01-4f66-93c3-1fecf29937c6-config-volume\") pod \"coredns-668d6bf9bc-vvxlh\" (UID: \"ae76f9e1-2f01-4f66-93c3-1fecf29937c6\") " pod="kube-system/coredns-668d6bf9bc-vvxlh" Oct 13 05:48:20.741114 kubelet[2744]: I1013 05:48:20.741004 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-backend-key-pair\") pod \"whisker-59d7c58947-87vhx\" (UID: \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\") " pod="calico-system/whisker-59d7c58947-87vhx" Oct 13 05:48:20.741114 kubelet[2744]: I1013 05:48:20.741029 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de0c52e7-0c91-4560-b5b9-88c23e350fe3-calico-apiserver-certs\") pod \"calico-apiserver-7fbdc59896-lrn4h\" (UID: \"de0c52e7-0c91-4560-b5b9-88c23e350fe3\") " pod="calico-apiserver/calico-apiserver-7fbdc59896-lrn4h" Oct 13 05:48:20.741261 kubelet[2744]: I1013 05:48:20.741045 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrl2x\" (UniqueName: \"kubernetes.io/projected/de0c52e7-0c91-4560-b5b9-88c23e350fe3-kube-api-access-nrl2x\") pod \"calico-apiserver-7fbdc59896-lrn4h\" (UID: \"de0c52e7-0c91-4560-b5b9-88c23e350fe3\") " pod="calico-apiserver/calico-apiserver-7fbdc59896-lrn4h" Oct 13 05:48:20.741261 kubelet[2744]: I1013 05:48:20.741054 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e15a9cb7-86a1-4119-ba3d-9eb41ba0200c-config\") pod \"goldmane-54d579b49d-7bmmr\" (UID: \"e15a9cb7-86a1-4119-ba3d-9eb41ba0200c\") " pod="calico-system/goldmane-54d579b49d-7bmmr" Oct 13 
05:48:20.741261 kubelet[2744]: I1013 05:48:20.741084 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91163fe1-984e-4665-afab-2c238d953fe5-tigera-ca-bundle\") pod \"calico-kube-controllers-84877b6f9c-pmn7k\" (UID: \"91163fe1-984e-4665-afab-2c238d953fe5\") " pod="calico-system/calico-kube-controllers-84877b6f9c-pmn7k" Oct 13 05:48:20.742239 kubelet[2744]: I1013 05:48:20.742222 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g67d\" (UniqueName: \"kubernetes.io/projected/5f0d0215-8783-49e1-90bb-729548648d44-kube-api-access-4g67d\") pod \"coredns-668d6bf9bc-4dfnd\" (UID: \"5f0d0215-8783-49e1-90bb-729548648d44\") " pod="kube-system/coredns-668d6bf9bc-4dfnd" Oct 13 05:48:20.742843 kubelet[2744]: I1013 05:48:20.742311 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-ca-bundle\") pod \"whisker-59d7c58947-87vhx\" (UID: \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\") " pod="calico-system/whisker-59d7c58947-87vhx" Oct 13 05:48:20.742843 kubelet[2744]: I1013 05:48:20.742324 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz42v\" (UniqueName: \"kubernetes.io/projected/77a2a9a8-17d7-4cd5-8f84-507376b334a2-kube-api-access-lz42v\") pod \"whisker-59d7c58947-87vhx\" (UID: \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\") " pod="calico-system/whisker-59d7c58947-87vhx" Oct 13 05:48:20.742843 kubelet[2744]: I1013 05:48:20.742342 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26d0e23b-94c9-49d5-bfd6-28db417af7cc-calico-apiserver-certs\") pod \"calico-apiserver-7fbdc59896-jhzdf\" (UID: \"26d0e23b-94c9-49d5-bfd6-28db417af7cc\") " pod="calico-apiserver/calico-apiserver-7fbdc59896-jhzdf" Oct 13 05:48:20.742843 kubelet[2744]: I1013 05:48:20.742377 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpmp\" (UniqueName: \"kubernetes.io/projected/26d0e23b-94c9-49d5-bfd6-28db417af7cc-kube-api-access-wvpmp\") pod \"calico-apiserver-7fbdc59896-jhzdf\" (UID: \"26d0e23b-94c9-49d5-bfd6-28db417af7cc\") " pod="calico-apiserver/calico-apiserver-7fbdc59896-jhzdf" Oct 13 05:48:20.742843 kubelet[2744]: I1013 05:48:20.742409 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e15a9cb7-86a1-4119-ba3d-9eb41ba0200c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-7bmmr\" (UID: \"e15a9cb7-86a1-4119-ba3d-9eb41ba0200c\") " pod="calico-system/goldmane-54d579b49d-7bmmr" Oct 13 05:48:20.742948 kubelet[2744]: I1013 05:48:20.742422 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e15a9cb7-86a1-4119-ba3d-9eb41ba0200c-goldmane-key-pair\") pod \"goldmane-54d579b49d-7bmmr\" (UID: \"e15a9cb7-86a1-4119-ba3d-9eb41ba0200c\") " pod="calico-system/goldmane-54d579b49d-7bmmr" Oct 13 05:48:20.742948 kubelet[2744]: I1013 05:48:20.742437 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82tm\" (UniqueName: 
\"kubernetes.io/projected/ae76f9e1-2f01-4f66-93c3-1fecf29937c6-kube-api-access-q82tm\") pod \"coredns-668d6bf9bc-vvxlh\" (UID: \"ae76f9e1-2f01-4f66-93c3-1fecf29937c6\") " pod="kube-system/coredns-668d6bf9bc-vvxlh" Oct 13 05:48:20.746836 systemd[1]: Created slice kubepods-besteffort-pod26d0e23b_94c9_49d5_bfd6_28db417af7cc.slice - libcontainer container kubepods-besteffort-pod26d0e23b_94c9_49d5_bfd6_28db417af7cc.slice. Oct 13 05:48:20.754382 systemd[1]: Created slice kubepods-besteffort-pod91163fe1_984e_4665_afab_2c238d953fe5.slice - libcontainer container kubepods-besteffort-pod91163fe1_984e_4665_afab_2c238d953fe5.slice. Oct 13 05:48:21.010496 containerd[1605]: time="2025-10-13T05:48:21.010415270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfnd,Uid:5f0d0215-8783-49e1-90bb-729548648d44,Namespace:kube-system,Attempt:0,}" Oct 13 05:48:21.019422 containerd[1605]: time="2025-10-13T05:48:21.019350086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vvxlh,Uid:ae76f9e1-2f01-4f66-93c3-1fecf29937c6,Namespace:kube-system,Attempt:0,}" Oct 13 05:48:21.025574 containerd[1605]: time="2025-10-13T05:48:21.025540780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-lrn4h,Uid:de0c52e7-0c91-4560-b5b9-88c23e350fe3,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:48:21.034844 containerd[1605]: time="2025-10-13T05:48:21.034803762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59d7c58947-87vhx,Uid:77a2a9a8-17d7-4cd5-8f84-507376b334a2,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:21.052290 containerd[1605]: time="2025-10-13T05:48:21.052117698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-jhzdf,Uid:26d0e23b-94c9-49d5-bfd6-28db417af7cc,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:48:21.066281 containerd[1605]: time="2025-10-13T05:48:21.066253148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84877b6f9c-pmn7k,Uid:91163fe1-984e-4665-afab-2c238d953fe5,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:21.223126 containerd[1605]: time="2025-10-13T05:48:21.222295884Z" level=error msg="Failed to destroy network for sandbox \"0b347e0eb2466df8251600e7bd20d34395f05b08b8aaa375df576425b02bdb6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.223976 systemd[1]: run-netns-cni\x2ddf3295d2\x2d778f\x2def38\x2d9fdd\x2d3333c8eb8e76.mount: Deactivated successfully. Oct 13 05:48:21.233028 containerd[1605]: time="2025-10-13T05:48:21.231505887Z" level=error msg="Failed to destroy network for sandbox \"0b8101c7b19fef39c647ed14509202fa6c57311f8c643c9936dbdf07f0d0e41a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.232993 systemd[1]: run-netns-cni\x2d7fcdc933\x2d0e36\x2d123c\x2ddc09\x2de6277befb124.mount: Deactivated successfully. 
Oct 13 05:48:21.241275 containerd[1605]: time="2025-10-13T05:48:21.239672350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfnd,Uid:5f0d0215-8783-49e1-90bb-729548648d44,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b347e0eb2466df8251600e7bd20d34395f05b08b8aaa375df576425b02bdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.241968 containerd[1605]: time="2025-10-13T05:48:21.241916487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vvxlh,Uid:ae76f9e1-2f01-4f66-93c3-1fecf29937c6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8101c7b19fef39c647ed14509202fa6c57311f8c643c9936dbdf07f0d0e41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.252186 kubelet[2744]: E1013 05:48:21.251851 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8101c7b19fef39c647ed14509202fa6c57311f8c643c9936dbdf07f0d0e41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.252186 kubelet[2744]: E1013 05:48:21.251923 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8101c7b19fef39c647ed14509202fa6c57311f8c643c9936dbdf07f0d0e41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vvxlh" Oct 13 05:48:21.252186 kubelet[2744]: E1013 05:48:21.251947 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b8101c7b19fef39c647ed14509202fa6c57311f8c643c9936dbdf07f0d0e41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vvxlh" Oct 13 05:48:21.252908 kubelet[2744]: E1013 05:48:21.251983 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vvxlh_kube-system(ae76f9e1-2f01-4f66-93c3-1fecf29937c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vvxlh_kube-system(ae76f9e1-2f01-4f66-93c3-1fecf29937c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b8101c7b19fef39c647ed14509202fa6c57311f8c643c9936dbdf07f0d0e41a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vvxlh" podUID="ae76f9e1-2f01-4f66-93c3-1fecf29937c6" Oct 13 05:48:21.252908 kubelet[2744]: E1013 05:48:21.252091 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"0b347e0eb2466df8251600e7bd20d34395f05b08b8aaa375df576425b02bdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.252908 kubelet[2744]: E1013 05:48:21.252120 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b347e0eb2466df8251600e7bd20d34395f05b08b8aaa375df576425b02bdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4dfnd" Oct 13 05:48:21.253572 kubelet[2744]: E1013 05:48:21.252132 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b347e0eb2466df8251600e7bd20d34395f05b08b8aaa375df576425b02bdb6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4dfnd" Oct 13 05:48:21.253572 kubelet[2744]: E1013 05:48:21.252155 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4dfnd_kube-system(5f0d0215-8783-49e1-90bb-729548648d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4dfnd_kube-system(5f0d0215-8783-49e1-90bb-729548648d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b347e0eb2466df8251600e7bd20d34395f05b08b8aaa375df576425b02bdb6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4dfnd" podUID="5f0d0215-8783-49e1-90bb-729548648d44" Oct 13 05:48:21.295183 containerd[1605]: time="2025-10-13T05:48:21.294754556Z" level=error msg="Failed to destroy network for sandbox \"d64e4568930a520a32e29563aa5a0082436cdc3d826b9c759ca76845578abb44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.298038 systemd[1]: run-netns-cni\x2df3b29b0d\x2d3e10\x2d16cd\x2d8d92\x2de9c13408d6b6.mount: Deactivated successfully. 
Oct 13 05:48:21.298167 containerd[1605]: time="2025-10-13T05:48:21.298139181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-jhzdf,Uid:26d0e23b-94c9-49d5-bfd6-28db417af7cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64e4568930a520a32e29563aa5a0082436cdc3d826b9c759ca76845578abb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.298509 kubelet[2744]: E1013 05:48:21.298317 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64e4568930a520a32e29563aa5a0082436cdc3d826b9c759ca76845578abb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.298509 kubelet[2744]: E1013 05:48:21.298387 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64e4568930a520a32e29563aa5a0082436cdc3d826b9c759ca76845578abb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fbdc59896-jhzdf" Oct 13 05:48:21.298509 kubelet[2744]: E1013 05:48:21.298402 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d64e4568930a520a32e29563aa5a0082436cdc3d826b9c759ca76845578abb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fbdc59896-jhzdf" Oct 13 05:48:21.298577 kubelet[2744]: E1013 05:48:21.298438 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fbdc59896-jhzdf_calico-apiserver(26d0e23b-94c9-49d5-bfd6-28db417af7cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fbdc59896-jhzdf_calico-apiserver(26d0e23b-94c9-49d5-bfd6-28db417af7cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d64e4568930a520a32e29563aa5a0082436cdc3d826b9c759ca76845578abb44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fbdc59896-jhzdf" podUID="26d0e23b-94c9-49d5-bfd6-28db417af7cc" Oct 13 05:48:21.303880 containerd[1605]: time="2025-10-13T05:48:21.302515594Z" level=error msg="Failed to destroy network for sandbox \"9f181c659a4d78576e3faf00df950b734d60a802a092f25d4332cc64ef973432\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.303871 systemd[1]: run-netns-cni\x2dd375fd8e\x2da2b3\x2d23cf\x2d8730\x2d547bebfa1870.mount: Deactivated successfully. 
Oct 13 05:48:21.307658 containerd[1605]: time="2025-10-13T05:48:21.307601811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-lrn4h,Uid:de0c52e7-0c91-4560-b5b9-88c23e350fe3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f181c659a4d78576e3faf00df950b734d60a802a092f25d4332cc64ef973432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.308168 kubelet[2744]: E1013 05:48:21.307943 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f181c659a4d78576e3faf00df950b734d60a802a092f25d4332cc64ef973432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.308168 kubelet[2744]: E1013 05:48:21.307986 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f181c659a4d78576e3faf00df950b734d60a802a092f25d4332cc64ef973432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fbdc59896-lrn4h" Oct 13 05:48:21.308168 kubelet[2744]: E1013 05:48:21.308010 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f181c659a4d78576e3faf00df950b734d60a802a092f25d4332cc64ef973432\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fbdc59896-lrn4h" Oct 13 05:48:21.308969 kubelet[2744]: E1013 05:48:21.308047 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fbdc59896-lrn4h_calico-apiserver(de0c52e7-0c91-4560-b5b9-88c23e350fe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fbdc59896-lrn4h_calico-apiserver(de0c52e7-0c91-4560-b5b9-88c23e350fe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f181c659a4d78576e3faf00df950b734d60a802a092f25d4332cc64ef973432\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fbdc59896-lrn4h" podUID="de0c52e7-0c91-4560-b5b9-88c23e350fe3" Oct 13 05:48:21.317259 containerd[1605]: time="2025-10-13T05:48:21.317235309Z" level=error msg="Failed to destroy network for sandbox \"83b36ee292e9d6579c633dcb4f8c9a5be320e7bbb55dca3a7faf2beb8572aff6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.318561 containerd[1605]: time="2025-10-13T05:48:21.318540664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59d7c58947-87vhx,Uid:77a2a9a8-17d7-4cd5-8f84-507376b334a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"83b36ee292e9d6579c633dcb4f8c9a5be320e7bbb55dca3a7faf2beb8572aff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.318793 kubelet[2744]: E1013 05:48:21.318763 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b36ee292e9d6579c633dcb4f8c9a5be320e7bbb55dca3a7faf2beb8572aff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.318831 kubelet[2744]: E1013 05:48:21.318807 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b36ee292e9d6579c633dcb4f8c9a5be320e7bbb55dca3a7faf2beb8572aff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59d7c58947-87vhx" Oct 13 05:48:21.318858 kubelet[2744]: E1013 05:48:21.318826 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b36ee292e9d6579c633dcb4f8c9a5be320e7bbb55dca3a7faf2beb8572aff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59d7c58947-87vhx" Oct 13 05:48:21.319084 kubelet[2744]: E1013 05:48:21.318875 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59d7c58947-87vhx_calico-system(77a2a9a8-17d7-4cd5-8f84-507376b334a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59d7c58947-87vhx_calico-system(77a2a9a8-17d7-4cd5-8f84-507376b334a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83b36ee292e9d6579c633dcb4f8c9a5be320e7bbb55dca3a7faf2beb8572aff6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59d7c58947-87vhx" podUID="77a2a9a8-17d7-4cd5-8f84-507376b334a2" Oct 13 05:48:21.321302 containerd[1605]: time="2025-10-13T05:48:21.321281145Z" level=error msg="Failed to destroy network for sandbox \"cbe0872c15559b86c4ddc7738dc51465fda384ed53a11e2fba9d69dc88347f7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.322200 containerd[1605]: time="2025-10-13T05:48:21.322176096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84877b6f9c-pmn7k,Uid:91163fe1-984e-4665-afab-2c238d953fe5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbe0872c15559b86c4ddc7738dc51465fda384ed53a11e2fba9d69dc88347f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.322633 kubelet[2744]: 
E1013 05:48:21.322308 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbe0872c15559b86c4ddc7738dc51465fda384ed53a11e2fba9d69dc88347f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.322633 kubelet[2744]: E1013 05:48:21.322357 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbe0872c15559b86c4ddc7738dc51465fda384ed53a11e2fba9d69dc88347f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84877b6f9c-pmn7k" Oct 13 05:48:21.322633 kubelet[2744]: E1013 05:48:21.322370 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbe0872c15559b86c4ddc7738dc51465fda384ed53a11e2fba9d69dc88347f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84877b6f9c-pmn7k" Oct 13 05:48:21.322986 kubelet[2744]: E1013 05:48:21.322414 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84877b6f9c-pmn7k_calico-system(91163fe1-984e-4665-afab-2c238d953fe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84877b6f9c-pmn7k_calico-system(91163fe1-984e-4665-afab-2c238d953fe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbe0872c15559b86c4ddc7738dc51465fda384ed53a11e2fba9d69dc88347f7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84877b6f9c-pmn7k" podUID="91163fe1-984e-4665-afab-2c238d953fe5" Oct 13 05:48:21.545781 systemd[1]: Created slice kubepods-besteffort-pod49cbfab4_e583_4ac8_8a3f_5886ff5b0027.slice - libcontainer container kubepods-besteffort-pod49cbfab4_e583_4ac8_8a3f_5886ff5b0027.slice. 
Oct 13 05:48:21.551168 containerd[1605]: time="2025-10-13T05:48:21.550685313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v722c,Uid:49cbfab4-e583-4ac8-8a3f-5886ff5b0027,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:21.626751 containerd[1605]: time="2025-10-13T05:48:21.626695218Z" level=error msg="Failed to destroy network for sandbox \"a3aa979773b8dde313f9ba52bde5e756b072e84463a37d4820469df6bacef2c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.632170 containerd[1605]: time="2025-10-13T05:48:21.632061210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v722c,Uid:49cbfab4-e583-4ac8-8a3f-5886ff5b0027,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aa979773b8dde313f9ba52bde5e756b072e84463a37d4820469df6bacef2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.632601 kubelet[2744]: E1013 05:48:21.632498 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aa979773b8dde313f9ba52bde5e756b072e84463a37d4820469df6bacef2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:21.632601 kubelet[2744]: E1013 05:48:21.632567 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aa979773b8dde313f9ba52bde5e756b072e84463a37d4820469df6bacef2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:21.632601 kubelet[2744]: E1013 05:48:21.632583 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aa979773b8dde313f9ba52bde5e756b072e84463a37d4820469df6bacef2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v722c" Oct 13 05:48:21.632763 kubelet[2744]: E1013 05:48:21.632743 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v722c_calico-system(49cbfab4-e583-4ac8-8a3f-5886ff5b0027)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v722c_calico-system(49cbfab4-e583-4ac8-8a3f-5886ff5b0027)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3aa979773b8dde313f9ba52bde5e756b072e84463a37d4820469df6bacef2c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v722c" podUID="49cbfab4-e583-4ac8-8a3f-5886ff5b0027" Oct 13 05:48:21.845730 kubelet[2744]: E1013 05:48:21.845592 2744 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: 
failed to sync secret cache: timed out waiting for the condition Oct 13 05:48:21.846238 kubelet[2744]: E1013 05:48:21.845737 2744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e15a9cb7-86a1-4119-ba3d-9eb41ba0200c-goldmane-key-pair podName:e15a9cb7-86a1-4119-ba3d-9eb41ba0200c nodeName:}" failed. No retries permitted until 2025-10-13 05:48:22.345698045 +0000 UTC m=+28.888601928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/e15a9cb7-86a1-4119-ba3d-9eb41ba0200c-goldmane-key-pair") pod "goldmane-54d579b49d-7bmmr" (UID: "e15a9cb7-86a1-4119-ba3d-9eb41ba0200c") : failed to sync secret cache: timed out waiting for the condition Oct 13 05:48:22.181270 systemd[1]: run-netns-cni\x2de58754bd\x2df24d\x2d9dc9\x2d7628\x2d9fd44b039f54.mount: Deactivated successfully. Oct 13 05:48:22.181427 systemd[1]: run-netns-cni\x2d22a6377b\x2d67fa\x2d0622\x2d0620\x2d8ae151c76b5d.mount: Deactivated successfully. Oct 13 05:48:22.546544 containerd[1605]: time="2025-10-13T05:48:22.546304706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7bmmr,Uid:e15a9cb7-86a1-4119-ba3d-9eb41ba0200c,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:22.618779 containerd[1605]: time="2025-10-13T05:48:22.616981261Z" level=error msg="Failed to destroy network for sandbox \"52ebcf3589976aafc7663bf64c8ea5cf6b69eb35b7a654d15dce04e94f8bebfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:22.619026 systemd[1]: run-netns-cni\x2dad7b45b5\x2d3dab\x2df2ba\x2dd6e2\x2d75fa61aa6048.mount: Deactivated successfully. Oct 13 05:48:22.622087 containerd[1605]: time="2025-10-13T05:48:22.620836712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7bmmr,Uid:e15a9cb7-86a1-4119-ba3d-9eb41ba0200c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ebcf3589976aafc7663bf64c8ea5cf6b69eb35b7a654d15dce04e94f8bebfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:22.625276 kubelet[2744]: E1013 05:48:22.625234 2744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ebcf3589976aafc7663bf64c8ea5cf6b69eb35b7a654d15dce04e94f8bebfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:48:22.631855 kubelet[2744]: E1013 05:48:22.631818 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ebcf3589976aafc7663bf64c8ea5cf6b69eb35b7a654d15dce04e94f8bebfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7bmmr" Oct 13 05:48:22.635900 kubelet[2744]: E1013 05:48:22.635861 2744 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"52ebcf3589976aafc7663bf64c8ea5cf6b69eb35b7a654d15dce04e94f8bebfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7bmmr" Oct 13 05:48:22.642124 kubelet[2744]: E1013 05:48:22.639839 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-7bmmr_calico-system(e15a9cb7-86a1-4119-ba3d-9eb41ba0200c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-7bmmr_calico-system(e15a9cb7-86a1-4119-ba3d-9eb41ba0200c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52ebcf3589976aafc7663bf64c8ea5cf6b69eb35b7a654d15dce04e94f8bebfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-7bmmr" podUID="e15a9cb7-86a1-4119-ba3d-9eb41ba0200c" Oct 13 05:48:24.482605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496397175.mount: Deactivated successfully. Oct 13 05:48:24.563708 containerd[1605]: time="2025-10-13T05:48:24.563585071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:48:24.615594 containerd[1605]: time="2025-10-13T05:48:24.615435214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 3.969709736s" Oct 13 05:48:24.615594 containerd[1605]: time="2025-10-13T05:48:24.615468394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:48:24.615594 containerd[1605]: time="2025-10-13T05:48:24.615538833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:24.622773 containerd[1605]: time="2025-10-13T05:48:24.622755850Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:24.623465 containerd[1605]: time="2025-10-13T05:48:24.623128236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:24.642578 containerd[1605]: time="2025-10-13T05:48:24.642547055Z" level=info msg="CreateContainer within sandbox \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:48:24.680671 containerd[1605]: time="2025-10-13T05:48:24.679106203Z" level=info msg="Container b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:24.680147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895604964.mount: Deactivated successfully. 
Oct 13 05:48:24.695188 containerd[1605]: time="2025-10-13T05:48:24.695146882Z" level=info msg="CreateContainer within sandbox \"03ce0f9deb1ee239e075b0a790f77ccbd8c3765bcfde7be70c8ad97a8ed56aeb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\"" Oct 13 05:48:24.695812 containerd[1605]: time="2025-10-13T05:48:24.695795296Z" level=info msg="StartContainer for \"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\"" Oct 13 05:48:24.696815 containerd[1605]: time="2025-10-13T05:48:24.696791197Z" level=info msg="connecting to shim b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f" address="unix:///run/containerd/s/913149de4effe95e77bb57e4c062764e30dcc12a346dd96404656f0dbb43bd26" protocol=ttrpc version=3 Oct 13 05:48:24.753179 systemd[1]: Started cri-containerd-b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f.scope - libcontainer container b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f. Oct 13 05:48:24.822097 containerd[1605]: time="2025-10-13T05:48:24.821499378Z" level=info msg="StartContainer for \"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" returns successfully" Oct 13 05:48:24.898870 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:48:24.899812 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 05:48:25.176186 kubelet[2744]: I1013 05:48:25.175738 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-backend-key-pair\") pod \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\" (UID: \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\") " Oct 13 05:48:25.176911 kubelet[2744]: I1013 05:48:25.176537 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-ca-bundle\") pod \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\" (UID: \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\") " Oct 13 05:48:25.176911 kubelet[2744]: I1013 05:48:25.176685 2744 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz42v\" (UniqueName: \"kubernetes.io/projected/77a2a9a8-17d7-4cd5-8f84-507376b334a2-kube-api-access-lz42v\") pod \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\" (UID: \"77a2a9a8-17d7-4cd5-8f84-507376b334a2\") " Oct 13 05:48:25.180597 kubelet[2744]: I1013 05:48:25.180556 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "77a2a9a8-17d7-4cd5-8f84-507376b334a2" (UID: "77a2a9a8-17d7-4cd5-8f84-507376b334a2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:48:25.182587 kubelet[2744]: I1013 05:48:25.182559 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "77a2a9a8-17d7-4cd5-8f84-507376b334a2" (UID: "77a2a9a8-17d7-4cd5-8f84-507376b334a2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:48:25.182937 kubelet[2744]: I1013 05:48:25.182921 2744 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a2a9a8-17d7-4cd5-8f84-507376b334a2-kube-api-access-lz42v" (OuterVolumeSpecName: "kube-api-access-lz42v") pod "77a2a9a8-17d7-4cd5-8f84-507376b334a2" (UID: "77a2a9a8-17d7-4cd5-8f84-507376b334a2"). InnerVolumeSpecName "kube-api-access-lz42v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:48:25.277749 kubelet[2744]: I1013 05:48:25.277695 2744 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lz42v\" (UniqueName: \"kubernetes.io/projected/77a2a9a8-17d7-4cd5-8f84-507376b334a2-kube-api-access-lz42v\") on node \"ci-4459-1-0-c-7af444862e\" DevicePath \"\"" Oct 13 05:48:25.277749 kubelet[2744]: I1013 05:48:25.277731 2744 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-backend-key-pair\") on node \"ci-4459-1-0-c-7af444862e\" DevicePath \"\"" Oct 13 05:48:25.277749 kubelet[2744]: I1013 05:48:25.277740 2744 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77a2a9a8-17d7-4cd5-8f84-507376b334a2-whisker-ca-bundle\") on node \"ci-4459-1-0-c-7af444862e\" DevicePath \"\"" Oct 13 05:48:25.483804 systemd[1]: var-lib-kubelet-pods-77a2a9a8\x2d17d7\x2d4cd5\x2d8f84\x2d507376b334a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlz42v.mount: Deactivated successfully. Oct 13 05:48:25.483883 systemd[1]: var-lib-kubelet-pods-77a2a9a8\x2d17d7\x2d4cd5\x2d8f84\x2d507376b334a2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:48:25.540100 systemd[1]: Removed slice kubepods-besteffort-pod77a2a9a8_17d7_4cd5_8f84_507376b334a2.slice - libcontainer container kubepods-besteffort-pod77a2a9a8_17d7_4cd5_8f84_507376b334a2.slice. Oct 13 05:48:25.729189 kubelet[2744]: I1013 05:48:25.728824 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dvwd7" podStartSLOduration=2.415173344 podStartE2EDuration="14.728808998s" podCreationTimestamp="2025-10-13 05:48:11 +0000 UTC" firstStartedPulling="2025-10-13 05:48:12.313771024 +0000 UTC m=+18.856674867" lastFinishedPulling="2025-10-13 05:48:24.627406648 +0000 UTC m=+31.170310521" observedRunningTime="2025-10-13 05:48:25.727583789 +0000 UTC m=+32.270487622" watchObservedRunningTime="2025-10-13 05:48:25.728808998 +0000 UTC m=+32.271712841" Oct 13 05:48:25.757281 systemd[1]: Created slice kubepods-besteffort-poda46e5a9a_a067_4058_abee_c3ec5ba0da3f.slice - libcontainer container kubepods-besteffort-poda46e5a9a_a067_4058_abee_c3ec5ba0da3f.slice. 
Oct 13 05:48:25.881538 kubelet[2744]: I1013 05:48:25.881490 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a46e5a9a-a067-4058-abee-c3ec5ba0da3f-whisker-backend-key-pair\") pod \"whisker-6889ccf474-54lrl\" (UID: \"a46e5a9a-a067-4058-abee-c3ec5ba0da3f\") " pod="calico-system/whisker-6889ccf474-54lrl" Oct 13 05:48:25.881538 kubelet[2744]: I1013 05:48:25.881528 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdkm\" (UniqueName: \"kubernetes.io/projected/a46e5a9a-a067-4058-abee-c3ec5ba0da3f-kube-api-access-tzdkm\") pod \"whisker-6889ccf474-54lrl\" (UID: \"a46e5a9a-a067-4058-abee-c3ec5ba0da3f\") " pod="calico-system/whisker-6889ccf474-54lrl" Oct 13 05:48:25.881538 kubelet[2744]: I1013 05:48:25.881541 2744 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a46e5a9a-a067-4058-abee-c3ec5ba0da3f-whisker-ca-bundle\") pod \"whisker-6889ccf474-54lrl\" (UID: \"a46e5a9a-a067-4058-abee-c3ec5ba0da3f\") " pod="calico-system/whisker-6889ccf474-54lrl" Oct 13 05:48:26.060764 containerd[1605]: time="2025-10-13T05:48:26.060633529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6889ccf474-54lrl,Uid:a46e5a9a-a067-4058-abee-c3ec5ba0da3f,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:26.313788 systemd-networkd[1480]: cali6e714f5ca04: Link UP Oct 13 05:48:26.315060 systemd-networkd[1480]: cali6e714f5ca04: Gained carrier Oct 13 05:48:26.328711 containerd[1605]: 2025-10-13 05:48:26.085 [INFO][3856] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:26.328711 containerd[1605]: 2025-10-13 05:48:26.113 [INFO][3856] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0 whisker-6889ccf474- calico-system a46e5a9a-a067-4058-abee-c3ec5ba0da3f 849 0 2025-10-13 05:48:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6889ccf474 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e whisker-6889ccf474-54lrl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6e714f5ca04 [] [] }} ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-" Oct 13 05:48:26.328711 containerd[1605]: 2025-10-13 05:48:26.113 [INFO][3856] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.328711 containerd[1605]: 2025-10-13 05:48:26.243 [INFO][3868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" HandleID="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Workload="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.245 [INFO][3868] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" HandleID="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Workload="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003181a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-7af444862e", "pod":"whisker-6889ccf474-54lrl", "timestamp":"2025-10-13 05:48:26.241473191 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.245 [INFO][3868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.245 [INFO][3868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.246 [INFO][3868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.263 [INFO][3868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.273 [INFO][3868] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.278 [INFO][3868] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.279 [INFO][3868] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.328911 containerd[1605]: 2025-10-13 05:48:26.282 [INFO][3868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 05:48:26.282 [INFO][3868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 05:48:26.283 [INFO][3868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33 Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 05:48:26.292 [INFO][3868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 05:48:26.300 [INFO][3868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.1/26] block=192.168.76.0/26 handle="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 05:48:26.300 [INFO][3868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.1/26] handle="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 
05:48:26.300 [INFO][3868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:48:26.330127 containerd[1605]: 2025-10-13 05:48:26.300 [INFO][3868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.1/26] IPv6=[] ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" HandleID="k8s-pod-network.cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Workload="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.331444 containerd[1605]: 2025-10-13 05:48:26.303 [INFO][3856] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0", GenerateName:"whisker-6889ccf474-", Namespace:"calico-system", SelfLink:"", UID:"a46e5a9a-a067-4058-abee-c3ec5ba0da3f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6889ccf474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"whisker-6889ccf474-54lrl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6e714f5ca04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:26.331444 containerd[1605]: 2025-10-13 05:48:26.303 [INFO][3856] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.1/32] ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.332535 containerd[1605]: 2025-10-13 05:48:26.303 [INFO][3856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e714f5ca04 ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.332535 containerd[1605]: 2025-10-13 05:48:26.315 [INFO][3856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.332912 containerd[1605]: 2025-10-13 05:48:26.316 [INFO][3856] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0", GenerateName:"whisker-6889ccf474-", Namespace:"calico-system", SelfLink:"", UID:"a46e5a9a-a067-4058-abee-c3ec5ba0da3f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6889ccf474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33", Pod:"whisker-6889ccf474-54lrl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6e714f5ca04", MAC:"de:3b:f4:61:32:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:26.333066 containerd[1605]: 2025-10-13 05:48:26.323 [INFO][3856] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" Namespace="calico-system" Pod="whisker-6889ccf474-54lrl" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-whisker--6889ccf474--54lrl-eth0" Oct 13 05:48:26.434897 containerd[1605]: time="2025-10-13T05:48:26.434841714Z" level=info msg="connecting to shim cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33" address="unix:///run/containerd/s/4795c57870fef608afbb0b35a5d3b6da2c0061164fc8101da322e44159dbbeae" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:26.465456 systemd[1]: Started cri-containerd-cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33.scope - libcontainer container cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33. 
Oct 13 05:48:26.555192 containerd[1605]: time="2025-10-13T05:48:26.555152996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6889ccf474-54lrl,Uid:a46e5a9a-a067-4058-abee-c3ec5ba0da3f,Namespace:calico-system,Attempt:0,} returns sandbox id \"cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33\"" Oct 13 05:48:26.561123 containerd[1605]: time="2025-10-13T05:48:26.561102010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:48:26.832407 containerd[1605]: time="2025-10-13T05:48:26.832336377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"377e5a346d05f0faca2d6ffa9728278cea48440d5d85e914f04435bfb33fd972\" pid:4030 exit_status:1 exited_at:{seconds:1760334506 nanos:816853147}" Oct 13 05:48:27.540117 kubelet[2744]: I1013 05:48:27.539509 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a2a9a8-17d7-4cd5-8f84-507376b334a2" path="/var/lib/kubelet/pods/77a2a9a8-17d7-4cd5-8f84-507376b334a2/volumes" Oct 13 05:48:27.725318 containerd[1605]: time="2025-10-13T05:48:27.725063434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"f009aa662bb8a84ed495d8100ec3cc17d1ffdf2cccaecb6adf43f72500376892\" pid:4076 exit_status:1 exited_at:{seconds:1760334507 nanos:724805946}" Oct 13 05:48:28.204978 systemd-networkd[1480]: cali6e714f5ca04: Gained IPv6LL Oct 13 05:48:28.246609 containerd[1605]: time="2025-10-13T05:48:28.246538627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:28.247274 containerd[1605]: time="2025-10-13T05:48:28.247257722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:48:28.248516 containerd[1605]: time="2025-10-13T05:48:28.248485124Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:28.250377 containerd[1605]: time="2025-10-13T05:48:28.250346331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:28.250880 containerd[1605]: time="2025-10-13T05:48:28.250651909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.689442619s" Oct 13 05:48:28.250880 containerd[1605]: time="2025-10-13T05:48:28.250682939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:48:28.253591 containerd[1605]: time="2025-10-13T05:48:28.253546068Z" level=info msg="CreateContainer within sandbox \"cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:48:28.261994 containerd[1605]: time="2025-10-13T05:48:28.260256123Z" level=info msg="Container 
1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:28.264581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355561794.mount: Deactivated successfully. Oct 13 05:48:28.270740 containerd[1605]: time="2025-10-13T05:48:28.270699930Z" level=info msg="CreateContainer within sandbox \"cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3\"" Oct 13 05:48:28.271175 containerd[1605]: time="2025-10-13T05:48:28.271158737Z" level=info msg="StartContainer for \"1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3\"" Oct 13 05:48:28.271961 containerd[1605]: time="2025-10-13T05:48:28.271896382Z" level=info msg="connecting to shim 1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3" address="unix:///run/containerd/s/4795c57870fef608afbb0b35a5d3b6da2c0061164fc8101da322e44159dbbeae" protocol=ttrpc version=3 Oct 13 05:48:28.288203 systemd[1]: Started cri-containerd-1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3.scope - libcontainer container 1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3. Oct 13 05:48:28.327809 containerd[1605]: time="2025-10-13T05:48:28.327772427Z" level=info msg="StartContainer for \"1209dfc302bc29872868db6c030314492ee00d032c32c0cbf9a20c76f3afa9d3\" returns successfully" Oct 13 05:48:28.329489 containerd[1605]: time="2025-10-13T05:48:28.329471875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:48:30.414761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290316568.mount: Deactivated successfully. Oct 13 05:48:30.428818 containerd[1605]: time="2025-10-13T05:48:30.428774612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:30.430046 containerd[1605]: time="2025-10-13T05:48:30.430022795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:48:30.431667 containerd[1605]: time="2025-10-13T05:48:30.431632015Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:30.433228 containerd[1605]: time="2025-10-13T05:48:30.433150616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:30.433754 containerd[1605]: time="2025-10-13T05:48:30.433647323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.104040399s" Oct 13 05:48:30.433754 containerd[1605]: time="2025-10-13T05:48:30.433676593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:48:30.435495 containerd[1605]: 
time="2025-10-13T05:48:30.435470351Z" level=info msg="CreateContainer within sandbox \"cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:48:30.445102 containerd[1605]: time="2025-10-13T05:48:30.441809623Z" level=info msg="Container bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:30.457441 containerd[1605]: time="2025-10-13T05:48:30.457392187Z" level=info msg="CreateContainer within sandbox \"cac6a4547ae1a8cd1f7983ca1ea1213d15949fdc4a4e5e1658c08904aa40be33\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291\"" Oct 13 05:48:30.459238 containerd[1605]: time="2025-10-13T05:48:30.459215346Z" level=info msg="StartContainer for \"bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291\"" Oct 13 05:48:30.460940 containerd[1605]: time="2025-10-13T05:48:30.460544068Z" level=info msg="connecting to shim bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291" address="unix:///run/containerd/s/4795c57870fef608afbb0b35a5d3b6da2c0061164fc8101da322e44159dbbeae" protocol=ttrpc version=3 Oct 13 05:48:30.482223 systemd[1]: Started cri-containerd-bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291.scope - libcontainer container bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291. Oct 13 05:48:30.518615 containerd[1605]: time="2025-10-13T05:48:30.518585664Z" level=info msg="StartContainer for \"bb91c64b602c34c00c47353a59fee80d563714796a34e2da728062a650eed291\" returns successfully" Oct 13 05:48:30.705599 kubelet[2744]: I1013 05:48:30.704920 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6889ccf474-54lrl" podStartSLOduration=1.831422248 podStartE2EDuration="5.704906174s" podCreationTimestamp="2025-10-13 05:48:25 +0000 UTC" firstStartedPulling="2025-10-13 05:48:26.560769123 +0000 UTC m=+33.103672966" lastFinishedPulling="2025-10-13 05:48:30.434253049 +0000 UTC m=+36.977156892" observedRunningTime="2025-10-13 05:48:30.704256818 +0000 UTC m=+37.247160661" watchObservedRunningTime="2025-10-13 05:48:30.704906174 +0000 UTC m=+37.247810007" Oct 13 05:48:32.534936 containerd[1605]: time="2025-10-13T05:48:32.534680301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-jhzdf,Uid:26d0e23b-94c9-49d5-bfd6-28db417af7cc,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:48:32.534936 containerd[1605]: time="2025-10-13T05:48:32.534718861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v722c,Uid:49cbfab4-e583-4ac8-8a3f-5886ff5b0027,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:32.536229 containerd[1605]: time="2025-10-13T05:48:32.536194553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-lrn4h,Uid:de0c52e7-0c91-4560-b5b9-88c23e350fe3,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:48:32.649649 systemd-networkd[1480]: cali28af604fe26: Link UP Oct 13 05:48:32.650065 systemd-networkd[1480]: cali28af604fe26: Gained carrier Oct 13 05:48:32.671134 containerd[1605]: 2025-10-13 05:48:32.568 [INFO][4253] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:32.671134 containerd[1605]: 2025-10-13 05:48:32.579 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0 calico-apiserver-7fbdc59896- calico-apiserver 26d0e23b-94c9-49d5-bfd6-28db417af7cc 781 0 2025-10-13 05:48:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fbdc59896 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e calico-apiserver-7fbdc59896-jhzdf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali28af604fe26 [] [] }} ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-" Oct 13 05:48:32.671134 containerd[1605]: 2025-10-13 05:48:32.579 [INFO][4253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.671134 containerd[1605]: 2025-10-13 05:48:32.612 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" HandleID="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.612 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" HandleID="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-c-7af444862e", "pod":"calico-apiserver-7fbdc59896-jhzdf", "timestamp":"2025-10-13 05:48:32.612428699 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.613 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.613 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.613 [INFO][4289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.620 [INFO][4289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.624 [INFO][4289] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.628 [INFO][4289] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.629 [INFO][4289] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671322 containerd[1605]: 2025-10-13 05:48:32.632 [INFO][4289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.632 [INFO][4289] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.633 [INFO][4289] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2 Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.637 [INFO][4289] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.642 [INFO][4289] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.2/26] block=192.168.76.0/26 handle="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.642 [INFO][4289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.2/26] handle="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.642 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:48:32.671448 containerd[1605]: 2025-10-13 05:48:32.642 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.2/26] IPv6=[] ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" HandleID="k8s-pod-network.3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.671813 containerd[1605]: 2025-10-13 05:48:32.644 [INFO][4253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0", GenerateName:"calico-apiserver-7fbdc59896-", Namespace:"calico-apiserver", SelfLink:"", UID:"26d0e23b-94c9-49d5-bfd6-28db417af7cc", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbdc59896", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"calico-apiserver-7fbdc59896-jhzdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28af604fe26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:32.671856 containerd[1605]: 2025-10-13 05:48:32.644 [INFO][4253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.2/32] ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.671856 containerd[1605]: 2025-10-13 05:48:32.644 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28af604fe26 ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.671856 containerd[1605]: 2025-10-13 05:48:32.650 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.671897 containerd[1605]: 2025-10-13 05:48:32.651 
[INFO][4253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0", GenerateName:"calico-apiserver-7fbdc59896-", Namespace:"calico-apiserver", SelfLink:"", UID:"26d0e23b-94c9-49d5-bfd6-28db417af7cc", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbdc59896", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2", Pod:"calico-apiserver-7fbdc59896-jhzdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28af604fe26", MAC:"ea:9e:1c:ed:74:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:32.671942 containerd[1605]: 2025-10-13 05:48:32.666 [INFO][4253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-jhzdf" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--jhzdf-eth0" Oct 13 05:48:32.700240 containerd[1605]: time="2025-10-13T05:48:32.700181264Z" level=info msg="connecting to shim 3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2" address="unix:///run/containerd/s/fe2d270e693d5a9f836369948b56d3e8f707f60ff44322cb25fe7733c6ceb70a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:32.719180 systemd[1]: Started cri-containerd-3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2.scope - libcontainer container 3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2. 
Oct 13 05:48:32.754758 systemd-networkd[1480]: calib5a6bc33c18: Link UP Oct 13 05:48:32.755746 systemd-networkd[1480]: calib5a6bc33c18: Gained carrier Oct 13 05:48:32.765620 containerd[1605]: 2025-10-13 05:48:32.599 [INFO][4278] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:32.765620 containerd[1605]: 2025-10-13 05:48:32.610 [INFO][4278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0 calico-apiserver-7fbdc59896- calico-apiserver de0c52e7-0c91-4560-b5b9-88c23e350fe3 787 0 2025-10-13 05:48:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fbdc59896 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e calico-apiserver-7fbdc59896-lrn4h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5a6bc33c18 [] [] }} ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-" Oct 13 05:48:32.765620 containerd[1605]: 2025-10-13 05:48:32.610 [INFO][4278] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.765620 containerd[1605]: 2025-10-13 05:48:32.646 [INFO][4299] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" HandleID="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.650 [INFO][4299] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" HandleID="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-c-7af444862e", "pod":"calico-apiserver-7fbdc59896-lrn4h", "timestamp":"2025-10-13 05:48:32.643310631 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.651 [INFO][4299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.651 [INFO][4299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.651 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.721 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.726 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.732 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.734 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766590 containerd[1605]: 2025-10-13 05:48:32.736 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.736 [INFO][4299] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.737 [INFO][4299] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.741 [INFO][4299] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.745 [INFO][4299] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.3/26] block=192.168.76.0/26 handle="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.745 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.3/26] handle="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.745 [INFO][4299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:48:32.766720 containerd[1605]: 2025-10-13 05:48:32.745 [INFO][4299] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.3/26] IPv6=[] ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" HandleID="k8s-pod-network.aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.766816 containerd[1605]: 2025-10-13 05:48:32.750 [INFO][4278] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0", GenerateName:"calico-apiserver-7fbdc59896-", Namespace:"calico-apiserver", SelfLink:"", UID:"de0c52e7-0c91-4560-b5b9-88c23e350fe3", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbdc59896", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"calico-apiserver-7fbdc59896-lrn4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5a6bc33c18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:32.766852 containerd[1605]: 2025-10-13 05:48:32.751 [INFO][4278] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.3/32] ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.766852 containerd[1605]: 2025-10-13 05:48:32.751 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5a6bc33c18 ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.766852 containerd[1605]: 2025-10-13 05:48:32.755 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.766891 containerd[1605]: 2025-10-13 05:48:32.755 
[INFO][4278] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0", GenerateName:"calico-apiserver-7fbdc59896-", Namespace:"calico-apiserver", SelfLink:"", UID:"de0c52e7-0c91-4560-b5b9-88c23e350fe3", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fbdc59896", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e", Pod:"calico-apiserver-7fbdc59896-lrn4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5a6bc33c18", MAC:"82:a9:5a:fd:62:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:32.766934 containerd[1605]: 2025-10-13 05:48:32.764 [INFO][4278] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" Namespace="calico-apiserver" Pod="calico-apiserver-7fbdc59896-lrn4h" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--apiserver--7fbdc59896--lrn4h-eth0" Oct 13 05:48:32.777582 containerd[1605]: time="2025-10-13T05:48:32.777520553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-jhzdf,Uid:26d0e23b-94c9-49d5-bfd6-28db417af7cc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2\"" Oct 13 05:48:32.778887 containerd[1605]: time="2025-10-13T05:48:32.778829567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:48:32.788547 containerd[1605]: time="2025-10-13T05:48:32.788468574Z" level=info msg="connecting to shim aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e" address="unix:///run/containerd/s/a7f7177e226ac79998a277c997837afe939ba1d0b46537849a9d3b18110df478" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:32.814230 systemd[1]: Started cri-containerd-aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e.scope - libcontainer container aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e. 
Oct 13 05:48:32.861546 containerd[1605]: time="2025-10-13T05:48:32.861498248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fbdc59896-lrn4h,Uid:de0c52e7-0c91-4560-b5b9-88c23e350fe3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e\"" Oct 13 05:48:32.867576 systemd-networkd[1480]: cali2d30c10f47c: Link UP Oct 13 05:48:32.869342 systemd-networkd[1480]: cali2d30c10f47c: Gained carrier Oct 13 05:48:32.882738 containerd[1605]: 2025-10-13 05:48:32.597 [INFO][4260] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:32.882738 containerd[1605]: 2025-10-13 05:48:32.612 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0 csi-node-driver- calico-system 49cbfab4-e583-4ac8-8a3f-5886ff5b0027 688 0 2025-10-13 05:48:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e csi-node-driver-v722c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2d30c10f47c [] [] }} ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-" Oct 13 05:48:32.882738 containerd[1605]: 2025-10-13 05:48:32.613 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.882738 containerd[1605]: 2025-10-13 05:48:32.651 [INFO][4304] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" HandleID="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Workload="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.651 [INFO][4304] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" HandleID="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Workload="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-7af444862e", "pod":"csi-node-driver-v722c", "timestamp":"2025-10-13 05:48:32.643988738 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.653 [INFO][4304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.747 [INFO][4304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.747 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.822 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.829 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.837 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.839 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.882978 containerd[1605]: 2025-10-13 05:48:32.842 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.842 [INFO][4304] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.844 [INFO][4304] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106 Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.848 [INFO][4304] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.858 [INFO][4304] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.4/26] block=192.168.76.0/26 handle="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.859 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.4/26] handle="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.859 [INFO][4304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:48:32.883470 containerd[1605]: 2025-10-13 05:48:32.859 [INFO][4304] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.4/26] IPv6=[] ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" HandleID="k8s-pod-network.d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Workload="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.883563 containerd[1605]: 2025-10-13 05:48:32.862 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49cbfab4-e583-4ac8-8a3f-5886ff5b0027", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"csi-node-driver-v722c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d30c10f47c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:32.883604 containerd[1605]: 2025-10-13 05:48:32.862 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.4/32] ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.883604 containerd[1605]: 2025-10-13 05:48:32.862 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d30c10f47c ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.883604 containerd[1605]: 2025-10-13 05:48:32.869 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.883645 containerd[1605]: 2025-10-13 05:48:32.869 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49cbfab4-e583-4ac8-8a3f-5886ff5b0027", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106", Pod:"csi-node-driver-v722c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d30c10f47c", MAC:"3a:b6:53:75:a1:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:32.883679 containerd[1605]: 2025-10-13 05:48:32.880 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" Namespace="calico-system" Pod="csi-node-driver-v722c" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-csi--node--driver--v722c-eth0" Oct 13 05:48:32.896691 containerd[1605]: time="2025-10-13T05:48:32.896634598Z" level=info msg="connecting to shim d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106" address="unix:///run/containerd/s/90bb79d9d97498e6f07bc7842ce8b4e13ee04b67605146332a5b689652d97f97" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:32.915227 systemd[1]: Started cri-containerd-d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106.scope - libcontainer container d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106. 
Oct 13 05:48:32.934505 containerd[1605]: time="2025-10-13T05:48:32.934419133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v722c,Uid:49cbfab4-e583-4ac8-8a3f-5886ff5b0027,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106\"" Oct 13 05:48:33.534797 containerd[1605]: time="2025-10-13T05:48:33.534741803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfnd,Uid:5f0d0215-8783-49e1-90bb-729548648d44,Namespace:kube-system,Attempt:0,}" Oct 13 05:48:33.644751 systemd-networkd[1480]: cali3575f2d242b: Link UP Oct 13 05:48:33.644944 systemd-networkd[1480]: cali3575f2d242b: Gained carrier Oct 13 05:48:33.654183 containerd[1605]: 2025-10-13 05:48:33.570 [INFO][4493] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:33.654183 containerd[1605]: 2025-10-13 05:48:33.581 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0 coredns-668d6bf9bc- kube-system 5f0d0215-8783-49e1-90bb-729548648d44 777 0 2025-10-13 05:47:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e coredns-668d6bf9bc-4dfnd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3575f2d242b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-" Oct 13 05:48:33.654183 containerd[1605]: 2025-10-13 05:48:33.581 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.654183 containerd[1605]: 2025-10-13 05:48:33.616 [INFO][4505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" HandleID="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Workload="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.617 [INFO][4505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" HandleID="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Workload="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003904d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-c-7af444862e", "pod":"coredns-668d6bf9bc-4dfnd", "timestamp":"2025-10-13 05:48:33.616980143 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.617 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.617 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.617 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.622 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.625 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.628 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.630 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.654504 containerd[1605]: 2025-10-13 05:48:33.631 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.631 [INFO][4505] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.633 [INFO][4505] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0 Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.637 [INFO][4505] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.640 [INFO][4505] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.5/26] block=192.168.76.0/26 handle="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.640 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.5/26] handle="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.640 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:48:33.655412 containerd[1605]: 2025-10-13 05:48:33.640 [INFO][4505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.5/26] IPv6=[] ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" HandleID="k8s-pod-network.cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Workload="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.655560 containerd[1605]: 2025-10-13 05:48:33.643 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5f0d0215-8783-49e1-90bb-729548648d44", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"coredns-668d6bf9bc-4dfnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3575f2d242b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:33.655560 containerd[1605]: 2025-10-13 05:48:33.643 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.5/32] ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.655560 containerd[1605]: 2025-10-13 05:48:33.643 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3575f2d242b ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.655560 containerd[1605]: 2025-10-13 05:48:33.644 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.655560 containerd[1605]: 2025-10-13 05:48:33.644 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5f0d0215-8783-49e1-90bb-729548648d44", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0", Pod:"coredns-668d6bf9bc-4dfnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3575f2d242b", MAC:"de:3e:86:84:a6:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:33.655560 containerd[1605]: 2025-10-13 05:48:33.651 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4dfnd" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--4dfnd-eth0" Oct 13 05:48:33.672895 containerd[1605]: time="2025-10-13T05:48:33.672844897Z" level=info msg="connecting to shim cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0" address="unix:///run/containerd/s/ce42279a9bf4cfad38929b2d723fa1b59eff3085011145794cd7b113345a6db7" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:33.696169 systemd[1]: Started cri-containerd-cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0.scope - libcontainer container cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0. 
Oct 13 05:48:33.735992 containerd[1605]: time="2025-10-13T05:48:33.735813135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dfnd,Uid:5f0d0215-8783-49e1-90bb-729548648d44,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0\"" Oct 13 05:48:33.740428 containerd[1605]: time="2025-10-13T05:48:33.740398062Z" level=info msg="CreateContainer within sandbox \"cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:48:33.767737 containerd[1605]: time="2025-10-13T05:48:33.767013546Z" level=info msg="Container 8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:33.770446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939963627.mount: Deactivated successfully. Oct 13 05:48:33.773013 containerd[1605]: time="2025-10-13T05:48:33.772987456Z" level=info msg="CreateContainer within sandbox \"cc270b1f56b5183ca6626cab657e42ae6eeb2d241f65e1930af467a4a13db9a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155\"" Oct 13 05:48:33.774588 containerd[1605]: time="2025-10-13T05:48:33.774174719Z" level=info msg="StartContainer for \"8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155\"" Oct 13 05:48:33.775211 containerd[1605]: time="2025-10-13T05:48:33.775187975Z" level=info msg="connecting to shim 8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155" address="unix:///run/containerd/s/ce42279a9bf4cfad38929b2d723fa1b59eff3085011145794cd7b113345a6db7" protocol=ttrpc version=3 Oct 13 05:48:33.792198 systemd[1]: Started cri-containerd-8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155.scope - libcontainer container 8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155. 
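[Editor's note] The CreateContainer/StartContainer pair above is the usual CRI sequence once a sandbox exists: the kubelet asks the runtime to create the coredns container inside sandbox cc270b1f… and then start it. The result can be confirmed from the node with crictl pointed at containerd's CRI socket; the snippet below is a minimal sketch assuming crictl is installed and configured (typically against /run/containerd/containerd.sock).

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a crictl subcommand and prints its combined output.
func run(args ...string) {
	out, err := exec.Command("crictl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("crictl %v failed: %v\n", args, err)
	}
	fmt.Print(string(out))
}

func main() {
	run("pods", "--name", "coredns-668d6bf9bc-4dfnd") // the sandbox created above
	run("ps", "--name", "coredns")                    // the container started inside it
}
```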
Oct 13 05:48:33.814503 containerd[1605]: time="2025-10-13T05:48:33.814394194Z" level=info msg="StartContainer for \"8ca9aef50eb5e35a04ed0a8d7be9d314e32f8a139f413f8c1883f9b5dfea2155\" returns successfully" Oct 13 05:48:34.220427 systemd-networkd[1480]: calib5a6bc33c18: Gained IPv6LL Oct 13 05:48:34.350200 systemd-networkd[1480]: cali28af604fe26: Gained IPv6LL Oct 13 05:48:34.534470 containerd[1605]: time="2025-10-13T05:48:34.534442940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84877b6f9c-pmn7k,Uid:91163fe1-984e-4665-afab-2c238d953fe5,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:34.539942 containerd[1605]: time="2025-10-13T05:48:34.539876684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7bmmr,Uid:e15a9cb7-86a1-4119-ba3d-9eb41ba0200c,Namespace:calico-system,Attempt:0,}" Oct 13 05:48:34.636662 systemd-networkd[1480]: califbdfbe6c583: Link UP Oct 13 05:48:34.637603 systemd-networkd[1480]: califbdfbe6c583: Gained carrier Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.566 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.576 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0 goldmane-54d579b49d- calico-system e15a9cb7-86a1-4119-ba3d-9eb41ba0200c 788 0 2025-10-13 05:48:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e goldmane-54d579b49d-7bmmr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califbdfbe6c583 [] [] }} ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.576 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.597 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" HandleID="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Workload="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.597 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" HandleID="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Workload="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-7af444862e", "pod":"goldmane-54d579b49d-7bmmr", "timestamp":"2025-10-13 05:48:34.597382377 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.597 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.597 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.597 [INFO][4645] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.603 [INFO][4645] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.608 [INFO][4645] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.612 [INFO][4645] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.613 [INFO][4645] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.615 [INFO][4645] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.616 [INFO][4645] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.617 [INFO][4645] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.623 [INFO][4645] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.630 [INFO][4645] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.6/26] block=192.168.76.0/26 handle="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.630 [INFO][4645] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.6/26] handle="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.630 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:48:34.652062 containerd[1605]: 2025-10-13 05:48:34.630 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.6/26] IPv6=[] ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" HandleID="k8s-pod-network.51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Workload="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.652873 containerd[1605]: 2025-10-13 05:48:34.633 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e15a9cb7-86a1-4119-ba3d-9eb41ba0200c", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"goldmane-54d579b49d-7bmmr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbdfbe6c583", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:34.652873 containerd[1605]: 2025-10-13 05:48:34.633 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.6/32] ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.652873 containerd[1605]: 2025-10-13 05:48:34.633 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbdfbe6c583 ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.652873 containerd[1605]: 2025-10-13 05:48:34.638 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.652873 containerd[1605]: 2025-10-13 05:48:34.638 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" 
Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e15a9cb7-86a1-4119-ba3d-9eb41ba0200c", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f", Pod:"goldmane-54d579b49d-7bmmr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califbdfbe6c583", MAC:"de:d5:7e:3e:0e:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:34.652873 containerd[1605]: 2025-10-13 05:48:34.646 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" Namespace="calico-system" Pod="goldmane-54d579b49d-7bmmr" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-goldmane--54d579b49d--7bmmr-eth0" Oct 13 05:48:34.681287 containerd[1605]: time="2025-10-13T05:48:34.681248033Z" level=info msg="connecting to shim 51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f" address="unix:///run/containerd/s/7c24193093d8bd49bdffc4e793ec6d1b797c71259941d3c925e7018681037ee8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:34.704631 systemd[1]: Started cri-containerd-51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f.scope - libcontainer container 51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f. 
Oct 13 05:48:34.728925 kubelet[2744]: I1013 05:48:34.728683 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4dfnd" podStartSLOduration=36.728665154 podStartE2EDuration="36.728665154s" podCreationTimestamp="2025-10-13 05:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:48:34.728368566 +0000 UTC m=+41.271272409" watchObservedRunningTime="2025-10-13 05:48:34.728665154 +0000 UTC m=+41.271568997" Oct 13 05:48:34.774177 systemd-networkd[1480]: calicae11ecc8b6: Link UP Oct 13 05:48:34.774882 systemd-networkd[1480]: calicae11ecc8b6: Gained carrier Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.571 [INFO][4619] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.584 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0 calico-kube-controllers-84877b6f9c- calico-system 91163fe1-984e-4665-afab-2c238d953fe5 784 0 2025-10-13 05:48:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84877b6f9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e calico-kube-controllers-84877b6f9c-pmn7k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicae11ecc8b6 [] [] }} ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.584 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.627 [INFO][4651] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" HandleID="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.628 [INFO][4651] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" HandleID="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-c-7af444862e", "pod":"calico-kube-controllers-84877b6f9c-pmn7k", "timestamp":"2025-10-13 05:48:34.627608982 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.628 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.630 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.630 [INFO][4651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.703 [INFO][4651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.713 [INFO][4651] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.722 [INFO][4651] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.727 [INFO][4651] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.737 [INFO][4651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.737 [INFO][4651] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.741 [INFO][4651] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5 Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.746 [INFO][4651] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.761 [INFO][4651] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.7/26] block=192.168.76.0/26 handle="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.761 [INFO][4651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.7/26] handle="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.761 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
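[Editor's note] The kubelet entry a little further up ("Observed pod startup duration … podStartSLOduration=36.728665154" for coredns-668d6bf9bc-4dfnd) is simply the gap between the pod's creation timestamp (05:47:58) and the time the kubelet observed it running (05:48:34.728…); the pull timestamps are the zero time because no image pull was needed. The arithmetic can be reproduced directly from the two timestamps printed in that entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-10-13 05:47:58 +0000 UTC")
	running, _ := time.Parse(layout, "2025-10-13 05:48:34.728665154 +0000 UTC")
	fmt.Println(running.Sub(created)) // 36.728665154s, the podStartSLOduration reported above
}
```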
Oct 13 05:48:34.788132 containerd[1605]: 2025-10-13 05:48:34.761 [INFO][4651] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.7/26] IPv6=[] ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" HandleID="k8s-pod-network.491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Workload="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.788677 containerd[1605]: 2025-10-13 05:48:34.769 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0", GenerateName:"calico-kube-controllers-84877b6f9c-", Namespace:"calico-system", SelfLink:"", UID:"91163fe1-984e-4665-afab-2c238d953fe5", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84877b6f9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"calico-kube-controllers-84877b6f9c-pmn7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicae11ecc8b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:34.788677 containerd[1605]: 2025-10-13 05:48:34.769 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.7/32] ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.788677 containerd[1605]: 2025-10-13 05:48:34.769 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicae11ecc8b6 ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.788677 containerd[1605]: 2025-10-13 05:48:34.771 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" 
WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.788677 containerd[1605]: 2025-10-13 05:48:34.771 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0", GenerateName:"calico-kube-controllers-84877b6f9c-", Namespace:"calico-system", SelfLink:"", UID:"91163fe1-984e-4665-afab-2c238d953fe5", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84877b6f9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5", Pod:"calico-kube-controllers-84877b6f9c-pmn7k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicae11ecc8b6", MAC:"26:55:c6:6e:ce:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:34.788677 containerd[1605]: 2025-10-13 05:48:34.780 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" Namespace="calico-system" Pod="calico-kube-controllers-84877b6f9c-pmn7k" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-calico--kube--controllers--84877b6f9c--pmn7k-eth0" Oct 13 05:48:34.817781 containerd[1605]: time="2025-10-13T05:48:34.817285457Z" level=info msg="connecting to shim 491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5" address="unix:///run/containerd/s/c525344a4ce5946d7a9e73d2351252033b989cbd9927dafa7d5505e3c1496496" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:34.844969 containerd[1605]: time="2025-10-13T05:48:34.844945364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7bmmr,Uid:e15a9cb7-86a1-4119-ba3d-9eb41ba0200c,Namespace:calico-system,Attempt:0,} returns sandbox id \"51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f\"" Oct 13 05:48:34.845214 systemd[1]: Started cri-containerd-491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5.scope - libcontainer container 491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5. 
Oct 13 05:48:34.885706 containerd[1605]: time="2025-10-13T05:48:34.885683028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84877b6f9c-pmn7k,Uid:91163fe1-984e-4665-afab-2c238d953fe5,Namespace:calico-system,Attempt:0,} returns sandbox id \"491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5\"" Oct 13 05:48:34.924264 systemd-networkd[1480]: cali2d30c10f47c: Gained IPv6LL Oct 13 05:48:34.988174 systemd-networkd[1480]: cali3575f2d242b: Gained IPv6LL Oct 13 05:48:35.256970 containerd[1605]: time="2025-10-13T05:48:35.256936969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:35.257965 containerd[1605]: time="2025-10-13T05:48:35.257924625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:48:35.259376 containerd[1605]: time="2025-10-13T05:48:35.259354478Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:35.261204 containerd[1605]: time="2025-10-13T05:48:35.261149011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:35.261769 containerd[1605]: time="2025-10-13T05:48:35.261724768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.482866871s" Oct 13 05:48:35.261802 containerd[1605]: time="2025-10-13T05:48:35.261777817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:48:35.263002 containerd[1605]: time="2025-10-13T05:48:35.262950853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:48:35.264197 containerd[1605]: time="2025-10-13T05:48:35.264186506Z" level=info msg="CreateContainer within sandbox \"3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:48:35.270123 containerd[1605]: time="2025-10-13T05:48:35.270092060Z" level=info msg="Container 95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:35.276508 containerd[1605]: time="2025-10-13T05:48:35.276419291Z" level=info msg="CreateContainer within sandbox \"3cec57129441c906171fbcc75eede9eba8dcd46483222a73b72fb0837d21b9a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0\"" Oct 13 05:48:35.277247 containerd[1605]: time="2025-10-13T05:48:35.277229498Z" level=info msg="StartContainer for \"95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0\"" Oct 13 05:48:35.278765 containerd[1605]: time="2025-10-13T05:48:35.278624511Z" level=info msg="connecting to shim 95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0" 
address="unix:///run/containerd/s/fe2d270e693d5a9f836369948b56d3e8f707f60ff44322cb25fe7733c6ceb70a" protocol=ttrpc version=3 Oct 13 05:48:35.300180 systemd[1]: Started cri-containerd-95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0.scope - libcontainer container 95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0. Oct 13 05:48:35.336896 containerd[1605]: time="2025-10-13T05:48:35.336867847Z" level=info msg="StartContainer for \"95086f64fd0845b1c5168af49bb616aa4e379859d8656bb75f91a4a33eceb2d0\" returns successfully" Oct 13 05:48:35.535229 containerd[1605]: time="2025-10-13T05:48:35.534990446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vvxlh,Uid:ae76f9e1-2f01-4f66-93c3-1fecf29937c6,Namespace:kube-system,Attempt:0,}" Oct 13 05:48:35.669143 systemd-networkd[1480]: cali81cc9e1b6fd: Link UP Oct 13 05:48:35.669617 systemd-networkd[1480]: cali81cc9e1b6fd: Gained carrier Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.580 [INFO][4828] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.591 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0 coredns-668d6bf9bc- kube-system ae76f9e1-2f01-4f66-93c3-1fecf29937c6 786 0 2025-10-13 05:47:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-c-7af444862e coredns-668d6bf9bc-vvxlh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81cc9e1b6fd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.591 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.622 [INFO][4840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" HandleID="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Workload="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.622 [INFO][4840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" HandleID="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Workload="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd9f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-c-7af444862e", "pod":"coredns-668d6bf9bc-vvxlh", "timestamp":"2025-10-13 05:48:35.62225998 +0000 UTC"}, Hostname:"ci-4459-1-0-c-7af444862e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.622 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.622 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.622 [INFO][4840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-c-7af444862e' Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.634 [INFO][4840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.639 [INFO][4840] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.644 [INFO][4840] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.646 [INFO][4840] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.648 [INFO][4840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.648 [INFO][4840] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.650 [INFO][4840] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153 Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.656 [INFO][4840] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.664 [INFO][4840] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.76.8/26] block=192.168.76.0/26 handle="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.664 [INFO][4840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.8/26] handle="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" host="ci-4459-1-0-c-7af444862e" Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.664 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
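[Editor's note] The image pull logged further up finished in 2.482866871s, with containerd reporting "bytes read=47333864" for ghcr.io/flatcar/calico/apiserver:v3.30.3 (resolved size 48826583 bytes). That puts the effective transfer rate at roughly 18 MiB/s; a quick check of the arithmetic, using the figures exactly as they appear in the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 47333864  // "bytes read" reported when the pull stopped
	const seconds = 2.482866871 // reported pull duration for calico/apiserver v3.30.3
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈ 18.2 MiB/s
}
```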
Oct 13 05:48:35.684361 containerd[1605]: 2025-10-13 05:48:35.664 [INFO][4840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.76.8/26] IPv6=[] ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" HandleID="k8s-pod-network.02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Workload="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.685358 containerd[1605]: 2025-10-13 05:48:35.666 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ae76f9e1-2f01-4f66-93c3-1fecf29937c6", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"", Pod:"coredns-668d6bf9bc-vvxlh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81cc9e1b6fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:35.685358 containerd[1605]: 2025-10-13 05:48:35.666 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.8/32] ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.685358 containerd[1605]: 2025-10-13 05:48:35.666 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81cc9e1b6fd ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.685358 containerd[1605]: 2025-10-13 05:48:35.670 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.685358 containerd[1605]: 2025-10-13 05:48:35.670 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ae76f9e1-2f01-4f66-93c3-1fecf29937c6", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-c-7af444862e", ContainerID:"02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153", Pod:"coredns-668d6bf9bc-vvxlh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81cc9e1b6fd", MAC:"92:78:27:53:60:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:48:35.685358 containerd[1605]: 2025-10-13 05:48:35.681 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" Namespace="kube-system" Pod="coredns-668d6bf9bc-vvxlh" WorkloadEndpoint="ci--4459--1--0--c--7af444862e-k8s-coredns--668d6bf9bc--vvxlh-eth0" Oct 13 05:48:35.705223 containerd[1605]: time="2025-10-13T05:48:35.705173724Z" level=info msg="connecting to shim 02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153" address="unix:///run/containerd/s/8a8190beef9d741b0bf4307e4ced554d3a79225cead2b0dd35a9c6e28f91b87b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:48:35.730431 systemd[1]: Started cri-containerd-02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153.scope - libcontainer container 02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153. 
Oct 13 05:48:35.770905 containerd[1605]: time="2025-10-13T05:48:35.770874995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vvxlh,Uid:ae76f9e1-2f01-4f66-93c3-1fecf29937c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153\"" Oct 13 05:48:35.774629 containerd[1605]: time="2025-10-13T05:48:35.774596319Z" level=info msg="CreateContainer within sandbox \"02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:48:35.787820 containerd[1605]: time="2025-10-13T05:48:35.787154271Z" level=info msg="Container 24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:35.793354 containerd[1605]: time="2025-10-13T05:48:35.793339063Z" level=info msg="CreateContainer within sandbox \"02e9f1108d85219e5d00d3f99797296e1924787e280f1294e7d5118e9a3c9153\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2\"" Oct 13 05:48:35.793853 containerd[1605]: time="2025-10-13T05:48:35.793840251Z" level=info msg="StartContainer for \"24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2\"" Oct 13 05:48:35.794350 containerd[1605]: time="2025-10-13T05:48:35.794335928Z" level=info msg="connecting to shim 24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2" address="unix:///run/containerd/s/8a8190beef9d741b0bf4307e4ced554d3a79225cead2b0dd35a9c6e28f91b87b" protocol=ttrpc version=3 Oct 13 05:48:35.810442 systemd[1]: Started cri-containerd-24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2.scope - libcontainer container 24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2. 
Oct 13 05:48:35.830425 containerd[1605]: time="2025-10-13T05:48:35.830380625Z" level=info msg="StartContainer for \"24d6016a47970abc4d2a4e2d7499c8506b2117c7454f93c89538fd8b019a89d2\" returns successfully" Oct 13 05:48:35.905143 containerd[1605]: time="2025-10-13T05:48:35.905102235Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:35.906652 containerd[1605]: time="2025-10-13T05:48:35.906626109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:48:35.907420 containerd[1605]: time="2025-10-13T05:48:35.907396775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 644.419173ms" Oct 13 05:48:35.907448 containerd[1605]: time="2025-10-13T05:48:35.907423225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:48:35.908352 containerd[1605]: time="2025-10-13T05:48:35.908330471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:48:35.910504 containerd[1605]: time="2025-10-13T05:48:35.910478031Z" level=info msg="CreateContainer within sandbox \"aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:48:35.921541 containerd[1605]: time="2025-10-13T05:48:35.921507761Z" level=info msg="Container 905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:35.937117 containerd[1605]: time="2025-10-13T05:48:35.936502583Z" level=info msg="CreateContainer within sandbox \"aba8e44e9ea9d104ce6943d174baca69031d5986d3b5cbc1639b2e6da52d294e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718\"" Oct 13 05:48:35.939049 containerd[1605]: time="2025-10-13T05:48:35.939020681Z" level=info msg="StartContainer for \"905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718\"" Oct 13 05:48:35.939998 containerd[1605]: time="2025-10-13T05:48:35.939984157Z" level=info msg="connecting to shim 905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718" address="unix:///run/containerd/s/a7f7177e226ac79998a277c997837afe939ba1d0b46537849a9d3b18110df478" protocol=ttrpc version=3 Oct 13 05:48:35.948179 systemd-networkd[1480]: calicae11ecc8b6: Gained IPv6LL Oct 13 05:48:35.981805 systemd[1]: Started cri-containerd-905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718.scope - libcontainer container 905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718. 
Oct 13 05:48:36.029091 containerd[1605]: time="2025-10-13T05:48:36.028250693Z" level=info msg="StartContainer for \"905f78f9c6ceee36f48287f920685c5b3b110b0ca1e6a20e8794845d7a1fd718\" returns successfully" Oct 13 05:48:36.396210 systemd-networkd[1480]: califbdfbe6c583: Gained IPv6LL Oct 13 05:48:36.746448 kubelet[2744]: I1013 05:48:36.746348 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fbdc59896-lrn4h" podStartSLOduration=24.701139285 podStartE2EDuration="27.746335116s" podCreationTimestamp="2025-10-13 05:48:09 +0000 UTC" firstStartedPulling="2025-10-13 05:48:32.862861381 +0000 UTC m=+39.405765214" lastFinishedPulling="2025-10-13 05:48:35.908057202 +0000 UTC m=+42.450961045" observedRunningTime="2025-10-13 05:48:36.744646413 +0000 UTC m=+43.287550246" watchObservedRunningTime="2025-10-13 05:48:36.746335116 +0000 UTC m=+43.289238959" Oct 13 05:48:36.747197 kubelet[2744]: I1013 05:48:36.746566 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fbdc59896-jhzdf" podStartSLOduration=25.262269121 podStartE2EDuration="27.746562845s" podCreationTimestamp="2025-10-13 05:48:09 +0000 UTC" firstStartedPulling="2025-10-13 05:48:32.778435069 +0000 UTC m=+39.321338902" lastFinishedPulling="2025-10-13 05:48:35.262728793 +0000 UTC m=+41.805632626" observedRunningTime="2025-10-13 05:48:35.732470269 +0000 UTC m=+42.275374112" watchObservedRunningTime="2025-10-13 05:48:36.746562845 +0000 UTC m=+43.289466678" Oct 13 05:48:36.749348 kubelet[2744]: I1013 05:48:36.721775 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:48:36.756886 kubelet[2744]: I1013 05:48:36.756841 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vvxlh" podStartSLOduration=38.756832161 podStartE2EDuration="38.756832161s" podCreationTimestamp="2025-10-13 05:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:48:36.754994489 +0000 UTC m=+43.297898332" watchObservedRunningTime="2025-10-13 05:48:36.756832161 +0000 UTC m=+43.299736004" Oct 13 05:48:37.164641 systemd-networkd[1480]: cali81cc9e1b6fd: Gained IPv6LL Oct 13 05:48:37.716900 containerd[1605]: time="2025-10-13T05:48:37.716837480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:37.727735 containerd[1605]: time="2025-10-13T05:48:37.727691337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:48:37.728698 containerd[1605]: time="2025-10-13T05:48:37.728617393Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:37.730881 containerd[1605]: time="2025-10-13T05:48:37.730829844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:37.731591 containerd[1605]: time="2025-10-13T05:48:37.731554311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo 
digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.82319559s" Oct 13 05:48:37.731718 containerd[1605]: time="2025-10-13T05:48:37.731603111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:48:37.732917 containerd[1605]: time="2025-10-13T05:48:37.732736487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:48:37.735778 containerd[1605]: time="2025-10-13T05:48:37.735715204Z" level=info msg="CreateContainer within sandbox \"d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:48:37.794285 containerd[1605]: time="2025-10-13T05:48:37.794241118Z" level=info msg="Container a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:37.848388 containerd[1605]: time="2025-10-13T05:48:37.848336779Z" level=info msg="CreateContainer within sandbox \"d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01\"" Oct 13 05:48:37.849020 containerd[1605]: time="2025-10-13T05:48:37.848991276Z" level=info msg="StartContainer for \"a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01\"" Oct 13 05:48:37.850721 containerd[1605]: time="2025-10-13T05:48:37.850681220Z" level=info msg="connecting to shim a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01" address="unix:///run/containerd/s/90bb79d9d97498e6f07bc7842ce8b4e13ee04b67605146332a5b689652d97f97" protocol=ttrpc version=3 Oct 13 05:48:37.874209 systemd[1]: Started cri-containerd-a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01.scope - libcontainer container a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01. Oct 13 05:48:37.923192 containerd[1605]: time="2025-10-13T05:48:37.923165676Z" level=info msg="StartContainer for \"a69ebb10d821623942fa784e304b3b991f31eb8c0dc1452564661c8afa07cc01\" returns successfully" Oct 13 05:48:39.807525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195855905.mount: Deactivated successfully. 
Oct 13 05:48:41.001514 containerd[1605]: time="2025-10-13T05:48:41.001458495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:41.019544 containerd[1605]: time="2025-10-13T05:48:41.002327401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:48:41.025868 containerd[1605]: time="2025-10-13T05:48:41.025493448Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:41.028171 containerd[1605]: time="2025-10-13T05:48:41.028129739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:41.028787 containerd[1605]: time="2025-10-13T05:48:41.028373268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.295510262s" Oct 13 05:48:41.028787 containerd[1605]: time="2025-10-13T05:48:41.028394427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:48:41.029820 containerd[1605]: time="2025-10-13T05:48:41.029804224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:48:41.031057 containerd[1605]: time="2025-10-13T05:48:41.031037499Z" level=info msg="CreateContainer within sandbox \"51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:48:41.041399 containerd[1605]: time="2025-10-13T05:48:41.040005161Z" level=info msg="Container 382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:41.049099 containerd[1605]: time="2025-10-13T05:48:41.049029502Z" level=info msg="CreateContainer within sandbox \"51b858148c376b1b8ac863eed05b246850196cb56618fef3c8ceb4cb97aacd4f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\"" Oct 13 05:48:41.049779 containerd[1605]: time="2025-10-13T05:48:41.049696900Z" level=info msg="StartContainer for \"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\"" Oct 13 05:48:41.050694 containerd[1605]: time="2025-10-13T05:48:41.050674966Z" level=info msg="connecting to shim 382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9" address="unix:///run/containerd/s/7c24193093d8bd49bdffc4e793ec6d1b797c71259941d3c925e7018681037ee8" protocol=ttrpc version=3 Oct 13 05:48:41.092619 systemd[1]: Started cri-containerd-382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9.scope - libcontainer container 382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9. 
Oct 13 05:48:41.132966 containerd[1605]: time="2025-10-13T05:48:41.132928532Z" level=info msg="StartContainer for \"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" returns successfully" Oct 13 05:48:41.194339 kubelet[2744]: I1013 05:48:41.194012 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:48:41.955959 containerd[1605]: time="2025-10-13T05:48:41.955924893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"e29332c674c2e0505c774dee2bd9e1552884ceb6bfaacde22b70bb3272e5e84e\" pid:5188 exit_status:1 exited_at:{seconds:1760334521 nanos:941809669}" Oct 13 05:48:42.298240 systemd-networkd[1480]: vxlan.calico: Link UP Oct 13 05:48:42.298245 systemd-networkd[1480]: vxlan.calico: Gained carrier Oct 13 05:48:42.884319 containerd[1605]: time="2025-10-13T05:48:42.884278490Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"4dc766a0699982633a265dfbda6874d66451e26e41264211a3eb4c618ad60e8d\" pid:5328 exit_status:1 exited_at:{seconds:1760334522 nanos:883875201}" Oct 13 05:48:43.564225 systemd-networkd[1480]: vxlan.calico: Gained IPv6LL Oct 13 05:48:43.881999 containerd[1605]: time="2025-10-13T05:48:43.881899349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"1f0f6ae54fd39d392087749a30664dd0ecda717c956b97739e74704df0aeca42\" pid:5353 exit_status:1 exited_at:{seconds:1760334523 nanos:881050202}" Oct 13 05:48:45.328285 containerd[1605]: time="2025-10-13T05:48:45.328163977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:45.333524 containerd[1605]: time="2025-10-13T05:48:45.333491974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:48:45.335829 containerd[1605]: time="2025-10-13T05:48:45.335784718Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:45.342323 containerd[1605]: time="2025-10-13T05:48:45.342030831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:45.342323 containerd[1605]: time="2025-10-13T05:48:45.342285051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.312392758s" Oct 13 05:48:45.342850 containerd[1605]: time="2025-10-13T05:48:45.342456780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:48:45.343515 containerd[1605]: time="2025-10-13T05:48:45.343461927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:48:45.377950 containerd[1605]: 
time="2025-10-13T05:48:45.377339830Z" level=info msg="CreateContainer within sandbox \"491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:48:45.386156 containerd[1605]: time="2025-10-13T05:48:45.386131108Z" level=info msg="Container e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:45.395753 containerd[1605]: time="2025-10-13T05:48:45.395718233Z" level=info msg="CreateContainer within sandbox \"491e6fb1891358110432db1696a9fb1276b047746a9982b1c3fa58b28a471fb5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\"" Oct 13 05:48:45.397173 containerd[1605]: time="2025-10-13T05:48:45.396609230Z" level=info msg="StartContainer for \"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\"" Oct 13 05:48:45.398177 containerd[1605]: time="2025-10-13T05:48:45.398146737Z" level=info msg="connecting to shim e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d" address="unix:///run/containerd/s/c525344a4ce5946d7a9e73d2351252033b989cbd9927dafa7d5505e3c1496496" protocol=ttrpc version=3 Oct 13 05:48:45.421231 systemd[1]: Started cri-containerd-e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d.scope - libcontainer container e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d. Oct 13 05:48:45.479970 containerd[1605]: time="2025-10-13T05:48:45.479926406Z" level=info msg="StartContainer for \"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" returns successfully" Oct 13 05:48:45.837734 kubelet[2744]: I1013 05:48:45.837342 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-7bmmr" podStartSLOduration=28.654599573 podStartE2EDuration="34.837053564s" podCreationTimestamp="2025-10-13 05:48:11 +0000 UTC" firstStartedPulling="2025-10-13 05:48:34.846965354 +0000 UTC m=+41.389869197" lastFinishedPulling="2025-10-13 05:48:41.029419345 +0000 UTC m=+47.572323188" observedRunningTime="2025-10-13 05:48:41.790333176 +0000 UTC m=+48.333237009" watchObservedRunningTime="2025-10-13 05:48:45.837053564 +0000 UTC m=+52.379957407" Oct 13 05:48:45.837734 kubelet[2744]: I1013 05:48:45.837430 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84877b6f9c-pmn7k" podStartSLOduration=23.380995968 podStartE2EDuration="33.837423674s" podCreationTimestamp="2025-10-13 05:48:12 +0000 UTC" firstStartedPulling="2025-10-13 05:48:34.886845083 +0000 UTC m=+41.429748926" lastFinishedPulling="2025-10-13 05:48:45.343272799 +0000 UTC m=+51.886176632" observedRunningTime="2025-10-13 05:48:45.835765677 +0000 UTC m=+52.378669510" watchObservedRunningTime="2025-10-13 05:48:45.837423674 +0000 UTC m=+52.380327517" Oct 13 05:48:45.854018 containerd[1605]: time="2025-10-13T05:48:45.853993091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"58560244ee1a3913e3eaacfc6fc8022d33a538189fda037fd97e2277c2db32b0\" pid:5431 exited_at:{seconds:1760334525 nanos:853695542}" Oct 13 05:48:47.293463 containerd[1605]: time="2025-10-13T05:48:47.293335428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 
05:48:47.294646 containerd[1605]: time="2025-10-13T05:48:47.294612554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:48:47.296171 containerd[1605]: time="2025-10-13T05:48:47.296126892Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:47.297963 containerd[1605]: time="2025-10-13T05:48:47.297940137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:48:47.298323 containerd[1605]: time="2025-10-13T05:48:47.298203917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.95471532s" Oct 13 05:48:47.298323 containerd[1605]: time="2025-10-13T05:48:47.298221987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:48:47.302648 containerd[1605]: time="2025-10-13T05:48:47.302594956Z" level=info msg="CreateContainer within sandbox \"d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:48:47.309174 containerd[1605]: time="2025-10-13T05:48:47.309136211Z" level=info msg="Container 3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:48:47.325796 containerd[1605]: time="2025-10-13T05:48:47.325762533Z" level=info msg="CreateContainer within sandbox \"d5083a40db6d0da1abfae4233817cfabbfd5379e0bd233daa051fbc9af72a106\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3\"" Oct 13 05:48:47.326888 containerd[1605]: time="2025-10-13T05:48:47.326815380Z" level=info msg="StartContainer for \"3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3\"" Oct 13 05:48:47.328350 containerd[1605]: time="2025-10-13T05:48:47.328327786Z" level=info msg="connecting to shim 3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3" address="unix:///run/containerd/s/90bb79d9d97498e6f07bc7842ce8b4e13ee04b67605146332a5b689652d97f97" protocol=ttrpc version=3 Oct 13 05:48:47.356784 systemd[1]: Started cri-containerd-3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3.scope - libcontainer container 3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3. 
Oct 13 05:48:47.399181 containerd[1605]: time="2025-10-13T05:48:47.399153423Z" level=info msg="StartContainer for \"3f9cda5e9a69ad3fe3faf019d07bb24cdcd1a0be370327b42c437607c81058d3\" returns successfully" Oct 13 05:48:47.850030 kubelet[2744]: I1013 05:48:47.849666 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v722c" podStartSLOduration=21.48626829 podStartE2EDuration="35.849644018s" podCreationTimestamp="2025-10-13 05:48:12 +0000 UTC" firstStartedPulling="2025-10-13 05:48:32.935550487 +0000 UTC m=+39.478454320" lastFinishedPulling="2025-10-13 05:48:47.298926215 +0000 UTC m=+53.841830048" observedRunningTime="2025-10-13 05:48:47.848300992 +0000 UTC m=+54.391204855" watchObservedRunningTime="2025-10-13 05:48:47.849644018 +0000 UTC m=+54.392547881" Oct 13 05:48:47.909674 kubelet[2744]: I1013 05:48:47.906817 2744 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:48:47.912414 kubelet[2744]: I1013 05:48:47.912366 2744 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:48:54.770319 containerd[1605]: time="2025-10-13T05:48:54.770265636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"effc567f127a6ea5d71f92771c633132d47bc21591eb3518cfa5f8ef4b044fac\" pid:5506 exited_at:{seconds:1760334534 nanos:769982736}" Oct 13 05:48:57.795827 containerd[1605]: time="2025-10-13T05:48:57.795787005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"45b1bdad201629b180c715bbc439d031c7c1ee04742aa7067dbaf797b085e777\" pid:5528 exited_at:{seconds:1760334537 nanos:795350865}" Oct 13 05:48:58.839662 kubelet[2744]: I1013 05:48:58.839608 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:49:03.408969 containerd[1605]: time="2025-10-13T05:49:03.408817519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"a4cca7c2e46542d83da9221caa64c502735262e357751364be4ddb56723ad581\" pid:5568 exited_at:{seconds:1760334543 nanos:408569589}" Oct 13 05:49:14.063410 containerd[1605]: time="2025-10-13T05:49:14.063340076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"a8097dacadb5aa9474e0981f03675fee74fddafdba3a27bdfada0371f0fc2d5a\" pid:5590 exited_at:{seconds:1760334554 nanos:61446012}" Oct 13 05:49:15.880675 containerd[1605]: time="2025-10-13T05:49:15.880518316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"f4aed8e30808383725755652527d4a44939b5a747363db28844cf6a266ef42b9\" pid:5613 exited_at:{seconds:1760334555 nanos:880066308}" Oct 13 05:49:27.781707 containerd[1605]: time="2025-10-13T05:49:27.781654710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"fbfb93b2aeddd7d8bd08a8ccd55c320c03a9bd206c9119d8cce7a5906a712043\" pid:5642 exited_at:{seconds:1760334567 nanos:781332546}" Oct 13 05:49:43.983364 containerd[1605]: time="2025-10-13T05:49:43.983265233Z" level=info 
msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"3c2440ec12f100d681f7a5b3f48c64ec1d1207095504d3706781d1c47062046e\" pid:5668 exited_at:{seconds:1760334583 nanos:982896777}" Oct 13 05:49:45.836572 containerd[1605]: time="2025-10-13T05:49:45.830778023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"ed87f7b8a9e9e84c98d8d89ffa3b8ebb7ae454b93c0924da4578c49473fab116\" pid:5694 exited_at:{seconds:1760334585 nanos:830465816}" Oct 13 05:49:51.729801 systemd[1]: Started sshd@9-65.108.221.100:22-147.75.109.163:50384.service - OpenSSH per-connection server daemon (147.75.109.163:50384). Oct 13 05:49:52.817371 sshd[5710]: Accepted publickey for core from 147.75.109.163 port 50384 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:49:52.821637 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:49:52.831228 systemd-logind[1580]: New session 8 of user core. Oct 13 05:49:52.839413 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:49:54.101583 sshd[5713]: Connection closed by 147.75.109.163 port 50384 Oct 13 05:49:54.100762 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Oct 13 05:49:54.113894 systemd[1]: sshd@9-65.108.221.100:22-147.75.109.163:50384.service: Deactivated successfully. Oct 13 05:49:54.115583 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:49:54.115977 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:49:54.118562 systemd-logind[1580]: Removed session 8. Oct 13 05:49:54.776702 containerd[1605]: time="2025-10-13T05:49:54.776637769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"2bf04c53fbdef86227d922c96e6302cee9ebd28377c9ab149ae0a7ddefaa9e61\" pid:5741 exited_at:{seconds:1760334594 nanos:776278372}" Oct 13 05:49:57.769667 containerd[1605]: time="2025-10-13T05:49:57.769485450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"d22a73804fce8e87dc1b43ec7123c2005a9a67b8b8e05656609d0741509f8d03\" pid:5764 exited_at:{seconds:1760334597 nanos:769127693}" Oct 13 05:49:59.279543 systemd[1]: Started sshd@10-65.108.221.100:22-147.75.109.163:45994.service - OpenSSH per-connection server daemon (147.75.109.163:45994). Oct 13 05:50:00.358599 sshd[5777]: Accepted publickey for core from 147.75.109.163 port 45994 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:00.359960 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:00.363682 systemd-logind[1580]: New session 9 of user core. Oct 13 05:50:00.370233 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:50:01.232500 sshd[5782]: Connection closed by 147.75.109.163 port 45994 Oct 13 05:50:01.233429 sshd-session[5777]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:01.237041 systemd[1]: sshd@10-65.108.221.100:22-147.75.109.163:45994.service: Deactivated successfully. Oct 13 05:50:01.238496 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:50:01.239108 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:50:01.240015 systemd-logind[1580]: Removed session 9. 
Oct 13 05:50:03.419996 containerd[1605]: time="2025-10-13T05:50:03.419935319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"11f8c0285bd65e8a775797ee21263d13028b46960acf3fd8f2f3cb9b7f2e7136\" pid:5815 exited_at:{seconds:1760334603 nanos:404847546}" Oct 13 05:50:06.447338 systemd[1]: Started sshd@11-65.108.221.100:22-147.75.109.163:34018.service - OpenSSH per-connection server daemon (147.75.109.163:34018). Oct 13 05:50:07.584340 sshd[5825]: Accepted publickey for core from 147.75.109.163 port 34018 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:07.586236 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:07.593801 systemd-logind[1580]: New session 10 of user core. Oct 13 05:50:07.604326 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:50:08.437240 sshd[5828]: Connection closed by 147.75.109.163 port 34018 Oct 13 05:50:08.438865 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:08.444047 systemd[1]: sshd@11-65.108.221.100:22-147.75.109.163:34018.service: Deactivated successfully. Oct 13 05:50:08.446778 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:50:08.449212 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:50:08.452448 systemd-logind[1580]: Removed session 10. Oct 13 05:50:08.632936 systemd[1]: Started sshd@12-65.108.221.100:22-147.75.109.163:34034.service - OpenSSH per-connection server daemon (147.75.109.163:34034). Oct 13 05:50:09.786572 sshd[5841]: Accepted publickey for core from 147.75.109.163 port 34034 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:09.789752 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:09.799549 systemd-logind[1580]: New session 11 of user core. Oct 13 05:50:09.805428 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:50:10.697220 sshd[5844]: Connection closed by 147.75.109.163 port 34034 Oct 13 05:50:10.700595 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:10.705564 systemd[1]: sshd@12-65.108.221.100:22-147.75.109.163:34034.service: Deactivated successfully. Oct 13 05:50:10.707531 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:50:10.708194 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:50:10.709648 systemd-logind[1580]: Removed session 11. Oct 13 05:50:10.860934 systemd[1]: Started sshd@13-65.108.221.100:22-147.75.109.163:34044.service - OpenSSH per-connection server daemon (147.75.109.163:34044). Oct 13 05:50:11.933722 sshd[5858]: Accepted publickey for core from 147.75.109.163 port 34044 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:11.936495 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:11.946100 systemd-logind[1580]: New session 12 of user core. Oct 13 05:50:11.953278 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:50:12.701725 sshd[5861]: Connection closed by 147.75.109.163 port 34044 Oct 13 05:50:12.702236 sshd-session[5858]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:12.705215 systemd[1]: sshd@13-65.108.221.100:22-147.75.109.163:34044.service: Deactivated successfully. 
Oct 13 05:50:12.706669 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:50:12.707621 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:50:12.708648 systemd-logind[1580]: Removed session 12. Oct 13 05:50:13.915780 containerd[1605]: time="2025-10-13T05:50:13.915735201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"b19f17dfb9fc0a66a6d2ab64664e2bf70765473f0ddacc0709955a0c4350528e\" pid:5892 exited_at:{seconds:1760334613 nanos:915449754}" Oct 13 05:50:15.863286 containerd[1605]: time="2025-10-13T05:50:15.863064394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"73d54e92f5936236ffd90f9bc28a5b78e4eac3147e647710959b51373b4ced71\" pid:5914 exited_at:{seconds:1760334615 nanos:862856825}" Oct 13 05:50:17.870609 systemd[1]: Started sshd@14-65.108.221.100:22-147.75.109.163:55982.service - OpenSSH per-connection server daemon (147.75.109.163:55982). Oct 13 05:50:18.912228 sshd[5925]: Accepted publickey for core from 147.75.109.163 port 55982 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:18.913777 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:18.918909 systemd-logind[1580]: New session 13 of user core. Oct 13 05:50:18.925222 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:50:19.756468 sshd[5941]: Connection closed by 147.75.109.163 port 55982 Oct 13 05:50:19.759113 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:19.765921 systemd[1]: sshd@14-65.108.221.100:22-147.75.109.163:55982.service: Deactivated successfully. Oct 13 05:50:19.767612 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:50:19.768862 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:50:19.770481 systemd-logind[1580]: Removed session 13. Oct 13 05:50:19.932223 systemd[1]: Started sshd@15-65.108.221.100:22-147.75.109.163:55994.service - OpenSSH per-connection server daemon (147.75.109.163:55994). Oct 13 05:50:20.976290 sshd[5953]: Accepted publickey for core from 147.75.109.163 port 55994 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:20.978929 sshd-session[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:20.988691 systemd-logind[1580]: New session 14 of user core. Oct 13 05:50:20.994395 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:50:21.957462 sshd[5956]: Connection closed by 147.75.109.163 port 55994 Oct 13 05:50:21.960742 sshd-session[5953]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:21.966106 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:50:21.966377 systemd[1]: sshd@15-65.108.221.100:22-147.75.109.163:55994.service: Deactivated successfully. Oct 13 05:50:21.967626 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:50:21.969498 systemd-logind[1580]: Removed session 14. Oct 13 05:50:22.135615 systemd[1]: Started sshd@16-65.108.221.100:22-147.75.109.163:41470.service - OpenSSH per-connection server daemon (147.75.109.163:41470). 
Oct 13 05:50:23.198632 sshd[5966]: Accepted publickey for core from 147.75.109.163 port 41470 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:23.199720 sshd-session[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:23.203386 systemd-logind[1580]: New session 15 of user core. Oct 13 05:50:23.207249 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:50:24.644008 sshd[5969]: Connection closed by 147.75.109.163 port 41470 Oct 13 05:50:24.660501 sshd-session[5966]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:24.673216 systemd[1]: sshd@16-65.108.221.100:22-147.75.109.163:41470.service: Deactivated successfully. Oct 13 05:50:24.677044 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:50:24.678793 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:50:24.681313 systemd-logind[1580]: Removed session 15. Oct 13 05:50:24.848319 systemd[1]: Started sshd@17-65.108.221.100:22-147.75.109.163:41472.service - OpenSSH per-connection server daemon (147.75.109.163:41472). Oct 13 05:50:25.970366 sshd[5987]: Accepted publickey for core from 147.75.109.163 port 41472 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:25.972968 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:25.977307 systemd-logind[1580]: New session 16 of user core. Oct 13 05:50:25.984302 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:50:27.101439 sshd[5990]: Connection closed by 147.75.109.163 port 41472 Oct 13 05:50:27.103857 sshd-session[5987]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:27.111966 systemd[1]: sshd@17-65.108.221.100:22-147.75.109.163:41472.service: Deactivated successfully. Oct 13 05:50:27.114661 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:50:27.117614 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:50:27.121313 systemd-logind[1580]: Removed session 16. Oct 13 05:50:27.254387 systemd[1]: Started sshd@18-65.108.221.100:22-147.75.109.163:41484.service - OpenSSH per-connection server daemon (147.75.109.163:41484). Oct 13 05:50:28.039098 containerd[1605]: time="2025-10-13T05:50:28.039039534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"26c79779eb4c374cf10d976a5673086674e94af1cc33516dbe2edfcf3621164f\" pid:6015 exited_at:{seconds:1760334628 nanos:38758006}" Oct 13 05:50:28.295260 sshd[6000]: Accepted publickey for core from 147.75.109.163 port 41484 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:28.296924 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:28.305280 systemd-logind[1580]: New session 17 of user core. Oct 13 05:50:28.310282 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:50:29.217977 sshd[6027]: Connection closed by 147.75.109.163 port 41484 Oct 13 05:50:29.219255 sshd-session[6000]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:29.223901 systemd[1]: sshd@18-65.108.221.100:22-147.75.109.163:41484.service: Deactivated successfully. Oct 13 05:50:29.225763 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:50:29.227446 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. 
Oct 13 05:50:29.228826 systemd-logind[1580]: Removed session 17. Oct 13 05:50:34.431708 systemd[1]: Started sshd@19-65.108.221.100:22-147.75.109.163:55398.service - OpenSSH per-connection server daemon (147.75.109.163:55398). Oct 13 05:50:35.599109 sshd[6044]: Accepted publickey for core from 147.75.109.163 port 55398 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:35.601470 sshd-session[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:35.605651 systemd-logind[1580]: New session 18 of user core. Oct 13 05:50:35.613361 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:50:36.606064 sshd[6047]: Connection closed by 147.75.109.163 port 55398 Oct 13 05:50:36.609968 sshd-session[6044]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:36.624785 systemd[1]: sshd@19-65.108.221.100:22-147.75.109.163:55398.service: Deactivated successfully. Oct 13 05:50:36.627992 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:50:36.629659 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:50:36.632467 systemd-logind[1580]: Removed session 18. Oct 13 05:50:41.765695 systemd[1]: Started sshd@20-65.108.221.100:22-147.75.109.163:55410.service - OpenSSH per-connection server daemon (147.75.109.163:55410). Oct 13 05:50:42.810719 sshd[6059]: Accepted publickey for core from 147.75.109.163 port 55410 ssh2: RSA SHA256:KFAK6EB3kO3KxWVPVEarZXWmjKqEtQlbDzP/aPS0LzU Oct 13 05:50:42.812609 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:50:42.818148 systemd-logind[1580]: New session 19 of user core. Oct 13 05:50:42.824280 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:50:43.618965 sshd[6062]: Connection closed by 147.75.109.163 port 55410 Oct 13 05:50:43.621614 sshd-session[6059]: pam_unix(sshd:session): session closed for user core Oct 13 05:50:43.627697 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:50:43.628208 systemd[1]: sshd@20-65.108.221.100:22-147.75.109.163:55410.service: Deactivated successfully. Oct 13 05:50:43.629505 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:50:43.631702 systemd-logind[1580]: Removed session 19. 
Oct 13 05:50:43.930660 containerd[1605]: time="2025-10-13T05:50:43.930476421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"f98983b4f9fc0f408eb68b2877fd40e8a23021f3adf167edd4b29d73ad8a26e0\" pid:6086 exited_at:{seconds:1760334643 nanos:929976563}" Oct 13 05:50:45.848923 containerd[1605]: time="2025-10-13T05:50:45.848866505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"dc04f72275d2e5b185885712af27e6a1cdc78e358ebfcf6234ab34871b82faae\" pid:6108 exited_at:{seconds:1760334645 nanos:848592186}" Oct 13 05:50:54.750971 containerd[1605]: time="2025-10-13T05:50:54.750925755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"64e546af8cc959cb5c5265953c8a34b8840a3d52e556bcd9bc82456726fe1898\" pid:6132 exited_at:{seconds:1760334654 nanos:750588016}" Oct 13 05:50:57.828352 containerd[1605]: time="2025-10-13T05:50:57.828307004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"14712b6d96b03b065093fcae72cfff938adcef27f8a9211ede6a8db69ee12369\" pid:6154 exited_at:{seconds:1760334657 nanos:827967905}" Oct 13 05:51:03.369178 containerd[1605]: time="2025-10-13T05:51:03.369130813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"b019f17c91ca3070edf583a12cb99ef907ae5a1abed1eacc64641622a136830a\" pid:6180 exited_at:{seconds:1760334663 nanos:368897544}" Oct 13 05:51:13.852701 containerd[1605]: time="2025-10-13T05:51:13.852642524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"382d263b73024c23d8427fb3de21e3f68f6d76b84363a5093e81ddadebb5cfa9\" id:\"d869cfde79941669f0741b15003e113e831468fd954cf3a38e39c1f560a69135\" pid:6200 exited_at:{seconds:1760334673 nanos:852382785}" Oct 13 05:51:15.864732 containerd[1605]: time="2025-10-13T05:51:15.864630270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e92890728c58b861e38c343048bd573b4f67dc1d28a30608c6706df96c1b561d\" id:\"7f37adec4378013554cb60c38de9b89668724c1f99e7d2a25be616cb9c9d6144\" pid:6222 exit_status:1 exited_at:{seconds:1760334675 nanos:864377681}" Oct 13 05:51:17.420166 systemd[1]: cri-containerd-284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b.scope: Deactivated successfully. Oct 13 05:51:17.420687 systemd[1]: cri-containerd-284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b.scope: Consumed 12.439s CPU time, 117.5M memory peak, 67.5M read from disk. 
Oct 13 05:51:17.532016 containerd[1605]: time="2025-10-13T05:51:17.531381911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\" id:\"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\" pid:3082 exit_status:1 exited_at:{seconds:1760334677 nanos:514219625}" Oct 13 05:51:17.540861 containerd[1605]: time="2025-10-13T05:51:17.540834885Z" level=info msg="received exit event container_id:\"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\" id:\"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\" pid:3082 exit_status:1 exited_at:{seconds:1760334677 nanos:514219625}" Oct 13 05:51:17.564223 systemd[1]: cri-containerd-41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727.scope: Deactivated successfully. Oct 13 05:51:17.564425 systemd[1]: cri-containerd-41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727.scope: Consumed 3.352s CPU time, 86.2M memory peak, 93.2M read from disk. Oct 13 05:51:17.571278 containerd[1605]: time="2025-10-13T05:51:17.568409302Z" level=info msg="received exit event container_id:\"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\" id:\"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\" pid:2602 exit_status:1 exited_at:{seconds:1760334677 nanos:568138683}" Oct 13 05:51:17.577104 containerd[1605]: time="2025-10-13T05:51:17.577066639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\" id:\"41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727\" pid:2602 exit_status:1 exited_at:{seconds:1760334677 nanos:568138683}" Oct 13 05:51:17.625059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b-rootfs.mount: Deactivated successfully. Oct 13 05:51:17.628407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727-rootfs.mount: Deactivated successfully. Oct 13 05:51:17.631209 systemd[1]: cri-containerd-2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3.scope: Deactivated successfully. Oct 13 05:51:17.631378 systemd[1]: cri-containerd-2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3.scope: Consumed 1.833s CPU time, 38.5M memory peak, 53.2M read from disk. 
Oct 13 05:51:17.638432 containerd[1605]: time="2025-10-13T05:51:17.638351119Z" level=info msg="received exit event container_id:\"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\" id:\"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\" pid:2583 exit_status:1 exited_at:{seconds:1760334677 nanos:633804606}" Oct 13 05:51:17.638805 containerd[1605]: time="2025-10-13T05:51:17.638690948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\" id:\"2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3\" pid:2583 exit_status:1 exited_at:{seconds:1760334677 nanos:633804606}" Oct 13 05:51:17.649336 kubelet[2744]: E1013 05:51:17.649307 2744 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53120->10.0.0.2:2379: read: connection timed out" Oct 13 05:51:17.681430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3-rootfs.mount: Deactivated successfully. Oct 13 05:51:18.337117 kubelet[2744]: I1013 05:51:18.337060 2744 scope.go:117] "RemoveContainer" containerID="2bf6e730d70d695aab5dfa8114005c969151a0e842a55c78098ffe8de7e72ac3" Oct 13 05:51:18.340103 kubelet[2744]: I1013 05:51:18.339990 2744 scope.go:117] "RemoveContainer" containerID="284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b" Oct 13 05:51:18.340103 kubelet[2744]: I1013 05:51:18.340109 2744 scope.go:117] "RemoveContainer" containerID="41d16b1e5273477484fbea0be187769216e2755d083a9f81073934d6796aa727" Oct 13 05:51:18.362615 containerd[1605]: time="2025-10-13T05:51:18.362480150Z" level=info msg="CreateContainer within sandbox \"3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 13 05:51:18.362615 containerd[1605]: time="2025-10-13T05:51:18.362579879Z" level=info msg="CreateContainer within sandbox \"35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 13 05:51:18.365400 containerd[1605]: time="2025-10-13T05:51:18.362481040Z" level=info msg="CreateContainer within sandbox \"3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 13 05:51:18.440735 containerd[1605]: time="2025-10-13T05:51:18.440705286Z" level=info msg="Container 2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:51:18.442088 containerd[1605]: time="2025-10-13T05:51:18.441811102Z" level=info msg="Container a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:51:18.443926 containerd[1605]: time="2025-10-13T05:51:18.443913764Z" level=info msg="Container 0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:51:18.462227 containerd[1605]: time="2025-10-13T05:51:18.462184426Z" level=info msg="CreateContainer within sandbox \"35e4cf25520846650a360a90b861a3a92dbfd2deb8e7dd7e48aa2521373e9e11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b\"" Oct 13 05:51:18.467090 containerd[1605]: time="2025-10-13T05:51:18.467008777Z" level=info 
msg="CreateContainer within sandbox \"3d342203ca3d6aca55ee8506b67addc1ec5003406823a4d2ec3408467d69a3ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7\"" Oct 13 05:51:18.467170 containerd[1605]: time="2025-10-13T05:51:18.467117987Z" level=info msg="StartContainer for \"2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b\"" Oct 13 05:51:18.467540 containerd[1605]: time="2025-10-13T05:51:18.467496826Z" level=info msg="CreateContainer within sandbox \"3536050bbe294242b53e3953c6ecdce6f01b49130ba97db2929b8016cfa0b8b0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\"" Oct 13 05:51:18.468217 containerd[1605]: time="2025-10-13T05:51:18.467868965Z" level=info msg="connecting to shim 2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b" address="unix:///run/containerd/s/eee079ae0adfb975cd1419b6b9317db904ea26e10692549ad0dc28198b4b4de9" protocol=ttrpc version=3 Oct 13 05:51:18.468217 containerd[1605]: time="2025-10-13T05:51:18.467931894Z" level=info msg="StartContainer for \"a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7\"" Oct 13 05:51:18.469169 containerd[1605]: time="2025-10-13T05:51:18.469126350Z" level=info msg="connecting to shim a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7" address="unix:///run/containerd/s/50a224a316bd19690934ca8e2697fde4a5e359290a52b7120e8afa6a017c8332" protocol=ttrpc version=3 Oct 13 05:51:18.469309 containerd[1605]: time="2025-10-13T05:51:18.469230870Z" level=info msg="StartContainer for \"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\"" Oct 13 05:51:18.469866 containerd[1605]: time="2025-10-13T05:51:18.469836517Z" level=info msg="connecting to shim 0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523" address="unix:///run/containerd/s/c1f960cdfa871ee5ddfc6018ed166fd6c3379dbe44b4b37e1b1ab899619e54fa" protocol=ttrpc version=3 Oct 13 05:51:18.535431 systemd[1]: Started cri-containerd-0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523.scope - libcontainer container 0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523. Oct 13 05:51:18.545365 systemd[1]: Started cri-containerd-2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b.scope - libcontainer container 2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b. Oct 13 05:51:18.553391 systemd[1]: Started cri-containerd-a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7.scope - libcontainer container a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7. Oct 13 05:51:18.622357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631845834.mount: Deactivated successfully. Oct 13 05:51:18.622632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884134971.mount: Deactivated successfully. 
Oct 13 05:51:18.647528 containerd[1605]: time="2025-10-13T05:51:18.647147442Z" level=info msg="StartContainer for \"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\" returns successfully"
Oct 13 05:51:18.652907 containerd[1605]: time="2025-10-13T05:51:18.652761311Z" level=info msg="StartContainer for \"2222bd01f51ed238e9b644a6a6c109e4ccf7611cf16ce12c6e4f7c66f19b1c9b\" returns successfully"
Oct 13 05:51:18.661600 containerd[1605]: time="2025-10-13T05:51:18.661531478Z" level=info msg="StartContainer for \"a5a68de592ea8dc85a117984db42aa9191eddce245c3d82ecd73ca9d44bb3ca7\" returns successfully"
Oct 13 05:51:20.216249 kubelet[2744]: E1013 05:51:20.198371 2744 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-1-0-c-7af444862e.186df70e4b18a5fe kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-1-0-c-7af444862e,UID:f6678e7e2584f996563ab0ae51547246,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-c-7af444862e,},FirstTimestamp:2025-10-13 05:51:10.133241342 +0000 UTC m=+196.676145265,LastTimestamp:2025-10-13 05:51:10.133241342 +0000 UTC m=+196.676145265,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-c-7af444862e,}"
Oct 13 05:51:22.286305 kubelet[2744]: I1013 05:51:22.286252 2744 status_manager.go:890] "Failed to get status for pod" podUID="f6678e7e2584f996563ab0ae51547246" pod="kube-system/kube-apiserver-ci-4459-1-0-c-7af444862e" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:53066->10.0.0.2:2379: read: connection timed out"
Oct 13 05:51:27.659744 kubelet[2744]: E1013 05:51:27.659679 2744 controller.go:195] "Failed to update lease" err="Put \"https://65.108.221.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-7af444862e?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 05:51:27.793347 containerd[1605]: time="2025-10-13T05:51:27.793291793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b51500357d686159c7989a21aadb4ea158062f710731900b21755e70a74db08f\" id:\"accef27a5ac8371b30bb2f3d7ddd3102c555112a0f6338f7a8822cfb00ff2d95\" pid:6386 exited_at:{seconds:1760334687 nanos:792905815}"
Oct 13 05:51:30.131369 systemd[1]: cri-containerd-0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523.scope: Deactivated successfully.
Oct 13 05:51:30.132016 systemd[1]: cri-containerd-0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523.scope: Consumed 189ms CPU time, 73M memory peak, 37.9M read from disk.
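The replacement containers start, but the kubelet still cannot deliver its backlog: the Unhealthy event for kube-apiserver-ci-4459-1-0-c-7af444862e (readiness probe returned HTTP 500) is rejected, pod status reads and lease renewals keep timing out against etcd, and the new tigera-operator container is torn down again after roughly twelve seconds. One hedged way to see what the failing readiness probe sees is to query the apiserver health endpoints directly, as in the sketch below; it is illustrative only, the address is taken from the lease URL above, and TLS verification is skipped purely for brevity.

```go
// apicheck.go - an illustrative sketch only: query the kube-apiserver health
// endpoints behind the failing readiness probe reported in the event above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}

	// /readyz?verbose lists the individual checks; an etcd timeout typically
	// shows up there as a failing etcd check, consistent with statuscode 500.
	for _, path := range []string{"/livez", "/readyz?verbose"} {
		resp, err := client.Get("https://65.108.221.100:6443" + path)
		if err != nil {
			log.Printf("%s: %v", path, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d\n%s\n", path, resp.StatusCode, body)
	}
}
```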
Oct 13 05:51:30.132621 containerd[1605]: time="2025-10-13T05:51:30.132467885Z" level=info msg="received exit event container_id:\"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\" id:\"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\" pid:6309 exit_status:1 exited_at:{seconds:1760334690 nanos:131894856}"
Oct 13 05:51:30.132891 containerd[1605]: time="2025-10-13T05:51:30.132726745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\" id:\"0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523\" pid:6309 exit_status:1 exited_at:{seconds:1760334690 nanos:131894856}"
Oct 13 05:51:30.168291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523-rootfs.mount: Deactivated successfully.
Oct 13 05:51:30.380207 kubelet[2744]: I1013 05:51:30.380120 2744 scope.go:117] "RemoveContainer" containerID="284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b"
Oct 13 05:51:30.380961 kubelet[2744]: I1013 05:51:30.380932 2744 scope.go:117] "RemoveContainer" containerID="0685b383590663fc8af1eeba5aea3aa9c56a6534ee9f5c65e5497e85ecc6a523"
Oct 13 05:51:30.396970 kubelet[2744]: E1013 05:51:30.396646 2744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-jp2rf_tigera-operator(bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6)\"" pod="tigera-operator/tigera-operator-755d956888-jp2rf" podUID="bda66ed4-dddd-4f76-bc7d-65e7b18a5dd6"
Oct 13 05:51:30.454131 containerd[1605]: time="2025-10-13T05:51:30.454083718Z" level=info msg="RemoveContainer for \"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\""
Oct 13 05:51:30.463465 containerd[1605]: time="2025-10-13T05:51:30.463418714Z" level=info msg="RemoveContainer for \"284bc9a42d67090ad2f9bcc46d3ab09c9bef4e75c252f11ed362501519a6560b\" returns successfully"
Oct 13 05:51:37.663537 kubelet[2744]: E1013 05:51:37.663349 2744 controller.go:195] "Failed to update lease" err="Put \"https://65.108.221.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-c-7af444862e?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
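Because the recreated tigera-operator container exited again almost immediately, the kubelet applies CrashLoopBackOff, starting with the 10-second delay reported above. Assuming the stock kubelet backoff behavior (10s base delay, doubling per failed restart, capped at 5 minutes, and reset after a sufficiently long successful run), the expected delay ladder can be sketched as follows; the values are the long-standing kubelet defaults, not something read from this node's configuration.

```go
// backoff.go - a sketch of the default CrashLoopBackOff delay ladder that
// explains the "back-off 10s" message above for the first failed restart.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // initial back-off after the first crash
		maxDelay = 5 * time.Minute  // ceiling applied by the kubelet
	)
	delay := base
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: wait %v before the next attempt\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// The kubelet resets this back-off once the container has run successfully
	// for long enough (10 minutes by default).
}
```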