Dec 12 18:37:28.867700 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:37:28.867722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:37:28.867734 kernel: BIOS-provided physical RAM map:
Dec 12 18:37:28.867740 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 12 18:37:28.867747 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 12 18:37:28.867753 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:37:28.867761 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 12 18:37:28.867767 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 12 18:37:28.867774 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:37:28.867780 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:37:28.867787 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:37:28.867796 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:37:28.867802 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:37:28.867809 kernel: NX (Execute Disable) protection: active
Dec 12 18:37:28.867817 kernel: APIC: Static calls initialized
Dec 12 18:37:28.867824 kernel: SMBIOS 2.8 present.
Dec 12 18:37:28.867833 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 12 18:37:28.867840 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:37:28.867847 kernel: Hypervisor detected: KVM
Dec 12 18:37:28.867854 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 12 18:37:28.867861 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:37:28.867868 kernel: kvm-clock: using sched offset of 3958305275 cycles
Dec 12 18:37:28.867876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:37:28.867883 kernel: tsc: Detected 2794.748 MHz processor
Dec 12 18:37:28.867890 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:37:28.867898 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:37:28.867907 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 12 18:37:28.867914 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:37:28.867922 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:37:28.867929 kernel: Using GB pages for direct mapping
Dec 12 18:37:28.867936 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:37:28.867943 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 12 18:37:28.867951 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.867958 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.867965 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.867974 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 12 18:37:28.867981 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.867989 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.867996 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.868003 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:37:28.868013 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Dec 12 18:37:28.868021 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Dec 12 18:37:28.868030 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 12 18:37:28.868038 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Dec 12 18:37:28.868045 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Dec 12 18:37:28.868053 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Dec 12 18:37:28.868060 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Dec 12 18:37:28.868067 kernel: No NUMA configuration found
Dec 12 18:37:28.868075 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 12 18:37:28.868084 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Dec 12 18:37:28.868092 kernel: Zone ranges:
Dec 12 18:37:28.868099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:37:28.868106 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 12 18:37:28.868114 kernel: Normal empty
Dec 12 18:37:28.868121 kernel: Device empty
Dec 12 18:37:28.868128 kernel: Movable zone start for each node
Dec 12 18:37:28.868136 kernel: Early memory node ranges
Dec 12 18:37:28.868143 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:37:28.868152 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 12 18:37:28.868179 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 12 18:37:28.868186 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:37:28.868194 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:37:28.868201 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 12 18:37:28.868208 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:37:28.868216 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:37:28.868223 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:37:28.868231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:37:28.868238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:37:28.868248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:37:28.868256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:37:28.868263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:37:28.868270 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:37:28.868278 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:37:28.868285 kernel: TSC deadline timer available
Dec 12 18:37:28.868293 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:37:28.868300 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:37:28.868307 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:37:28.868317 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:37:28.868324 kernel: CPU topo: Num. cores per package: 4
Dec 12 18:37:28.868331 kernel: CPU topo: Num. threads per package: 4
Dec 12 18:37:28.868339 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 12 18:37:28.868346 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:37:28.868353 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:37:28.868361 kernel: kvm-guest: setup PV sched yield
Dec 12 18:37:28.868369 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:37:28.868376 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:37:28.868384 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:37:28.868393 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 12 18:37:28.868401 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 12 18:37:28.868408 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 12 18:37:28.868416 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 12 18:37:28.868423 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:37:28.868430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:37:28.868439 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:37:28.868447 kernel: random: crng init done
Dec 12 18:37:28.868456 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:37:28.868463 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:37:28.868471 kernel: Fallback order for Node 0: 0
Dec 12 18:37:28.868478 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Dec 12 18:37:28.868486 kernel: Policy zone: DMA32
Dec 12 18:37:28.868493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:37:28.868500 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 18:37:28.868508 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:37:28.868515 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:37:28.868525 kernel: Dynamic Preempt: voluntary
Dec 12 18:37:28.868532 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:37:28.868550 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:37:28.868561 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 18:37:28.868571 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:37:28.868580 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:37:28.868598 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:37:28.868606 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:37:28.868613 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 18:37:28.868621 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:37:28.868631 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:37:28.868638 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:37:28.868646 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 12 18:37:28.868654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:37:28.868668 kernel: Console: colour VGA+ 80x25
Dec 12 18:37:28.868677 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:37:28.868685 kernel: ACPI: Core revision 20240827
Dec 12 18:37:28.868693 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:37:28.868700 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:37:28.868708 kernel: x2apic enabled
Dec 12 18:37:28.868716 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:37:28.868726 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:37:28.868734 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:37:28.868741 kernel: kvm-guest: setup PV IPIs
Dec 12 18:37:28.868749 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:37:28.868757 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 12 18:37:28.868767 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 12 18:37:28.868775 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:37:28.868782 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:37:28.868790 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:37:28.868798 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:37:28.868805 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:37:28.868813 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:37:28.868821 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 12 18:37:28.868828 kernel: active return thunk: retbleed_return_thunk
Dec 12 18:37:28.868838 kernel: RETBleed: Mitigation: untrained return thunk
Dec 12 18:37:28.868846 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:37:28.868853 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:37:28.868861 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:37:28.868869 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:37:28.868877 kernel: active return thunk: srso_return_thunk
Dec 12 18:37:28.868885 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:37:28.868893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:37:28.868902 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:37:28.868910 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:37:28.868917 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:37:28.868925 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 12 18:37:28.868933 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:37:28.868940 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:37:28.868948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:37:28.868956 kernel: landlock: Up and running.
Dec 12 18:37:28.868964 kernel: SELinux: Initializing.
Dec 12 18:37:28.868973 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:37:28.868981 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:37:28.868989 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 12 18:37:28.868997 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:37:28.869005 kernel: ... version: 0
Dec 12 18:37:28.869012 kernel: ... bit width: 48
Dec 12 18:37:28.869020 kernel: ... generic registers: 6
Dec 12 18:37:28.869027 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:37:28.869035 kernel: ... max period: 00007fffffffffff
Dec 12 18:37:28.869045 kernel: ... fixed-purpose events: 0
Dec 12 18:37:28.869052 kernel: ... event mask: 000000000000003f
Dec 12 18:37:28.869060 kernel: signal: max sigframe size: 1776
Dec 12 18:37:28.869067 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:37:28.869075 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:37:28.869083 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:37:28.869091 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:37:28.869098 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:37:28.869106 kernel: .... node #0, CPUs: #1 #2 #3
Dec 12 18:37:28.869115 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 18:37:28.869123 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 12 18:37:28.869131 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145096K reserved, 0K cma-reserved)
Dec 12 18:37:28.869139 kernel: devtmpfs: initialized
Dec 12 18:37:28.869146 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:37:28.869154 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:37:28.869172 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 18:37:28.869180 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:37:28.869188 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:37:28.869198 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:37:28.869205 kernel: audit: type=2000 audit(1765564645.088:1): state=initialized audit_enabled=0 res=1
Dec 12 18:37:28.869213 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:37:28.869221 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:37:28.869228 kernel: cpuidle: using governor menu
Dec 12 18:37:28.869236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:37:28.869244 kernel: dca service started, version 1.12.1
Dec 12 18:37:28.869251 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:37:28.869259 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:37:28.869269 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:37:28.869276 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:37:28.869284 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:37:28.869292 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:37:28.869299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:37:28.869307 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:37:28.869314 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:37:28.869322 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:37:28.869330 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:37:28.869340 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:37:28.869347 kernel: ACPI: Interpreter enabled
Dec 12 18:37:28.869355 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:37:28.869362 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:37:28.869370 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:37:28.869378 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:37:28.869386 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:37:28.869393 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:37:28.869575 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:37:28.869719 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:37:28.869837 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:37:28.869847 kernel: PCI host bridge to bus 0000:00
Dec 12 18:37:28.869966 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:37:28.870074 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:37:28.870197 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:37:28.870312 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 12 18:37:28.870418 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:37:28.870523 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 12 18:37:28.870648 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:37:28.870786 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:37:28.870914 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:37:28.871036 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:37:28.871152 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:37:28.871293 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:37:28.871409 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:37:28.871536 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 18:37:28.871678 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 12 18:37:28.871798 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:37:28.871920 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:37:28.872053 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:37:28.872187 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Dec 12 18:37:28.872309 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:37:28.872425 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:37:28.872551 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:37:28.872689 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Dec 12 18:37:28.872813 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:37:28.872930 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 12 18:37:28.873046 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:37:28.873217 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:37:28.873340 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:37:28.873463 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:37:28.873578 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Dec 12 18:37:28.873721 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Dec 12 18:37:28.873844 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:37:28.873960 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:37:28.873971 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:37:28.873979 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:37:28.873987 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:37:28.873995 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:37:28.874006 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:37:28.874014 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:37:28.874022 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:37:28.874030 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:37:28.874038 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:37:28.874046 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:37:28.874054 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:37:28.874062 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:37:28.874070 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:37:28.874080 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:37:28.874088 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:37:28.874096 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:37:28.874104 kernel: iommu: Default domain type: Translated
Dec 12 18:37:28.874112 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:37:28.874119 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:37:28.874127 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:37:28.874135 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 12 18:37:28.874143 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 12 18:37:28.874285 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:37:28.874402 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:37:28.874518 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:37:28.874528 kernel: vgaarb: loaded
Dec 12 18:37:28.874537 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:37:28.874545 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:37:28.874553 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:37:28.874561 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:37:28.874569 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:37:28.874580 kernel: pnp: PnP ACPI init
Dec 12 18:37:28.874743 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:37:28.874759 kernel: pnp: PnP ACPI: found 6 devices
Dec 12 18:37:28.874769 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:37:28.874779 kernel: NET: Registered PF_INET protocol family
Dec 12 18:37:28.874790 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:37:28.874800 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:37:28.874810 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:37:28.874825 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:37:28.874835 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:37:28.874846 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:37:28.874856 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:37:28.874867 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:37:28.874878 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:37:28.874888 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:37:28.875027 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:37:28.875186 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:37:28.875330 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:37:28.875459 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 12 18:37:28.875592 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:37:28.875736 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 12 18:37:28.875753 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:37:28.875764 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 12 18:37:28.875775 kernel: Initialise system trusted keyrings
Dec 12 18:37:28.875786 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:37:28.875802 kernel: Key type asymmetric registered
Dec 12 18:37:28.875812 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:37:28.875822 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:37:28.875833 kernel: io scheduler mq-deadline registered
Dec 12 18:37:28.875844 kernel: io scheduler kyber registered
Dec 12 18:37:28.875855 kernel: io scheduler bfq registered
Dec 12 18:37:28.875866 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:37:28.875877 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:37:28.875887 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:37:28.875901 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 12 18:37:28.875911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:37:28.875921 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:37:28.875931 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:37:28.875941 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:37:28.875951 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:37:28.875961 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:37:28.876112 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 12 18:37:28.876277 kernel: rtc_cmos 00:04: registered as rtc0
Dec 12 18:37:28.876422 kernel: rtc_cmos 00:04: setting system clock to 2025-12-12T18:37:28 UTC (1765564648)
Dec 12 18:37:28.876562 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:37:28.876578 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:37:28.876600 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:37:28.876611 kernel: Segment Routing with IPv6
Dec 12 18:37:28.876622 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:37:28.876632 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:37:28.876643 kernel: Key type dns_resolver registered
Dec 12 18:37:28.876657 kernel: IPI shorthand broadcast: enabled
Dec 12 18:37:28.876668 kernel: sched_clock: Marking stable (2941002011, 210253017)->(3201486870, -50231842)
Dec 12 18:37:28.876679 kernel: registered taskstats version 1
Dec 12 18:37:28.876690 kernel: Loading compiled-in X.509 certificates
Dec 12 18:37:28.876701 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:37:28.876712 kernel: Demotion targets for Node 0: null
Dec 12 18:37:28.876722 kernel: Key type .fscrypt registered
Dec 12 18:37:28.876732 kernel: Key type fscrypt-provisioning registered
Dec 12 18:37:28.876743 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:37:28.876757 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:37:28.876767 kernel: ima: No architecture policies found
Dec 12 18:37:28.876778 kernel: clk: Disabling unused clocks
Dec 12 18:37:28.876788 kernel: Warning: unable to open an initial console.
Dec 12 18:37:28.876799 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:37:28.876809 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:37:28.876820 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:37:28.876830 kernel: Run /init as init process
Dec 12 18:37:28.876841 kernel: with arguments:
Dec 12 18:37:28.876855 kernel: /init
Dec 12 18:37:28.876872 kernel: with environment:
Dec 12 18:37:28.876882 kernel: HOME=/
Dec 12 18:37:28.876892 kernel: TERM=linux
Dec 12 18:37:28.876904 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:37:28.876920 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:37:28.876949 systemd[1]: Detected virtualization kvm.
Dec 12 18:37:28.876961 systemd[1]: Detected architecture x86-64.
Dec 12 18:37:28.876972 systemd[1]: Running in initrd.
Dec 12 18:37:28.876983 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:37:28.876995 systemd[1]: Hostname set to .
Dec 12 18:37:28.877020 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:37:28.877032 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:37:28.877043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:37:28.877058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:37:28.877071 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:37:28.877082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:37:28.877094 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:37:28.877106 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:37:28.877119 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:37:28.877134 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:37:28.877145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:37:28.877175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:37:28.877189 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:37:28.877201 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:37:28.877212 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:37:28.877224 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:37:28.877240 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:37:28.877252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:37:28.877266 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:37:28.877278 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:37:28.877290 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:37:28.877302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:37:28.877313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:37:28.877325 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:37:28.877336 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:37:28.877351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:37:28.877363 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:37:28.877375 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:37:28.877392 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:37:28.877404 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:37:28.877415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:37:28.877427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:37:28.877442 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:37:28.877454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:37:28.877465 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:37:28.877477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:37:28.877520 systemd-journald[201]: Collecting audit messages is disabled.
Dec 12 18:37:28.877548 systemd-journald[201]: Journal started
Dec 12 18:37:28.877577 systemd-journald[201]: Runtime Journal (/run/log/journal/f3f7a35b2afd45a09bf871c5d8114fe3) is 6M, max 48.3M, 42.2M free.
Dec 12 18:37:28.863498 systemd-modules-load[202]: Inserted module 'overlay'
Dec 12 18:37:28.879885 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:37:28.884780 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:37:28.896201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:37:28.898762 systemd-modules-load[202]: Inserted module 'br_netfilter'
Dec 12 18:37:28.966323 kernel: Bridge firewalling registered
Dec 12 18:37:28.902354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:37:28.908741 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:37:28.967683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:37:28.971756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:37:28.976136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:37:28.982536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:37:28.984232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:37:28.998949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:37:29.011018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:37:29.011982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:37:29.014682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:37:29.029399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:37:29.031012 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:37:29.057612 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:37:29.072279 systemd-resolved[239]: Positive Trust Anchors:
Dec 12 18:37:29.072292 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:37:29.072328 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:37:29.075266 systemd-resolved[239]: Defaulting to hostname 'linux'.
Dec 12 18:37:29.076462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:37:29.077121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:37:29.166197 kernel: SCSI subsystem initialized
Dec 12 18:37:29.175182 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:37:29.186183 kernel: iscsi: registered transport (tcp)
Dec 12 18:37:29.207684 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:37:29.207737 kernel: QLogic iSCSI HBA Driver
Dec 12 18:37:29.226248 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:37:29.252410 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:37:29.254098 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:37:29.306342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:37:29.310997 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:37:29.365184 kernel: raid6: avx2x4 gen() 30300 MB/s
Dec 12 18:37:29.382182 kernel: raid6: avx2x2 gen() 31145 MB/s
Dec 12 18:37:29.399929 kernel: raid6: avx2x1 gen() 25543 MB/s
Dec 12 18:37:29.399957 kernel: raid6: using algorithm avx2x2 gen() 31145 MB/s
Dec 12 18:37:29.417945 kernel: raid6: .... xor() 19830 MB/s, rmw enabled
Dec 12 18:37:29.417988 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:37:29.439183 kernel: xor: automatically using best checksumming function avx
Dec 12 18:37:29.598193 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:37:29.606762 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:37:29.609216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:37:29.646365 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Dec 12 18:37:29.652100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:37:29.653441 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:37:29.675405 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Dec 12 18:37:29.703697 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:37:29.708782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:37:29.781095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:37:29.787945 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:37:29.818213 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 12 18:37:29.825197 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 18:37:29.833039 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:37:29.833062 kernel: GPT:9289727 != 19775487
Dec 12 18:37:29.833073 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:37:29.833084 kernel: GPT:9289727 != 19775487
Dec 12 18:37:29.833093 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:37:29.833109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:37:29.841712 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:37:29.846174 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:37:29.867189 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:37:29.867230 kernel: libata version 3.00 loaded.
Dec 12 18:37:29.870388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:37:29.870508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:37:29.881713 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:37:29.890721 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:37:29.892834 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:37:29.888651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:37:29.899644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:37:29.909636 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:37:29.913001 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:37:29.913215 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:37:29.924046 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 18:37:29.932475 kernel: scsi host0: ahci
Dec 12 18:37:29.932665 kernel: scsi host1: ahci
Dec 12 18:37:29.932806 kernel: scsi host2: ahci
Dec 12 18:37:29.932942 kernel: scsi host3: ahci
Dec 12 18:37:29.933090 kernel: scsi host4: ahci
Dec 12 18:37:29.933260 kernel: scsi host5: ahci
Dec 12 18:37:29.933426 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Dec 12 18:37:29.933438 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Dec 12 18:37:29.933448 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Dec 12 18:37:29.935458 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Dec 12 18:37:29.935514 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Dec 12 18:37:29.940225 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Dec 12 18:37:29.961920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 18:37:30.024912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:37:30.034076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 18:37:30.036087 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 18:37:30.049556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:37:30.053473 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:37:30.074561 disk-uuid[616]: Primary Header is updated.
Dec 12 18:37:30.074561 disk-uuid[616]: Secondary Entries is updated.
Dec 12 18:37:30.074561 disk-uuid[616]: Secondary Header is updated.
Dec 12 18:37:30.081190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:37:30.085186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:37:30.249915 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:37:30.250015 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:37:30.251202 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:37:30.253198 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:37:30.253212 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:37:30.256580 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 12 18:37:30.256604 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:37:30.256615 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 12 18:37:30.257646 kernel: ata3.00: applying bridge limits
Dec 12 18:37:30.259692 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:37:30.259723 kernel: ata3.00: configured for UDMA/100
Dec 12 18:37:30.261185 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 12 18:37:30.333702 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 12 18:37:30.334013 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 12 18:37:30.354248 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 12 18:37:30.694313 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:37:30.697124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:37:30.711654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:37:30.720262 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:37:30.731089 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:37:30.781701 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:37:31.099800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:37:31.104129 disk-uuid[617]: The operation has completed successfully.
Dec 12 18:37:31.174739 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:37:31.174909 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:37:31.261327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:37:31.284143 sh[646]: Success
Dec 12 18:37:31.329267 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:37:31.329365 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:37:31.336298 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:37:31.380621 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:37:31.445665 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:37:31.468606 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:37:31.526711 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:37:31.559096 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (658)
Dec 12 18:37:31.559190 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:37:31.559209 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:37:31.585124 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:37:31.585236 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:37:31.592617 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:37:31.595911 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:37:31.606874 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:37:31.610395 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:37:31.625884 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:37:31.746355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (699)
Dec 12 18:37:31.752703 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:37:31.752766 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:37:31.772719 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:37:31.772798 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:37:31.796985 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:37:31.819043 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:37:31.825204 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:37:31.992675 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:37:32.003358 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:37:32.108977 ignition[764]: Ignition 2.22.0
Dec 12 18:37:32.108994 ignition[764]: Stage: fetch-offline
Dec 12 18:37:32.109048 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:37:32.109061 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:37:32.114352 systemd-networkd[827]: lo: Link UP
Dec 12 18:37:32.109205 ignition[764]: parsed url from cmdline: ""
Dec 12 18:37:32.114359 systemd-networkd[827]: lo: Gained carrier
Dec 12 18:37:32.109211 ignition[764]: no config URL provided
Dec 12 18:37:32.116398 systemd-networkd[827]: Enumeration completed
Dec 12 18:37:32.109218 ignition[764]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:37:32.116551 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:37:32.109230 ignition[764]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:37:32.117827 systemd[1]: Reached target network.target - Network.
Dec 12 18:37:32.109262 ignition[764]: op(1): [started] loading QEMU firmware config module
Dec 12 18:37:32.119647 systemd-networkd[827]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:37:32.109269 ignition[764]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 18:37:32.119655 systemd-networkd[827]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:37:32.120810 systemd-networkd[827]: eth0: Link UP
Dec 12 18:37:32.121827 systemd-networkd[827]: eth0: Gained carrier
Dec 12 18:37:32.121840 systemd-networkd[827]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:37:32.161844 systemd-networkd[827]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 18:37:32.183716 ignition[764]: op(1): [finished] loading QEMU firmware config module
Dec 12 18:37:32.186707 ignition[764]: parsing config with SHA512: 4a5eb520d92af41b037cd15bebcea32b1cf0d4ef0bae71e1773c7d0a6b2c645e10225721288cd1a8d2b9f39bb87873402316fb5c79abd624b7d72397b61a88e4
Dec 12 18:37:32.196304 unknown[764]: fetched base config from "system"
Dec 12 18:37:32.196325 unknown[764]: fetched user config from "qemu"
Dec 12 18:37:32.197946 ignition[764]: fetch-offline: fetch-offline passed
Dec 12 18:37:32.198051 ignition[764]: Ignition finished successfully
Dec 12 18:37:32.212587 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:37:32.217713 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 18:37:32.239482 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:37:32.337516 ignition[844]: Ignition 2.22.0
Dec 12 18:37:32.337534 ignition[844]: Stage: kargs
Dec 12 18:37:32.337731 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:37:32.337746 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:37:32.338551 ignition[844]: kargs: kargs passed
Dec 12 18:37:32.338607 ignition[844]: Ignition finished successfully
Dec 12 18:37:32.357626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:37:32.364690 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:37:32.426796 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.71
Dec 12 18:37:32.426817 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Dec 12 18:37:32.442649 ignition[851]: Ignition 2.22.0
Dec 12 18:37:32.442670 ignition[851]: Stage: disks
Dec 12 18:37:32.442842 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:37:32.442854 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:37:32.446712 ignition[851]: disks: disks passed
Dec 12 18:37:32.446797 ignition[851]: Ignition finished successfully
Dec 12 18:37:32.469401 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:37:32.474409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:37:32.478031 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:37:32.486229 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:37:32.486782 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:37:32.487418 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:37:32.496071 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:37:32.552265 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:37:32.569512 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:37:32.584785 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:37:32.910539 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:37:32.912641 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:37:32.924940 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:37:32.946633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:37:32.949387 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:37:32.961363 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:37:32.961520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:37:32.961566 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:37:32.983137 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:37:32.987141 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:37:33.013225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869)
Dec 12 18:37:33.021537 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:37:33.021618 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:37:33.041189 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:37:33.041316 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:37:33.048556 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:37:33.118211 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:37:33.140700 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:37:33.150674 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:37:33.163388 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:37:33.428538 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:37:33.432277 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:37:33.460964 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:37:33.472783 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:37:33.479538 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:37:33.527972 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:37:33.558290 ignition[983]: INFO : Ignition 2.22.0
Dec 12 18:37:33.561633 ignition[983]: INFO : Stage: mount
Dec 12 18:37:33.561633 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:37:33.561633 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:37:33.561633 ignition[983]: INFO : mount: mount passed
Dec 12 18:37:33.561633 ignition[983]: INFO : Ignition finished successfully
Dec 12 18:37:33.567092 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:37:33.577344 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:37:33.697542 systemd-networkd[827]: eth0: Gained IPv6LL
Dec 12 18:37:33.924657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:37:33.965513 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Dec 12 18:37:33.971559 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:37:33.971621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:37:34.000347 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:37:34.000433 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:37:34.011484 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:37:34.145275 ignition[1012]: INFO : Ignition 2.22.0 Dec 12 18:37:34.145275 ignition[1012]: INFO : Stage: files Dec 12 18:37:34.145275 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:37:34.145275 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:37:34.159258 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:37:34.164708 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:37:34.164708 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:37:34.181719 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:37:34.181719 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:37:34.192092 unknown[1012]: wrote ssh authorized keys file for user: core Dec 12 18:37:34.195859 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:37:34.200622 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:37:34.208674 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:37:34.221768 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:37:34.221768 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:37:34.221768 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:37:34.221768 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:37:34.221768 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:37:34.221768 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 18:37:34.560379 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 12 18:37:35.219945 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:37:35.219945 ignition[1012]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 12 18:37:35.230890 ignition[1012]: INFO : 
files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 18:37:35.249134 ignition[1012]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 18:37:35.249134 ignition[1012]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 12 18:37:35.249134 ignition[1012]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 18:37:35.368084 ignition[1012]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 18:37:35.382129 ignition[1012]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 18:37:35.382129 ignition[1012]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 18:37:35.398792 ignition[1012]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:37:35.398792 ignition[1012]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:37:35.398792 ignition[1012]: INFO : files: files passed Dec 12 18:37:35.398792 ignition[1012]: INFO : Ignition finished successfully Dec 12 18:37:35.395833 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:37:35.402780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:37:35.413612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:37:35.442319 initrd-setup-root-after-ignition[1039]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 18:37:35.456473 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:37:35.463758 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:37:35.467176 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:37:35.475032 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:37:35.479547 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:37:35.483909 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:37:35.533836 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:37:35.534061 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:37:35.602309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:37:35.602491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:37:35.611991 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:37:35.623973 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:37:35.645775 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:37:35.649584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:37:35.756584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
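The files stage above creates the "core" user, installs its SSH keys, writes /home/core/install.sh and /etc/flatcar/update.conf, links the Kubernetes sysext into /etc/extensions, downloads the extension image from extensions.flatcar.org, and sets coreos-metadata.service to disabled. The actual config is not part of this log; the Python sketch below only illustrates the general shape of an Ignition v3 config that would drive operations like these, with placeholder key material and file contents:

    import json

    # Illustrative only: values are placeholders, not the config applied above.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [{"name": "core",
                       "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]
        },
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 0o755,
                 "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},
                {"path": "/etc/flatcar/update.conf", "mode": 0o644,
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"}
            ],
        },
        "systemd": {
            "units": [{"name": "coreos-metadata.service", "enabled": False}]
        },
    }

    print(json.dumps(config, indent=2))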
Dec 12 18:37:35.781701 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:37:35.888807 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:37:35.891597 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:37:35.894221 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:37:35.899929 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:37:35.900106 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:37:35.909241 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:37:35.914732 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:37:35.921875 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:37:35.935742 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:37:35.941689 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:37:35.945255 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:37:35.953140 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:37:35.978261 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:37:36.004567 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:37:36.008462 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:37:36.013528 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:37:36.016856 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:37:36.017092 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:37:36.023315 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:37:36.027018 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:37:36.039416 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:37:36.040310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:37:36.077214 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:37:36.077412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:37:36.077689 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:37:36.077828 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:37:36.078023 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:37:36.078107 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:37:36.085682 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:37:36.113209 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:37:36.118047 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:37:36.173345 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:37:36.176995 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:37:36.187184 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:37:36.187336 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:37:36.189870 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 12 18:37:36.190039 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:37:36.208913 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:37:36.209093 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:37:36.216289 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:37:36.219363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:37:36.219586 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:37:36.281692 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:37:36.285748 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:37:36.285938 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:37:36.305818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:37:36.306011 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:37:36.317149 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:37:36.317341 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:37:36.359212 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:37:36.393439 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:37:36.393615 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:37:36.403046 ignition[1067]: INFO : Ignition 2.22.0 Dec 12 18:37:36.403046 ignition[1067]: INFO : Stage: umount Dec 12 18:37:36.403046 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:37:36.403046 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 18:37:36.403046 ignition[1067]: INFO : umount: umount passed Dec 12 18:37:36.403046 ignition[1067]: INFO : Ignition finished successfully Dec 12 18:37:36.414415 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:37:36.414642 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:37:36.420363 systemd[1]: Stopped target network.target - Network. Dec 12 18:37:36.427543 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:37:36.427692 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:37:36.433569 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:37:36.433688 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:37:36.442252 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:37:36.442388 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:37:36.456436 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:37:36.456530 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:37:36.460565 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:37:36.460652 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:37:36.473703 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:37:36.481561 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:37:36.503622 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:37:36.503785 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:37:36.533411 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
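Several of the mount units above carry names like run-credentials-systemd\x2dresolved.service.mount: systemd derives mount unit names from paths and escapes characters such as the dash in "systemd-resolved" as \xNN so the original path stays recoverable. A rough Python approximation of that escaping (simplified; the full rules, including the '/' to '-' mapping, are described in systemd-escape(1)):

    def escape_component(s: str) -> str:
        # Keep letters, digits, ':', '_' and '.'; escape everything else,
        # including '-', as \xNN. Simplified sketch, not the exact algorithm.
        out = []
        for ch in s:
            if ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_component("systemd-resolved.service"))
    # -> systemd\x2dresolved.service, as seen in run-credentials-systemd\x2dresolved.service.mount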
Dec 12 18:37:36.533758 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:37:36.533961 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:37:36.541059 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:37:36.542454 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:37:36.545266 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:37:36.545334 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:37:36.553616 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:37:36.561799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:37:36.561912 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:37:36.566007 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:37:36.566093 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:37:36.586458 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:37:36.586543 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:37:36.589709 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:37:36.589775 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:37:36.602414 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:37:36.610047 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:37:36.610198 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:37:36.632283 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:37:36.632573 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:37:36.653770 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:37:36.654038 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:37:36.663322 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:37:36.663433 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:37:36.681528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:37:36.683096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:37:36.684938 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:37:36.685025 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:37:36.689716 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:37:36.689797 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:37:36.697135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:37:36.697244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:37:36.721499 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:37:36.734236 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:37:36.734361 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:37:36.744725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 12 18:37:36.744809 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:37:36.756579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:37:36.756690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:37:36.773309 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 18:37:36.776831 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 18:37:36.776935 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:37:36.799840 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:37:36.801539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:37:36.803945 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:37:36.811967 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:37:36.874009 systemd[1]: Switching root. Dec 12 18:37:36.941568 systemd-journald[201]: Journal stopped Dec 12 18:37:39.094728 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Dec 12 18:37:39.094791 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:37:39.094813 kernel: SELinux: policy capability open_perms=1 Dec 12 18:37:39.094828 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:37:39.094842 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:37:39.094854 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:37:39.094865 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:37:39.094876 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:37:39.094889 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:37:39.094900 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:37:39.094911 kernel: audit: type=1403 audit(1765564657.619:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:37:39.094923 systemd[1]: Successfully loaded SELinux policy in 147.884ms. Dec 12 18:37:39.094950 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.605ms. Dec 12 18:37:39.094964 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:37:39.094976 systemd[1]: Detected virtualization kvm. Dec 12 18:37:39.094988 systemd[1]: Detected architecture x86-64. Dec 12 18:37:39.095000 systemd[1]: Detected first boot. Dec 12 18:37:39.095014 systemd[1]: Initializing machine ID from VM UUID. Dec 12 18:37:39.095026 zram_generator::config[1112]: No configuration found. Dec 12 18:37:39.095039 kernel: Guest personality initialized and is inactive Dec 12 18:37:39.095050 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:37:39.095061 kernel: Initialized host personality Dec 12 18:37:39.095074 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:37:39.095085 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:37:39.095102 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
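Every entry in this journal carries a microsecond timestamp, so the hand-off from the initramfs to the real root can be measured directly, for example from "Switching root." to the first messages collected after the new journald comes up. A small sketch, assuming lines keep this exact "Mon DD HH:MM:SS.ffffff" prefix:

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"

    def ts(prefix: str) -> datetime:
        # Parse the leading "Dec 12 18:37:36.874009" portion of a journal line.
        # The year is absent, so datetime defaults it to 1900; that cancels out
        # when subtracting two timestamps from the same boot.
        return datetime.strptime(prefix, FMT)

    switch_root = ts("Dec 12 18:37:36.874009")  # "Switching root."
    sigterm     = ts("Dec 12 18:37:39.094728")  # "Received SIGTERM from PID 1 (systemd)."
    print((sigterm - switch_root).total_seconds(), "s between the two entries")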
Dec 12 18:37:39.095116 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:37:39.095127 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:37:39.095139 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:37:39.095154 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:37:39.095201 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:37:39.095215 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:37:39.095226 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:37:39.095238 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:37:39.095250 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:37:39.095266 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:37:39.095277 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:37:39.095289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:37:39.095301 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:37:39.095323 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:37:39.095336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:37:39.095348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:37:39.095362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:37:39.095375 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:37:39.095387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:37:39.095399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:37:39.095410 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:37:39.095422 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:37:39.095434 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:37:39.095446 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:37:39.095458 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:37:39.095476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:37:39.095488 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:37:39.095500 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:37:39.095513 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:37:39.095525 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:37:39.095536 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:37:39.095548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:37:39.095560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:37:39.095572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 12 18:37:39.095583 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:37:39.095597 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:37:39.095608 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:37:39.095620 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:37:39.095632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:39.095643 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:37:39.095655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:37:39.095667 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:37:39.095679 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:37:39.095693 systemd[1]: Reached target machines.target - Containers. Dec 12 18:37:39.095705 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:37:39.095716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:37:39.095728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:37:39.095740 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:37:39.095752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:37:39.095765 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:37:39.095777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:37:39.095791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:37:39.095803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:37:39.095815 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:37:39.095827 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:37:39.095839 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:37:39.095851 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:37:39.095862 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:37:39.095875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:37:39.095889 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:37:39.095900 kernel: fuse: init (API version 7.41) Dec 12 18:37:39.095911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:37:39.095922 kernel: loop: module loaded Dec 12 18:37:39.095934 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:37:39.095947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:37:39.095959 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 12 18:37:39.095970 kernel: ACPI: bus type drm_connector registered Dec 12 18:37:39.095982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:37:39.095996 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:37:39.096008 systemd[1]: Stopped verity-setup.service. Dec 12 18:37:39.096022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:39.096034 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:37:39.096046 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:37:39.096081 systemd-journald[1190]: Collecting audit messages is disabled. Dec 12 18:37:39.096106 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:37:39.096118 systemd-journald[1190]: Journal started Dec 12 18:37:39.096142 systemd-journald[1190]: Runtime Journal (/run/log/journal/f3f7a35b2afd45a09bf871c5d8114fe3) is 6M, max 48.3M, 42.2M free. Dec 12 18:37:38.711587 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:37:38.738708 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 18:37:38.739371 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:37:39.100284 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:37:39.102536 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:37:39.104659 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:37:39.106526 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:37:39.108369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:37:39.110572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:37:39.112844 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:37:39.113071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:37:39.115235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:37:39.115495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:37:39.117618 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:37:39.117843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:37:39.119820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:37:39.120048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:37:39.122274 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:37:39.122523 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:37:39.124552 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:37:39.124780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:37:39.126830 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:37:39.128968 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:37:39.131355 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:37:39.133618 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:37:39.146638 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Dec 12 18:37:39.149702 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:37:39.152656 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:37:39.154492 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:37:39.154598 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:37:39.157522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:37:39.167347 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:37:39.170497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:37:39.171977 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:37:39.174942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:37:39.177131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:37:39.179290 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:37:39.181065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:37:39.182263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:37:39.186284 systemd-journald[1190]: Time spent on flushing to /var/log/journal/f3f7a35b2afd45a09bf871c5d8114fe3 is 18.316ms for 966 entries. Dec 12 18:37:39.186284 systemd-journald[1190]: System Journal (/var/log/journal/f3f7a35b2afd45a09bf871c5d8114fe3) is 8M, max 195.6M, 187.6M free. Dec 12 18:37:39.246565 systemd-journald[1190]: Received client request to flush runtime journal. Dec 12 18:37:39.246612 kernel: loop0: detected capacity change from 0 to 224512 Dec 12 18:37:39.246634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:37:39.189410 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:37:39.195143 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:37:39.201461 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:37:39.203917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:37:39.206315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:37:39.218209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:37:39.222937 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:37:39.228334 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:37:39.233336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:37:39.249443 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:37:39.266536 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:37:39.270205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 12 18:37:39.271197 kernel: loop1: detected capacity change from 0 to 110984 Dec 12 18:37:39.278182 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:37:39.306446 kernel: loop2: detected capacity change from 0 to 128560 Dec 12 18:37:39.305428 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Dec 12 18:37:39.305446 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Dec 12 18:37:39.312634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:37:39.332184 kernel: loop3: detected capacity change from 0 to 224512 Dec 12 18:37:39.342191 kernel: loop4: detected capacity change from 0 to 110984 Dec 12 18:37:39.352193 kernel: loop5: detected capacity change from 0 to 128560 Dec 12 18:37:39.359828 (sd-merge)[1255]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 18:37:39.360607 (sd-merge)[1255]: Merged extensions into '/usr'. Dec 12 18:37:39.365372 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:37:39.365396 systemd[1]: Reloading... Dec 12 18:37:39.504227 zram_generator::config[1284]: No configuration found. Dec 12 18:37:39.570936 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:37:39.685558 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:37:39.686079 systemd[1]: Reloading finished in 320 ms. Dec 12 18:37:39.717022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:37:39.759934 systemd[1]: Starting ensure-sysext.service... Dec 12 18:37:39.762407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:37:39.772702 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:37:39.772715 systemd[1]: Reloading... Dec 12 18:37:39.783883 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:37:39.784057 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:37:39.784490 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:37:39.784859 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 18:37:39.786030 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:37:39.786539 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Dec 12 18:37:39.786647 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Dec 12 18:37:39.820192 zram_generator::config[1346]: No configuration found. Dec 12 18:37:39.996058 systemd[1]: Reloading finished in 223 ms. Dec 12 18:37:40.045771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:40.045939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:37:40.047138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:37:40.049809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
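The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier. As a rough sketch covering a commonly documented subset of the search path (see systemd-sysext(8) for the authoritative list), the candidate images can be enumerated like this:

    from pathlib import Path

    # Subset of the directories systemd-sysext scans for extension images.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # On this system, /etc/extensions/kubernetes.raw is the symlink
            # written during the Ignition files stage earlier in the log.
            print(entry, "->", entry.resolve())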
Dec 12 18:37:40.052573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:37:40.054482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:37:40.054619 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:37:40.054745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:40.057091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:40.057362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:37:40.057585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:37:40.057702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:37:40.057815 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:40.061109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:37:40.061342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:37:40.063603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:37:40.063798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:37:40.066087 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:37:40.066303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:37:40.072319 systemd[1]: Finished ensure-sysext.service. Dec 12 18:37:40.074349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:37:40.074564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:37:40.075701 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:37:40.077355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:37:40.077400 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:37:40.077456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:37:40.077503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:37:40.077553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 12 18:37:40.091864 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:37:40.092093 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:37:40.105233 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:37:40.105241 systemd-tmpfiles[1318]: Skipping /boot Dec 12 18:37:40.105444 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:37:40.116989 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:37:40.117006 systemd-tmpfiles[1318]: Skipping /boot Dec 12 18:37:40.575337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:37:40.580644 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:37:40.583695 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:37:40.593528 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:37:40.600014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:37:40.606031 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 18:37:40.609870 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:37:40.614944 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:37:40.627382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:37:40.631664 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:37:40.634240 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:37:40.651328 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:37:40.657256 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:37:40.664265 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:37:40.667468 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:37:40.669223 augenrules[1427]: No rules Dec 12 18:37:40.670641 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:37:40.670898 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:37:40.673749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:37:40.684796 systemd-udevd[1415]: Using default interface naming scheme 'v255'. Dec 12 18:37:40.701805 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:37:40.709094 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:37:40.720186 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:37:40.764306 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:37:40.824575 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:37:40.824720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:37:40.830073 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 12 18:37:40.845241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:37:40.862220 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:37:40.864131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:37:40.890431 systemd-resolved[1397]: Positive Trust Anchors: Dec 12 18:37:40.890452 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:37:40.890489 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:37:40.901294 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 18:37:40.901654 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:37:40.901979 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:37:40.905367 systemd-resolved[1397]: Defaulting to hostname 'linux'. Dec 12 18:37:40.905839 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:37:40.906643 systemd-networkd[1439]: lo: Link UP Dec 12 18:37:40.906658 systemd-networkd[1439]: lo: Gained carrier Dec 12 18:37:40.908317 systemd-networkd[1439]: Enumeration completed Dec 12 18:37:40.908410 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:37:40.908703 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:37:40.908707 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:37:40.909368 systemd-networkd[1439]: eth0: Link UP Dec 12 18:37:40.909538 systemd-networkd[1439]: eth0: Gained carrier Dec 12 18:37:40.909552 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:37:40.914784 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:37:40.919658 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:37:40.921678 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:37:40.923015 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 18:37:40.923794 systemd[1]: Reached target network.target - Network. Dec 12 18:37:40.924680 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Dec 12 18:37:40.925305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:37:40.927236 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:37:40.929120 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:37:40.931362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 12 18:37:40.933641 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:37:40.935829 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:37:40.938140 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 18:37:40.938235 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2025-12-12 18:37:40.596196 UTC. Dec 12 18:37:40.938416 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:37:40.940593 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:37:40.943233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:37:40.943288 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:37:40.944840 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:37:40.948630 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:37:40.954409 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:37:40.960500 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:37:40.962709 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:37:40.965202 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:37:40.971756 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:37:40.974892 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:37:40.977951 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:37:40.981508 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:37:40.993876 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:37:40.995619 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:37:40.998315 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:37:40.998411 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:37:41.000400 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:37:41.004390 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:37:41.009486 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:37:41.012410 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:37:41.016658 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:37:41.020221 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:37:41.025443 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:37:41.028235 jq[1502]: false Dec 12 18:37:41.029599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:37:41.033298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 12 18:37:41.037572 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing passwd entry cache Dec 12 18:37:41.037576 oslogin_cache_refresh[1504]: Refreshing passwd entry cache Dec 12 18:37:41.038346 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:37:41.041237 extend-filesystems[1503]: Found /dev/vda6 Dec 12 18:37:41.046287 extend-filesystems[1503]: Found /dev/vda9 Dec 12 18:37:41.046287 extend-filesystems[1503]: Checking size of /dev/vda9 Dec 12 18:37:41.044258 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:37:41.046719 oslogin_cache_refresh[1504]: Failure getting users, quitting Dec 12 18:37:41.053731 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting users, quitting Dec 12 18:37:41.053731 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:37:41.053731 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Refreshing group entry cache Dec 12 18:37:41.047808 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:37:41.046740 oslogin_cache_refresh[1504]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:37:41.046800 oslogin_cache_refresh[1504]: Refreshing group entry cache Dec 12 18:37:41.054385 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Failure getting groups, quitting Dec 12 18:37:41.054385 google_oslogin_nss_cache[1504]: oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:37:41.054377 oslogin_cache_refresh[1504]: Failure getting groups, quitting Dec 12 18:37:41.054387 oslogin_cache_refresh[1504]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:37:41.056524 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:37:41.060447 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:37:41.080579 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:37:41.083660 extend-filesystems[1503]: Resized partition /dev/vda9 Dec 12 18:37:41.088927 extend-filesystems[1531]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:37:41.089610 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:37:41.094920 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:37:41.095417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:37:41.095820 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:37:41.096068 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:37:41.098560 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:37:41.098824 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:37:41.101471 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:37:41.101732 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 12 18:37:41.103368 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 18:37:41.120691 update_engine[1519]: I20251212 18:37:41.120252 1519 main.cc:92] Flatcar Update Engine starting Dec 12 18:37:41.125279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:37:41.126874 jq[1527]: true Dec 12 18:37:41.128421 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:37:41.142322 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 18:37:41.156187 jq[1543]: true Dec 12 18:37:41.164685 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:37:41.164685 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 18:37:41.164685 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 18:37:41.174270 extend-filesystems[1503]: Resized filesystem in /dev/vda9 Dec 12 18:37:41.171972 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:37:41.173200 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:37:41.177783 systemd-logind[1514]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:37:41.177819 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:37:41.179515 dbus-daemon[1499]: [system] SELinux support is enabled Dec 12 18:37:41.187111 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:37:41.187837 systemd-logind[1514]: New seat seat0. Dec 12 18:37:41.198974 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:37:41.199829 update_engine[1519]: I20251212 18:37:41.199779 1519 update_check_scheduler.cc:74] Next update check in 3m24s Dec 12 18:37:41.211897 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 18:37:41.213369 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:37:41.213397 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:37:41.214181 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:37:41.214203 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:37:41.214536 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:37:41.219660 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:37:41.240859 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:37:41.241751 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:37:41.251333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:37:41.261547 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
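The extend-filesystems unit grows the root filesystem online: the kernel and resize2fs report the ext4 filesystem on /dev/vda9 going from 553472 to 1864699 blocks at a 4k block size. Converting those block counts into sizes makes the change concrete, roughly 2.1 GiB before and 7.1 GiB after:

    BLOCK_SIZE = 4096  # "(4k) blocks" per the EXT4/resize2fs messages above

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    old_blocks, new_blocks = 553472, 1864699
    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB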
Dec 12 18:37:41.265546 kernel: kvm_amd: TSC scaling supported Dec 12 18:37:41.265598 kernel: kvm_amd: Nested Virtualization enabled Dec 12 18:37:41.265611 kernel: kvm_amd: Nested Paging enabled Dec 12 18:37:41.267930 kernel: kvm_amd: LBR virtualization supported Dec 12 18:37:41.267963 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 12 18:37:41.267977 kernel: kvm_amd: Virtual GIF supported Dec 12 18:37:41.298235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:37:41.318187 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:37:41.318979 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:37:41.374009 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:37:41.380759 containerd[1534]: time="2025-12-12T18:37:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:37:41.381329 containerd[1534]: time="2025-12-12T18:37:41.381281120Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:37:41.391461 containerd[1534]: time="2025-12-12T18:37:41.391412002Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.866µs" Dec 12 18:37:41.391543 containerd[1534]: time="2025-12-12T18:37:41.391528960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:37:41.391638 containerd[1534]: time="2025-12-12T18:37:41.391620273Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:37:41.391923 containerd[1534]: time="2025-12-12T18:37:41.391902218Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:37:41.391993 containerd[1534]: time="2025-12-12T18:37:41.391976699Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:37:41.392070 containerd[1534]: time="2025-12-12T18:37:41.392055765Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392211 containerd[1534]: time="2025-12-12T18:37:41.392188807Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392290 containerd[1534]: time="2025-12-12T18:37:41.392273789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392630 containerd[1534]: time="2025-12-12T18:37:41.392611438Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392681 containerd[1534]: time="2025-12-12T18:37:41.392670286Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392735 containerd[1534]: time="2025-12-12T18:37:41.392712772Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392777 containerd[1534]: 
time="2025-12-12T18:37:41.392766538Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:37:41.392916 containerd[1534]: time="2025-12-12T18:37:41.392902207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:37:41.393232 containerd[1534]: time="2025-12-12T18:37:41.393214440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:37:41.393326 containerd[1534]: time="2025-12-12T18:37:41.393302070Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:37:41.393404 containerd[1534]: time="2025-12-12T18:37:41.393383601Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:37:41.393516 containerd[1534]: time="2025-12-12T18:37:41.393496865Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:37:41.393862 containerd[1534]: time="2025-12-12T18:37:41.393840863Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:37:41.393989 containerd[1534]: time="2025-12-12T18:37:41.393972523Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:37:41.402634 containerd[1534]: time="2025-12-12T18:37:41.402598441Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:37:41.402675 containerd[1534]: time="2025-12-12T18:37:41.402645359Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:37:41.402675 containerd[1534]: time="2025-12-12T18:37:41.402666333Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:37:41.402711 containerd[1534]: time="2025-12-12T18:37:41.402677650Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:37:41.402711 containerd[1534]: time="2025-12-12T18:37:41.402690023Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:37:41.402711 containerd[1534]: time="2025-12-12T18:37:41.402700658Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:37:41.402831 containerd[1534]: time="2025-12-12T18:37:41.402717778Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:37:41.402831 containerd[1534]: time="2025-12-12T18:37:41.402733286Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:37:41.402831 containerd[1534]: time="2025-12-12T18:37:41.402751911Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:37:41.402831 containerd[1534]: time="2025-12-12T18:37:41.402761013Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:37:41.402831 containerd[1534]: time="2025-12-12T18:37:41.402770516Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:37:41.402831 containerd[1534]: 
time="2025-12-12T18:37:41.402783396Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:37:41.402934 containerd[1534]: time="2025-12-12T18:37:41.402900287Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:37:41.402934 containerd[1534]: time="2025-12-12T18:37:41.402918424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:37:41.402934 containerd[1534]: time="2025-12-12T18:37:41.402931179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:37:41.402989 containerd[1534]: time="2025-12-12T18:37:41.402942823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:37:41.402989 containerd[1534]: time="2025-12-12T18:37:41.402954225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:37:41.402989 containerd[1534]: time="2025-12-12T18:37:41.402966060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:37:41.402989 containerd[1534]: time="2025-12-12T18:37:41.402977157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:37:41.402989 containerd[1534]: time="2025-12-12T18:37:41.402986987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:37:41.403080 containerd[1534]: time="2025-12-12T18:37:41.402998343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:37:41.403080 containerd[1534]: time="2025-12-12T18:37:41.403009506Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:37:41.403080 containerd[1534]: time="2025-12-12T18:37:41.403020746Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:37:41.403080 containerd[1534]: time="2025-12-12T18:37:41.403064892Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:37:41.403080 containerd[1534]: time="2025-12-12T18:37:41.403077743Z" level=info msg="Start snapshots syncer" Dec 12 18:37:41.403198 containerd[1534]: time="2025-12-12T18:37:41.403101500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:37:41.403443 containerd[1534]: time="2025-12-12T18:37:41.403385536Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:37:41.403546 containerd[1534]: time="2025-12-12T18:37:41.403447357Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:37:41.403546 containerd[1534]: time="2025-12-12T18:37:41.403490535Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:37:41.403604 containerd[1534]: time="2025-12-12T18:37:41.403586671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:37:41.403630 containerd[1534]: time="2025-12-12T18:37:41.403605776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:37:41.403630 containerd[1534]: time="2025-12-12T18:37:41.403615769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:37:41.403669 containerd[1534]: time="2025-12-12T18:37:41.403625658Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:37:41.403669 containerd[1534]: time="2025-12-12T18:37:41.403647812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:37:41.403669 containerd[1534]: time="2025-12-12T18:37:41.403657997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:37:41.403669 containerd[1534]: time="2025-12-12T18:37:41.403667396Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:37:41.403766 containerd[1534]: time="2025-12-12T18:37:41.403690816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:37:41.403766 containerd[1534]: 
time="2025-12-12T18:37:41.403701088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:37:41.403833 containerd[1534]: time="2025-12-12T18:37:41.403811793Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:37:41.403880 containerd[1534]: time="2025-12-12T18:37:41.403858058Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:37:41.403943 containerd[1534]: time="2025-12-12T18:37:41.403919189Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:37:41.403943 containerd[1534]: time="2025-12-12T18:37:41.403938121Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:37:41.404002 containerd[1534]: time="2025-12-12T18:37:41.403954827Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:37:41.404002 containerd[1534]: time="2025-12-12T18:37:41.403964093Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:37:41.404002 containerd[1534]: time="2025-12-12T18:37:41.403978392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:37:41.404002 containerd[1534]: time="2025-12-12T18:37:41.403998753Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:37:41.404093 containerd[1534]: time="2025-12-12T18:37:41.404019478Z" level=info msg="runtime interface created" Dec 12 18:37:41.404093 containerd[1534]: time="2025-12-12T18:37:41.404027879Z" level=info msg="created NRI interface" Dec 12 18:37:41.404093 containerd[1534]: time="2025-12-12T18:37:41.404036742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:37:41.404093 containerd[1534]: time="2025-12-12T18:37:41.404077521Z" level=info msg="Connect containerd service" Dec 12 18:37:41.404232 containerd[1534]: time="2025-12-12T18:37:41.404103837Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:37:41.405015 containerd[1534]: time="2025-12-12T18:37:41.404974112Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:37:41.417839 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:37:41.418094 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:37:41.438534 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:37:41.459420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:37:41.468472 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:37:41.472532 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:37:41.475928 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:37:41.478115 systemd[1]: Reached target getty.target - Login Prompts. 
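The containerd error at the end of its plugin load, "no network config found in /etc/cni/net.d", is expected at this stage: later entries show kubelet handing containerd a Pod CIDR over CRI, and Calico is what eventually drops the real CNI config. Purely to illustrate the kind of file the CRI plugin is scanning for, a sketch that writes a minimal bridge conflist; the file name, subnet and plugin choice are assumptions, not what this node ends up using:

import json, os

# Hypothetical minimal conflist; on this node Calico later installs the real one.
conf = {
    "cniVersion": "1.0.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",            # placeholder subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

os.makedirs("/etc/cni/net.d", exist_ok=True)     # needs root
with open("/etc/cni/net.d/10-containerd-net.conflist", "w") as f:
    json.dump(conf, f, indent=2)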
Dec 12 18:37:41.488965 containerd[1534]: time="2025-12-12T18:37:41.488911986Z" level=info msg="Start subscribing containerd event" Dec 12 18:37:41.489030 containerd[1534]: time="2025-12-12T18:37:41.488946282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:37:41.489030 containerd[1534]: time="2025-12-12T18:37:41.488980454Z" level=info msg="Start recovering state" Dec 12 18:37:41.489150 containerd[1534]: time="2025-12-12T18:37:41.489109764Z" level=info msg="Start event monitor" Dec 12 18:37:41.489150 containerd[1534]: time="2025-12-12T18:37:41.489135717Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:37:41.489292 containerd[1534]: time="2025-12-12T18:37:41.489020504Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:37:41.489292 containerd[1534]: time="2025-12-12T18:37:41.489145854Z" level=info msg="Start streaming server" Dec 12 18:37:41.489292 containerd[1534]: time="2025-12-12T18:37:41.489241655Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:37:41.489292 containerd[1534]: time="2025-12-12T18:37:41.489251130Z" level=info msg="runtime interface starting up..." Dec 12 18:37:41.489292 containerd[1534]: time="2025-12-12T18:37:41.489257278Z" level=info msg="starting plugins..." Dec 12 18:37:41.489292 containerd[1534]: time="2025-12-12T18:37:41.489296091Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:37:41.489543 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:37:41.490272 containerd[1534]: time="2025-12-12T18:37:41.490239926Z" level=info msg="containerd successfully booted in 0.110316s" Dec 12 18:37:42.517219 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:37:42.520177 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:48638.service - OpenSSH per-connection server daemon (10.0.0.1:48638). Dec 12 18:37:42.590488 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 48638 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:42.592062 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:42.601934 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:37:42.604871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:37:42.609057 systemd-logind[1514]: New session 1 of user core. Dec 12 18:37:42.628893 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:37:42.633437 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:37:42.654280 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:37:42.656695 systemd-logind[1514]: New session c1 of user core. Dec 12 18:37:42.810134 systemd[1621]: Queued start job for default target default.target. Dec 12 18:37:42.829319 systemd[1621]: Created slice app.slice - User Application Slice. Dec 12 18:37:42.829343 systemd[1621]: Reached target paths.target - Paths. Dec 12 18:37:42.829391 systemd[1621]: Reached target timers.target - Timers. Dec 12 18:37:42.830768 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:37:42.841266 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:37:42.841455 systemd[1621]: Reached target sockets.target - Sockets. Dec 12 18:37:42.841499 systemd[1621]: Reached target basic.target - Basic System. 
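sshd identifies the client key as "RSA SHA256:XP/Exdht…", the standard OpenSSH fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch that reproduces that fingerprint from the authorized_keys file updated earlier in the log; any OpenSSH public key line works:

import base64, hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style SHA256 fingerprint of a public key line."""
    blob = base64.b64decode(pubkey_line.split()[1])   # second field is the key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/home/core/.ssh/authorized_keys") as f:    # path from the log above
    for line in f:
        if line.strip() and not line.startswith("#"):
            print(ssh_fingerprint(line))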
Dec 12 18:37:42.841536 systemd[1621]: Reached target default.target - Main User Target. Dec 12 18:37:42.841567 systemd[1621]: Startup finished in 177ms. Dec 12 18:37:42.842138 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:37:42.845736 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:37:42.911338 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:48648.service - OpenSSH per-connection server daemon (10.0.0.1:48648). Dec 12 18:37:42.965440 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 48648 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:42.966763 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:42.970683 systemd-logind[1514]: New session 2 of user core. Dec 12 18:37:42.978300 systemd-networkd[1439]: eth0: Gained IPv6LL Dec 12 18:37:42.980295 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:37:42.982459 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:37:42.985915 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:37:42.988902 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 18:37:42.991903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:37:43.004607 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:37:43.026411 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 18:37:43.026673 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 18:37:43.028881 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:37:43.031248 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:37:43.065084 sshd[1643]: Connection closed by 10.0.0.1 port 48648 Dec 12 18:37:43.065308 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:43.073527 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:48648.service: Deactivated successfully. Dec 12 18:37:43.075022 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:37:43.075798 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:37:43.077994 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:48662.service - OpenSSH per-connection server daemon (10.0.0.1:48662). Dec 12 18:37:43.080700 systemd-logind[1514]: Removed session 2. Dec 12 18:37:43.134315 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 48662 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:43.135647 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:43.139766 systemd-logind[1514]: New session 3 of user core. Dec 12 18:37:43.153304 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:37:43.206786 sshd[1664]: Connection closed by 10.0.0.1 port 48662 Dec 12 18:37:43.207065 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:43.209782 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:48662.service: Deactivated successfully. Dec 12 18:37:43.211635 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:37:43.213036 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:37:43.214369 systemd-logind[1514]: Removed session 3. 
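Sessions 2 and 3 above open and close within a fraction of a second, and the same churn continues below. A small sketch for pulling that session activity out of a journal dump like this one; the regexes rely only on the "Accepted publickey" and "Connection closed" wording visible in these lines:

import re, sys

ACCEPTED = re.compile(r"Accepted publickey for (\S+) from (\S+) port \d+")
CLOSED = re.compile(r"Connection closed by (\S+) port \d+")

def summarize(journal_text: str) -> None:
    """Count sshd sessions opened and closed per client address."""
    opened, closed = {}, {}
    for _user, addr in ACCEPTED.findall(journal_text):
        opened[addr] = opened.get(addr, 0) + 1
    for addr in CLOSED.findall(journal_text):
        closed[addr] = closed.get(addr, 0) + 1
    for addr in sorted(set(opened) | set(closed)):
        print(f"{addr}: {opened.get(addr, 0)} opened, {closed.get(addr, 0)} closed")

if __name__ == "__main__":
    summarize(sys.stdin.read())   # e.g.  journalctl | python3 ssh_sessions.py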
Dec 12 18:37:43.725622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:37:43.728137 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:37:43.730215 systemd[1]: Startup finished in 3.005s (kernel) + 8.897s (initrd) + 6.253s (userspace) = 18.156s. Dec 12 18:37:43.735523 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:37:44.165364 kubelet[1674]: E1212 18:37:44.165194 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:37:44.169511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:37:44.169717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:37:44.170220 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 264.9M memory peak. Dec 12 18:37:53.033547 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:59786.service - OpenSSH per-connection server daemon (10.0.0.1:59786). Dec 12 18:37:53.098934 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 59786 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:53.101249 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:53.116517 systemd-logind[1514]: New session 4 of user core. Dec 12 18:37:53.126376 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:37:53.196752 sshd[1691]: Connection closed by 10.0.0.1 port 59786 Dec 12 18:37:53.196504 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:53.212894 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:59786.service: Deactivated successfully. Dec 12 18:37:53.215254 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:37:53.216590 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:37:53.226221 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:59802.service - OpenSSH per-connection server daemon (10.0.0.1:59802). Dec 12 18:37:53.227908 systemd-logind[1514]: Removed session 4. Dec 12 18:37:53.304847 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 59802 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:53.304666 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:53.317494 systemd-logind[1514]: New session 5 of user core. Dec 12 18:37:53.327590 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:37:53.387835 sshd[1700]: Connection closed by 10.0.0.1 port 59802 Dec 12 18:37:53.388445 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:53.404596 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:59802.service: Deactivated successfully. Dec 12 18:37:53.407025 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:37:53.413605 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:37:53.414517 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:59806.service - OpenSSH per-connection server daemon (10.0.0.1:59806). Dec 12 18:37:53.415703 systemd-logind[1514]: Removed session 5. 
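The kubelet dies here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is normally written by kubeadm init or kubeadm join rather than by hand. Just to make the error concrete, a sketch that drops a minimal KubeletConfiguration at the path the error names; the values are illustrative assumptions, except that cgroupDriver matches the systemd cgroup setting containerd advertises later in this log:

import os

CONFIG = "/var/lib/kubelet/config.yaml"   # the path the kubelet complains about

MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd     # matches SystemdCgroup=true in the containerd CRI config
failSwapOn: false         # illustrative; tune for the actual node
"""

os.makedirs(os.path.dirname(CONFIG), exist_ok=True)
with open(CONFIG, "w") as f:
    f.write(MINIMAL)
print(f"wrote {CONFIG}; restart kubelet.service to pick it up")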
Dec 12 18:37:53.486446 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 59806 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:53.488071 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:53.511352 systemd-logind[1514]: New session 6 of user core. Dec 12 18:37:53.528682 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:37:53.608723 sshd[1709]: Connection closed by 10.0.0.1 port 59806 Dec 12 18:37:53.609967 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:53.625726 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:59806.service: Deactivated successfully. Dec 12 18:37:53.632000 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:37:53.633264 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:37:53.637503 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:59808.service - OpenSSH per-connection server daemon (10.0.0.1:59808). Dec 12 18:37:53.638967 systemd-logind[1514]: Removed session 6. Dec 12 18:37:53.712264 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 59808 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:53.714217 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:53.720622 systemd-logind[1514]: New session 7 of user core. Dec 12 18:37:53.737050 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:37:53.799825 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:37:53.800127 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:37:53.825003 sudo[1719]: pam_unix(sudo:session): session closed for user root Dec 12 18:37:53.829369 sshd[1718]: Connection closed by 10.0.0.1 port 59808 Dec 12 18:37:53.827728 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:53.844132 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:59808.service: Deactivated successfully. Dec 12 18:37:53.846288 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:37:53.847595 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:37:53.850701 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:59812.service - OpenSSH per-connection server daemon (10.0.0.1:59812). Dec 12 18:37:53.851731 systemd-logind[1514]: Removed session 7. Dec 12 18:37:53.943811 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 59812 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:53.945782 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:53.951108 systemd-logind[1514]: New session 8 of user core. Dec 12 18:37:53.959532 systemd[1]: Started session-8.scope - Session 8 of User core. 
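The first sudo in session 7 runs setenforce 1, flipping SELinux into enforcing mode. A tiny sketch for confirming the mode afterwards without shelling out, by reading the selinuxfs node that setenforce itself writes; the path is the standard selinuxfs mount point and is assumed to be present on this image:

from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")   # standard selinuxfs mount point

def selinux_mode() -> str:
    if not ENFORCE.exists():
        return "disabled (selinuxfs not mounted)"
    return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

print(selinux_mode())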
Dec 12 18:37:54.018651 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:37:54.021043 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:37:54.036257 sudo[1730]: pam_unix(sudo:session): session closed for user root Dec 12 18:37:54.045420 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:37:54.045802 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:37:54.062425 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:37:54.141095 augenrules[1752]: No rules Dec 12 18:37:54.143038 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:37:54.145055 sudo[1729]: pam_unix(sudo:session): session closed for user root Dec 12 18:37:54.143483 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:37:54.148402 sshd[1728]: Connection closed by 10.0.0.1 port 59812 Dec 12 18:37:54.148724 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:54.168867 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:59812.service: Deactivated successfully. Dec 12 18:37:54.172659 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:37:54.174710 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:37:54.177121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:37:54.180798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:37:54.182043 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:59824.service - OpenSSH per-connection server daemon (10.0.0.1:59824). Dec 12 18:37:54.183585 systemd-logind[1514]: Removed session 8. Dec 12 18:37:54.254576 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 59824 ssh2: RSA SHA256:XP/ExdhtEcSQYMAbRPAas7ojuw0It2lorfeil1KG79k Dec 12 18:37:54.255270 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:54.269720 systemd-logind[1514]: New session 9 of user core. Dec 12 18:37:54.286441 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:37:54.355183 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:37:54.355865 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:37:54.388747 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 18:37:54.448003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:37:54.453695 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:37:54.459382 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 18:37:54.459945 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
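audit-rules.service reloads whatever is left in /etc/audit/rules.d after the sudo rm above deletes the two shipped rule files, and augenrules duly reports "No rules". A hedged sketch of reproducing that reload by hand with the augenrules tool named in the log; the example watch rule is purely illustrative and the whole thing needs root:

import glob, subprocess

RULES_DIR = "/etc/audit/rules.d"

# Illustrative rule fragment (not one of the files removed above).
with open(f"{RULES_DIR}/90-example.rules", "w") as f:
    f.write("-w /etc/kubernetes/ -p wa -k kube-config\n")

print("rule fragments:", sorted(glob.glob(f"{RULES_DIR}/*.rules")))

# augenrules merges the fragments into /etc/audit/audit.rules and loads them,
# which is what the Load Audit Rules unit drives at boot.
subprocess.run(["augenrules", "--load"], check=True)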
Dec 12 18:37:54.517583 kubelet[1782]: E1212 18:37:54.517446 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:37:54.523697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:37:54.523926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:37:54.524369 systemd[1]: kubelet.service: Consumed 277ms CPU time, 111.1M memory peak. Dec 12 18:37:55.073061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:37:55.073244 systemd[1]: kubelet.service: Consumed 277ms CPU time, 111.1M memory peak. Dec 12 18:37:55.075593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:37:55.100985 systemd[1]: Reload requested from client PID 1824 ('systemctl') (unit session-9.scope)... Dec 12 18:37:55.101002 systemd[1]: Reloading... Dec 12 18:37:55.194229 zram_generator::config[1871]: No configuration found. Dec 12 18:37:56.052951 systemd[1]: Reloading finished in 951 ms. Dec 12 18:37:56.136479 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:37:56.136624 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:37:56.137051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:37:56.137108 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.2M memory peak. Dec 12 18:37:56.139301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:37:56.404647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:37:56.420561 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:37:56.456833 kubelet[1913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:37:56.456833 kubelet[1913]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:37:56.456833 kubelet[1913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
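The three deprecation warnings all say the same thing: these kubelet flags are being folded into the config file. As far as I know the v1beta1 KubeletConfiguration has direct equivalents for two of them; treat the exact field names below as assumptions to check against the docs for the kubelet version logged just below (v1.32.4):

# Rough flag-to-config mapping suggested by the deprecation warnings (assumed names):
FLAG_TO_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",  # e.g. unix:///run/containerd/containerd.sock
    "--volume-plugin-dir": "volumePluginDir",
    # --pod-infra-container-image has no config equivalent; per the warning it is
    # removed in 1.35 and the image GC gets the sandbox image from CRI instead.
}

for flag, field in FLAG_TO_FIELD.items():
    print(f"{flag:32} -> KubeletConfiguration.{field}")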
Dec 12 18:37:56.457271 kubelet[1913]: I1212 18:37:56.456880 1913 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:37:56.908669 kubelet[1913]: I1212 18:37:56.908627 1913 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:37:56.908669 kubelet[1913]: I1212 18:37:56.908654 1913 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:37:56.908931 kubelet[1913]: I1212 18:37:56.908907 1913 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:37:56.937221 kubelet[1913]: I1212 18:37:56.937179 1913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:37:56.946256 kubelet[1913]: I1212 18:37:56.946216 1913 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:37:56.952174 kubelet[1913]: I1212 18:37:56.952128 1913 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:37:56.953540 kubelet[1913]: I1212 18:37:56.953490 1913 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:37:56.953708 kubelet[1913]: I1212 18:37:56.953537 1913 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.71","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:37:56.953811 kubelet[1913]: I1212 18:37:56.953713 1913 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:37:56.953811 kubelet[1913]: I1212 18:37:56.953723 1913 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:37:56.953902 kubelet[1913]: I1212 18:37:56.953884 1913 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:37:56.956475 kubelet[1913]: I1212 18:37:56.956448 1913 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:37:56.956475 kubelet[1913]: I1212 18:37:56.956475 1913 kubelet.go:341] "Adding 
static pod path" path="/etc/kubernetes/manifests" Dec 12 18:37:56.956537 kubelet[1913]: I1212 18:37:56.956498 1913 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:37:56.956537 kubelet[1913]: I1212 18:37:56.956509 1913 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:37:56.956649 kubelet[1913]: E1212 18:37:56.956610 1913 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:37:56.956677 kubelet[1913]: E1212 18:37:56.956658 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:37:56.961191 kubelet[1913]: I1212 18:37:56.960139 1913 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:37:56.961191 kubelet[1913]: I1212 18:37:56.960623 1913 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:37:56.961191 kubelet[1913]: W1212 18:37:56.960674 1913 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:37:56.962694 kubelet[1913]: I1212 18:37:56.962663 1913 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:37:56.962737 kubelet[1913]: I1212 18:37:56.962709 1913 server.go:1287] "Started kubelet" Dec 12 18:37:56.963038 kubelet[1913]: I1212 18:37:56.963002 1913 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:37:56.964096 kubelet[1913]: I1212 18:37:56.964053 1913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:37:56.964321 kubelet[1913]: I1212 18:37:56.964235 1913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:37:56.964596 kubelet[1913]: I1212 18:37:56.964573 1913 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:37:56.964811 kubelet[1913]: I1212 18:37:56.964789 1913 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:37:56.967009 kubelet[1913]: E1212 18:37:56.966952 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:56.967009 kubelet[1913]: I1212 18:37:56.967011 1913 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:37:56.967107 kubelet[1913]: I1212 18:37:56.967090 1913 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:37:56.967331 kubelet[1913]: I1212 18:37:56.967311 1913 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:37:56.967393 kubelet[1913]: I1212 18:37:56.967374 1913 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:37:56.968461 kubelet[1913]: W1212 18:37:56.968431 1913 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.71" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 12 18:37:56.968501 kubelet[1913]: E1212 18:37:56.968481 1913 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.71\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 12 
18:37:56.968686 kubelet[1913]: W1212 18:37:56.968539 1913 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 12 18:37:56.968686 kubelet[1913]: E1212 18:37:56.968560 1913 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 12 18:37:56.969107 kubelet[1913]: I1212 18:37:56.969072 1913 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:37:56.969226 kubelet[1913]: I1212 18:37:56.969201 1913 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:37:56.970693 kubelet[1913]: E1212 18:37:56.970670 1913 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:37:56.970740 kubelet[1913]: I1212 18:37:56.970714 1913 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:37:56.982792 kubelet[1913]: W1212 18:37:56.981986 1913 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 12 18:37:56.982792 kubelet[1913]: E1212 18:37:56.982021 1913 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Dec 12 18:37:56.982792 kubelet[1913]: E1212 18:37:56.982133 1913 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.71\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 12 18:37:56.983669 kubelet[1913]: E1212 18:37:56.981845 1913 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.71.18808bb8265674d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.71,UID:10.0.0.71,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.71,},FirstTimestamp:2025-12-12 18:37:56.962677969 +0000 UTC m=+0.538290851,LastTimestamp:2025-12-12 18:37:56.962677969 +0000 UTC m=+0.538290851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.71,}" Dec 12 18:37:56.985614 kubelet[1913]: I1212 18:37:56.985594 1913 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:37:56.985614 kubelet[1913]: I1212 18:37:56.985611 1913 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:37:56.985687 
kubelet[1913]: I1212 18:37:56.985625 1913 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:37:57.067999 kubelet[1913]: E1212 18:37:57.067953 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.168420 kubelet[1913]: E1212 18:37:57.168285 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.269002 kubelet[1913]: E1212 18:37:57.268936 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.369450 kubelet[1913]: E1212 18:37:57.369401 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.469750 kubelet[1913]: E1212 18:37:57.469584 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.544375 kubelet[1913]: I1212 18:37:57.544330 1913 policy_none.go:49] "None policy: Start" Dec 12 18:37:57.544375 kubelet[1913]: I1212 18:37:57.544367 1913 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:37:57.544375 kubelet[1913]: I1212 18:37:57.544380 1913 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:37:57.550171 kubelet[1913]: E1212 18:37:57.550109 1913 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.71\" not found" node="10.0.0.71" Dec 12 18:37:57.553745 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:37:57.564762 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:37:57.568617 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:37:57.570681 kubelet[1913]: E1212 18:37:57.570627 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.580829 kubelet[1913]: I1212 18:37:57.580750 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:37:57.580952 kubelet[1913]: I1212 18:37:57.580845 1913 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:37:57.581139 kubelet[1913]: I1212 18:37:57.581111 1913 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:37:57.581209 kubelet[1913]: I1212 18:37:57.581135 1913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:37:57.581443 kubelet[1913]: I1212 18:37:57.581415 1913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:37:57.583140 kubelet[1913]: I1212 18:37:57.583108 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:37:57.583140 kubelet[1913]: I1212 18:37:57.583139 1913 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:37:57.583378 kubelet[1913]: I1212 18:37:57.583297 1913 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
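The container-manager config dumped a few entries back lists the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small sketch that turns those logged values into absolute trigger points; the capacities used in the example are made up, not this VM's real numbers:

# Hard-eviction thresholds as logged in the NodeConfig: signal -> (kind, value).
THRESHOLDS = {
    "memory.available":   ("quantity",   100 * 1024**2),   # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def trigger_point(signal: str, capacity: float) -> float:
    """Absolute value below which the signal trips hard eviction."""
    kind, value = THRESHOLDS[signal]
    return value if kind == "quantity" else capacity * value

mem_capacity = 8 * 1024**3    # e.g. an 8 GiB node (illustrative)
fs_capacity = 7.1 * 1024**3   # roughly the resized root filesystem from earlier

print(f"memory.available trips below {trigger_point('memory.available', mem_capacity) / 1024**2:.0f} MiB")
print(f"nodefs.available trips below {trigger_point('nodefs.available', fs_capacity) / 1024**3:.2f} GiB")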
Dec 12 18:37:57.583378 kubelet[1913]: I1212 18:37:57.583313 1913 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:37:57.583473 kubelet[1913]: E1212 18:37:57.583455 1913 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 12 18:37:57.585545 kubelet[1913]: E1212 18:37:57.585442 1913 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:37:57.585545 kubelet[1913]: E1212 18:37:57.585488 1913 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.71\" not found" Dec 12 18:37:57.682433 kubelet[1913]: I1212 18:37:57.682388 1913 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.71" Dec 12 18:37:57.686844 kubelet[1913]: I1212 18:37:57.686810 1913 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.71" Dec 12 18:37:57.686844 kubelet[1913]: E1212 18:37:57.686834 1913 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.71\": node \"10.0.0.71\" not found" Dec 12 18:37:57.701627 kubelet[1913]: E1212 18:37:57.701586 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.802462 kubelet[1913]: E1212 18:37:57.802320 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.903152 kubelet[1913]: E1212 18:37:57.903083 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:57.910393 kubelet[1913]: I1212 18:37:57.910323 1913 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 12 18:37:57.910548 kubelet[1913]: W1212 18:37:57.910518 1913 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 12 18:37:57.956875 kubelet[1913]: E1212 18:37:57.956808 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:37:58.003691 kubelet[1913]: E1212 18:37:58.003644 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:58.063802 sudo[1768]: pam_unix(sudo:session): session closed for user root Dec 12 18:37:58.065282 sshd[1767]: Connection closed by 10.0.0.1 port 59824 Dec 12 18:37:58.065606 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:58.069707 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:59824.service: Deactivated successfully. Dec 12 18:37:58.071784 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:37:58.072012 systemd[1]: session-9.scope: Consumed 585ms CPU time, 72.6M memory peak. Dec 12 18:37:58.073374 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:37:58.074448 systemd-logind[1514]: Removed session 9. 
Dec 12 18:37:58.104463 kubelet[1913]: E1212 18:37:58.104399 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:58.204915 kubelet[1913]: E1212 18:37:58.204843 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:58.305808 kubelet[1913]: E1212 18:37:58.305723 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:58.406112 kubelet[1913]: E1212 18:37:58.405959 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:58.506110 kubelet[1913]: E1212 18:37:58.506049 1913 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 12 18:37:58.607432 kubelet[1913]: I1212 18:37:58.607384 1913 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 12 18:37:58.607753 containerd[1534]: time="2025-12-12T18:37:58.607707105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:37:58.608141 kubelet[1913]: I1212 18:37:58.607887 1913 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 12 18:37:58.957285 kubelet[1913]: E1212 18:37:58.957237 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:37:58.957285 kubelet[1913]: I1212 18:37:58.957259 1913 apiserver.go:52] "Watching apiserver" Dec 12 18:37:58.959993 kubelet[1913]: E1212 18:37:58.959780 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:37:58.967301 systemd[1]: Created slice kubepods-besteffort-pod2ed82606_f50d_4ea0_9846_de4d3c635f6a.slice - libcontainer container kubepods-besteffort-pod2ed82606_f50d_4ea0_9846_de4d3c635f6a.slice. 
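kubelet pushes PodCIDR 192.168.1.0/24 to containerd over CRI, while containerd keeps waiting for another component (Calico, whose pods appear below) to drop the CNI config. A quick look at what a /24 Pod CIDR gives this node, using the value from the log:

import ipaddress

pod_cidr = ipaddress.ip_network("192.168.1.0/24")           # newPodCIDR from the log
print(pod_cidr.num_addresses, "addresses in the range")     # 256
print(sum(1 for _ in pod_cidr.hosts()), "usable host IPs")  # 254
print("first/last usable:", pod_cidr[1], pod_cidr[-2])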
Dec 12 18:37:58.967977 kubelet[1913]: I1212 18:37:58.967941 1913 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:37:58.978975 kubelet[1913]: I1212 18:37:58.978943 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ed82606-f50d-4ea0-9846-de4d3c635f6a-lib-modules\") pod \"kube-proxy-x8f7p\" (UID: \"2ed82606-f50d-4ea0-9846-de4d3c635f6a\") " pod="kube-system/kube-proxy-x8f7p" Dec 12 18:37:58.979063 kubelet[1913]: I1212 18:37:58.978980 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-cni-bin-dir\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979063 kubelet[1913]: I1212 18:37:58.979002 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-cni-log-dir\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979063 kubelet[1913]: I1212 18:37:58.979023 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23425178-dc15-4b8b-a797-8dc237fc22c8-node-certs\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979304 kubelet[1913]: I1212 18:37:58.979043 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23425178-dc15-4b8b-a797-8dc237fc22c8-tigera-ca-bundle\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979367 kubelet[1913]: I1212 18:37:58.979355 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-xtables-lock\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979520 kubelet[1913]: I1212 18:37:58.979501 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71e9495c-ad4c-46cf-aca9-046d6f589532-kubelet-dir\") pod \"csi-node-driver-ddqbs\" (UID: \"71e9495c-ad4c-46cf-aca9-046d6f589532\") " pod="calico-system/csi-node-driver-ddqbs" Dec 12 18:37:58.979554 kubelet[1913]: I1212 18:37:58.979532 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71e9495c-ad4c-46cf-aca9-046d6f589532-socket-dir\") pod \"csi-node-driver-ddqbs\" (UID: \"71e9495c-ad4c-46cf-aca9-046d6f589532\") " pod="calico-system/csi-node-driver-ddqbs" Dec 12 18:37:58.979588 kubelet[1913]: I1212 18:37:58.979559 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddj5x\" (UniqueName: \"kubernetes.io/projected/71e9495c-ad4c-46cf-aca9-046d6f589532-kube-api-access-ddj5x\") pod \"csi-node-driver-ddqbs\" (UID: 
\"71e9495c-ad4c-46cf-aca9-046d6f589532\") " pod="calico-system/csi-node-driver-ddqbs" Dec 12 18:37:58.979588 kubelet[1913]: I1212 18:37:58.979579 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ed82606-f50d-4ea0-9846-de4d3c635f6a-kube-proxy\") pod \"kube-proxy-x8f7p\" (UID: \"2ed82606-f50d-4ea0-9846-de4d3c635f6a\") " pod="kube-system/kube-proxy-x8f7p" Dec 12 18:37:58.979650 kubelet[1913]: I1212 18:37:58.979602 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-var-run-calico\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979650 kubelet[1913]: I1212 18:37:58.979627 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71e9495c-ad4c-46cf-aca9-046d6f589532-registration-dir\") pod \"csi-node-driver-ddqbs\" (UID: \"71e9495c-ad4c-46cf-aca9-046d6f589532\") " pod="calico-system/csi-node-driver-ddqbs" Dec 12 18:37:58.979708 kubelet[1913]: I1212 18:37:58.979652 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ed82606-f50d-4ea0-9846-de4d3c635f6a-xtables-lock\") pod \"kube-proxy-x8f7p\" (UID: \"2ed82606-f50d-4ea0-9846-de4d3c635f6a\") " pod="kube-system/kube-proxy-x8f7p" Dec 12 18:37:58.979739 kubelet[1913]: I1212 18:37:58.979676 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-lib-modules\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979775 kubelet[1913]: I1212 18:37:58.979745 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-policysync\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979775 kubelet[1913]: I1212 18:37:58.979769 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-var-lib-calico\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979833 kubelet[1913]: I1212 18:37:58.979793 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr7fj\" (UniqueName: \"kubernetes.io/projected/23425178-dc15-4b8b-a797-8dc237fc22c8-kube-api-access-zr7fj\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979833 kubelet[1913]: I1212 18:37:58.979819 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71e9495c-ad4c-46cf-aca9-046d6f589532-varrun\") pod \"csi-node-driver-ddqbs\" (UID: \"71e9495c-ad4c-46cf-aca9-046d6f589532\") " pod="calico-system/csi-node-driver-ddqbs" 
Dec 12 18:37:58.979898 kubelet[1913]: I1212 18:37:58.979842 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5jv\" (UniqueName: \"kubernetes.io/projected/2ed82606-f50d-4ea0-9846-de4d3c635f6a-kube-api-access-gc5jv\") pod \"kube-proxy-x8f7p\" (UID: \"2ed82606-f50d-4ea0-9846-de4d3c635f6a\") " pod="kube-system/kube-proxy-x8f7p" Dec 12 18:37:58.979898 kubelet[1913]: I1212 18:37:58.979864 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-cni-net-dir\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.979898 kubelet[1913]: I1212 18:37:58.979887 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23425178-dc15-4b8b-a797-8dc237fc22c8-flexvol-driver-host\") pod \"calico-node-528kl\" (UID: \"23425178-dc15-4b8b-a797-8dc237fc22c8\") " pod="calico-system/calico-node-528kl" Dec 12 18:37:58.981575 systemd[1]: Created slice kubepods-besteffort-pod23425178_dc15_4b8b_a797_8dc237fc22c8.slice - libcontainer container kubepods-besteffort-pod23425178_dc15_4b8b_a797_8dc237fc22c8.slice. Dec 12 18:37:59.082873 kubelet[1913]: E1212 18:37:59.082841 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:37:59.082873 kubelet[1913]: W1212 18:37:59.082860 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:37:59.082873 kubelet[1913]: E1212 18:37:59.082883 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:37:59.086180 kubelet[1913]: E1212 18:37:59.085835 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:37:59.086180 kubelet[1913]: W1212 18:37:59.085855 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:37:59.086180 kubelet[1913]: E1212 18:37:59.085872 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:37:59.092174 kubelet[1913]: E1212 18:37:59.089317 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:37:59.092174 kubelet[1913]: W1212 18:37:59.089340 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:37:59.092174 kubelet[1913]: E1212 18:37:59.089356 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:37:59.092174 kubelet[1913]: E1212 18:37:59.090085 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:37:59.092174 kubelet[1913]: W1212 18:37:59.090099 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:37:59.092174 kubelet[1913]: E1212 18:37:59.090114 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:37:59.096002 kubelet[1913]: E1212 18:37:59.095942 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:37:59.096002 kubelet[1913]: W1212 18:37:59.095994 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:37:59.096111 kubelet[1913]: E1212 18:37:59.096028 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:37:59.279537 kubelet[1913]: E1212 18:37:59.279394 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:59.280121 containerd[1534]: time="2025-12-12T18:37:59.280081311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8f7p,Uid:2ed82606-f50d-4ea0-9846-de4d3c635f6a,Namespace:kube-system,Attempt:0,}" Dec 12 18:37:59.284835 kubelet[1913]: E1212 18:37:59.284777 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:59.285402 containerd[1534]: time="2025-12-12T18:37:59.285216435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-528kl,Uid:23425178-dc15-4b8b-a797-8dc237fc22c8,Namespace:calico-system,Attempt:0,}" Dec 12 18:37:59.957620 kubelet[1913]: E1212 18:37:59.957582 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:00.259336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294521555.mount: Deactivated successfully. 
Dec 12 18:38:00.269526 containerd[1534]: time="2025-12-12T18:38:00.269469609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:00.271636 containerd[1534]: time="2025-12-12T18:38:00.271595580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:38:00.272668 containerd[1534]: time="2025-12-12T18:38:00.272616298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:00.273733 containerd[1534]: time="2025-12-12T18:38:00.273702568Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:00.274762 containerd[1534]: time="2025-12-12T18:38:00.274704995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 18:38:00.276772 containerd[1534]: time="2025-12-12T18:38:00.276729309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:00.277595 containerd[1534]: time="2025-12-12T18:38:00.277548562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 987.350822ms" Dec 12 18:38:00.278943 containerd[1534]: time="2025-12-12T18:38:00.278906362Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 985.085421ms" Dec 12 18:38:00.306733 containerd[1534]: time="2025-12-12T18:38:00.306518623Z" level=info msg="connecting to shim 6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c" address="unix:///run/containerd/s/170a5b155f2b0a3ad08fc1f1b9a61f6e860fe2aed95f92927241bdfa5df009fc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:00.314711 containerd[1534]: time="2025-12-12T18:38:00.314658893Z" level=info msg="connecting to shim 91c578b85ca0f56dfe4f7ecaf869912bd59318a7a56dbf0412ebec5bfc36a961" address="unix:///run/containerd/s/61ef6eeb692007098407ba369b810c0964640f4b6a30c469faeb34122f3316eb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:00.333398 systemd[1]: Started cri-containerd-6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c.scope - libcontainer container 6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c. Dec 12 18:38:00.337340 systemd[1]: Started cri-containerd-91c578b85ca0f56dfe4f7ecaf869912bd59318a7a56dbf0412ebec5bfc36a961.scope - libcontainer container 91c578b85ca0f56dfe4f7ecaf869912bd59318a7a56dbf0412ebec5bfc36a961. 
Dec 12 18:38:00.371632 containerd[1534]: time="2025-12-12T18:38:00.371570319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8f7p,Uid:2ed82606-f50d-4ea0-9846-de4d3c635f6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c578b85ca0f56dfe4f7ecaf869912bd59318a7a56dbf0412ebec5bfc36a961\"" Dec 12 18:38:00.372801 kubelet[1913]: E1212 18:38:00.372763 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:00.374037 containerd[1534]: time="2025-12-12T18:38:00.373995028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 18:38:00.375287 containerd[1534]: time="2025-12-12T18:38:00.375248046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-528kl,Uid:23425178-dc15-4b8b-a797-8dc237fc22c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\"" Dec 12 18:38:00.375940 kubelet[1913]: E1212 18:38:00.375909 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:00.584492 kubelet[1913]: E1212 18:38:00.584317 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:00.958028 kubelet[1913]: E1212 18:38:00.957918 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:01.518648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323964030.mount: Deactivated successfully. 
Dec 12 18:38:01.859663 containerd[1534]: time="2025-12-12T18:38:01.859509382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:01.860556 containerd[1534]: time="2025-12-12T18:38:01.860517947Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Dec 12 18:38:01.862008 containerd[1534]: time="2025-12-12T18:38:01.861965045Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:01.864344 containerd[1534]: time="2025-12-12T18:38:01.864281691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:01.864950 containerd[1534]: time="2025-12-12T18:38:01.864919579Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.49088166s" Dec 12 18:38:01.864992 containerd[1534]: time="2025-12-12T18:38:01.864952154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 12 18:38:01.866414 containerd[1534]: time="2025-12-12T18:38:01.866381931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:38:01.867602 containerd[1534]: time="2025-12-12T18:38:01.867561022Z" level=info msg="CreateContainer within sandbox \"91c578b85ca0f56dfe4f7ecaf869912bd59318a7a56dbf0412ebec5bfc36a961\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:38:01.878632 containerd[1534]: time="2025-12-12T18:38:01.878574697Z" level=info msg="Container bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:01.883033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585918745.mount: Deactivated successfully. Dec 12 18:38:01.890176 containerd[1534]: time="2025-12-12T18:38:01.889357055Z" level=info msg="CreateContainer within sandbox \"91c578b85ca0f56dfe4f7ecaf869912bd59318a7a56dbf0412ebec5bfc36a961\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe\"" Dec 12 18:38:01.890885 containerd[1534]: time="2025-12-12T18:38:01.890857285Z" level=info msg="StartContainer for \"bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe\"" Dec 12 18:38:01.892906 containerd[1534]: time="2025-12-12T18:38:01.892866505Z" level=info msg="connecting to shim bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe" address="unix:///run/containerd/s/61ef6eeb692007098407ba369b810c0964640f4b6a30c469faeb34122f3316eb" protocol=ttrpc version=3 Dec 12 18:38:01.915348 systemd[1]: Started cri-containerd-bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe.scope - libcontainer container bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe. 
Dec 12 18:38:01.958671 kubelet[1913]: E1212 18:38:01.958612 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:02.016286 containerd[1534]: time="2025-12-12T18:38:02.016235166Z" level=info msg="StartContainer for \"bc6e5a2e6bca26a8bbd2ee711ad721add870f6dc5c7caab0c9b716e14cc80abe\" returns successfully" Dec 12 18:38:02.583519 kubelet[1913]: E1212 18:38:02.583485 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:02.596805 kubelet[1913]: E1212 18:38:02.596768 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:02.691189 kubelet[1913]: E1212 18:38:02.691108 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.691189 kubelet[1913]: W1212 18:38:02.691135 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.691189 kubelet[1913]: E1212 18:38:02.691183 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.691400 kubelet[1913]: E1212 18:38:02.691383 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.691400 kubelet[1913]: W1212 18:38:02.691393 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.691474 kubelet[1913]: E1212 18:38:02.691403 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.691635 kubelet[1913]: E1212 18:38:02.691611 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.691635 kubelet[1913]: W1212 18:38:02.691623 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.691635 kubelet[1913]: E1212 18:38:02.691633 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:02.691893 kubelet[1913]: E1212 18:38:02.691874 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.691893 kubelet[1913]: W1212 18:38:02.691887 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.691982 kubelet[1913]: E1212 18:38:02.691897 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.692107 kubelet[1913]: E1212 18:38:02.692089 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.692107 kubelet[1913]: W1212 18:38:02.692101 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.692201 kubelet[1913]: E1212 18:38:02.692113 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.692359 kubelet[1913]: E1212 18:38:02.692337 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.692359 kubelet[1913]: W1212 18:38:02.692351 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.692359 kubelet[1913]: E1212 18:38:02.692361 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.692609 kubelet[1913]: E1212 18:38:02.692580 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.692609 kubelet[1913]: W1212 18:38:02.692591 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.692609 kubelet[1913]: E1212 18:38:02.692601 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.692821 kubelet[1913]: E1212 18:38:02.692791 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.692821 kubelet[1913]: W1212 18:38:02.692805 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.692821 kubelet[1913]: E1212 18:38:02.692814 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:02.693024 kubelet[1913]: E1212 18:38:02.693003 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.693024 kubelet[1913]: W1212 18:38:02.693015 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.693066 kubelet[1913]: E1212 18:38:02.693026 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.693255 kubelet[1913]: E1212 18:38:02.693226 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.693255 kubelet[1913]: W1212 18:38:02.693238 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.693309 kubelet[1913]: E1212 18:38:02.693259 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.693462 kubelet[1913]: E1212 18:38:02.693442 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.693462 kubelet[1913]: W1212 18:38:02.693453 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.693509 kubelet[1913]: E1212 18:38:02.693463 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.693663 kubelet[1913]: E1212 18:38:02.693644 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.693663 kubelet[1913]: W1212 18:38:02.693655 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.693713 kubelet[1913]: E1212 18:38:02.693665 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.693870 kubelet[1913]: E1212 18:38:02.693849 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.693870 kubelet[1913]: W1212 18:38:02.693861 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.693916 kubelet[1913]: E1212 18:38:02.693870 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:02.694084 kubelet[1913]: E1212 18:38:02.694063 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.694084 kubelet[1913]: W1212 18:38:02.694074 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.694131 kubelet[1913]: E1212 18:38:02.694084 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.694313 kubelet[1913]: E1212 18:38:02.694291 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.694313 kubelet[1913]: W1212 18:38:02.694303 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.694370 kubelet[1913]: E1212 18:38:02.694314 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.694510 kubelet[1913]: E1212 18:38:02.694490 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.694510 kubelet[1913]: W1212 18:38:02.694502 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.694569 kubelet[1913]: E1212 18:38:02.694511 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.694716 kubelet[1913]: E1212 18:38:02.694696 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.694716 kubelet[1913]: W1212 18:38:02.694707 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.694760 kubelet[1913]: E1212 18:38:02.694716 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.694910 kubelet[1913]: E1212 18:38:02.694890 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.694910 kubelet[1913]: W1212 18:38:02.694903 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.694962 kubelet[1913]: E1212 18:38:02.694913 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:02.695103 kubelet[1913]: E1212 18:38:02.695082 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.695103 kubelet[1913]: W1212 18:38:02.695093 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.695148 kubelet[1913]: E1212 18:38:02.695103 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.695319 kubelet[1913]: E1212 18:38:02.695298 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.695319 kubelet[1913]: W1212 18:38:02.695311 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.695379 kubelet[1913]: E1212 18:38:02.695321 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.704672 kubelet[1913]: E1212 18:38:02.704617 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.704672 kubelet[1913]: W1212 18:38:02.704643 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.704672 kubelet[1913]: E1212 18:38:02.704670 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.704942 kubelet[1913]: E1212 18:38:02.704921 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.704942 kubelet[1913]: W1212 18:38:02.704936 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.705027 kubelet[1913]: E1212 18:38:02.704948 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.705269 kubelet[1913]: E1212 18:38:02.705230 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.705269 kubelet[1913]: W1212 18:38:02.705248 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.705269 kubelet[1913]: E1212 18:38:02.705268 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:02.705474 kubelet[1913]: E1212 18:38:02.705463 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.705474 kubelet[1913]: W1212 18:38:02.705472 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.705535 kubelet[1913]: E1212 18:38:02.705485 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.705663 kubelet[1913]: E1212 18:38:02.705641 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.705663 kubelet[1913]: W1212 18:38:02.705651 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.705663 kubelet[1913]: E1212 18:38:02.705663 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.705914 kubelet[1913]: E1212 18:38:02.705894 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.705914 kubelet[1913]: W1212 18:38:02.705906 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.705968 kubelet[1913]: E1212 18:38:02.705919 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.706322 kubelet[1913]: E1212 18:38:02.706283 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.706322 kubelet[1913]: W1212 18:38:02.706300 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.706322 kubelet[1913]: E1212 18:38:02.706320 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.706562 kubelet[1913]: E1212 18:38:02.706537 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.706562 kubelet[1913]: W1212 18:38:02.706549 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.706717 kubelet[1913]: E1212 18:38:02.706564 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:02.706790 kubelet[1913]: E1212 18:38:02.706765 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.706790 kubelet[1913]: W1212 18:38:02.706777 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.706855 kubelet[1913]: E1212 18:38:02.706792 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.707003 kubelet[1913]: E1212 18:38:02.706978 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.707003 kubelet[1913]: W1212 18:38:02.706992 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.707076 kubelet[1913]: E1212 18:38:02.707005 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.707262 kubelet[1913]: E1212 18:38:02.707237 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.707262 kubelet[1913]: W1212 18:38:02.707247 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.707262 kubelet[1913]: E1212 18:38:02.707255 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.707603 kubelet[1913]: E1212 18:38:02.707567 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:02.707603 kubelet[1913]: W1212 18:38:02.707578 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:02.707603 kubelet[1913]: E1212 18:38:02.707586 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:02.959485 kubelet[1913]: E1212 18:38:02.959327 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:03.402366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226097772.mount: Deactivated successfully. 
Dec 12 18:38:03.461644 containerd[1534]: time="2025-12-12T18:38:03.461585481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:03.462359 containerd[1534]: time="2025-12-12T18:38:03.462331801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Dec 12 18:38:03.463397 containerd[1534]: time="2025-12-12T18:38:03.463358650Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:03.465280 containerd[1534]: time="2025-12-12T18:38:03.465242746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:03.465796 containerd[1534]: time="2025-12-12T18:38:03.465751707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.599330049s" Dec 12 18:38:03.465796 containerd[1534]: time="2025-12-12T18:38:03.465785734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:38:03.467529 containerd[1534]: time="2025-12-12T18:38:03.467504594Z" level=info msg="CreateContainer within sandbox \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:38:03.476776 containerd[1534]: time="2025-12-12T18:38:03.476728088Z" level=info msg="Container 3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:03.487843 containerd[1534]: time="2025-12-12T18:38:03.487799591Z" level=info msg="CreateContainer within sandbox \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2\"" Dec 12 18:38:03.488403 containerd[1534]: time="2025-12-12T18:38:03.488371508Z" level=info msg="StartContainer for \"3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2\"" Dec 12 18:38:03.489917 containerd[1534]: time="2025-12-12T18:38:03.489895574Z" level=info msg="connecting to shim 3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2" address="unix:///run/containerd/s/170a5b155f2b0a3ad08fc1f1b9a61f6e860fe2aed95f92927241bdfa5df009fc" protocol=ttrpc version=3 Dec 12 18:38:03.510324 systemd[1]: Started cri-containerd-3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2.scope - libcontainer container 3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2. 
Dec 12 18:38:03.600194 kubelet[1913]: E1212 18:38:03.600142 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:03.600653 kubelet[1913]: E1212 18:38:03.600577 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.600653 kubelet[1913]: W1212 18:38:03.600651 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.600737 kubelet[1913]: E1212 18:38:03.600664 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.600850 kubelet[1913]: E1212 18:38:03.600835 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.600850 kubelet[1913]: W1212 18:38:03.600848 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.600906 kubelet[1913]: E1212 18:38:03.600859 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.601048 kubelet[1913]: E1212 18:38:03.601037 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.601048 kubelet[1913]: W1212 18:38:03.601047 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.601118 kubelet[1913]: E1212 18:38:03.601056 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.601342 kubelet[1913]: E1212 18:38:03.601301 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.601342 kubelet[1913]: W1212 18:38:03.601314 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.601342 kubelet[1913]: E1212 18:38:03.601324 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:03.601539 kubelet[1913]: E1212 18:38:03.601518 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.601539 kubelet[1913]: W1212 18:38:03.601530 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.601539 kubelet[1913]: E1212 18:38:03.601541 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.601728 kubelet[1913]: E1212 18:38:03.601708 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.601728 kubelet[1913]: W1212 18:38:03.601723 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.601795 kubelet[1913]: E1212 18:38:03.601734 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.601973 kubelet[1913]: E1212 18:38:03.601950 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.601973 kubelet[1913]: W1212 18:38:03.601963 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.601973 kubelet[1913]: E1212 18:38:03.601972 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.602188 kubelet[1913]: E1212 18:38:03.602156 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.602188 kubelet[1913]: W1212 18:38:03.602188 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.602239 kubelet[1913]: E1212 18:38:03.602198 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.602664 kubelet[1913]: E1212 18:38:03.602397 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.602664 kubelet[1913]: W1212 18:38:03.602412 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.602664 kubelet[1913]: E1212 18:38:03.602423 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:03.602664 kubelet[1913]: E1212 18:38:03.602603 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.602664 kubelet[1913]: W1212 18:38:03.602612 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.602664 kubelet[1913]: E1212 18:38:03.602621 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.602856 kubelet[1913]: E1212 18:38:03.602840 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.602856 kubelet[1913]: W1212 18:38:03.602854 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.602902 kubelet[1913]: E1212 18:38:03.602864 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.603071 kubelet[1913]: E1212 18:38:03.603040 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.603114 kubelet[1913]: W1212 18:38:03.603076 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.603114 kubelet[1913]: E1212 18:38:03.603086 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.603332 kubelet[1913]: E1212 18:38:03.603313 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.603409 kubelet[1913]: W1212 18:38:03.603326 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.603436 kubelet[1913]: E1212 18:38:03.603412 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.603634 kubelet[1913]: E1212 18:38:03.603618 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.603634 kubelet[1913]: W1212 18:38:03.603630 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.603719 kubelet[1913]: E1212 18:38:03.603639 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:03.603998 kubelet[1913]: E1212 18:38:03.603984 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.603998 kubelet[1913]: W1212 18:38:03.603994 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.604149 kubelet[1913]: E1212 18:38:03.604003 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.604241 kubelet[1913]: E1212 18:38:03.604227 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.604241 kubelet[1913]: W1212 18:38:03.604236 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.604241 kubelet[1913]: E1212 18:38:03.604244 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.604425 kubelet[1913]: E1212 18:38:03.604405 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.604425 kubelet[1913]: W1212 18:38:03.604416 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.604482 kubelet[1913]: E1212 18:38:03.604427 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.604611 kubelet[1913]: E1212 18:38:03.604584 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.604611 kubelet[1913]: W1212 18:38:03.604592 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.604611 kubelet[1913]: E1212 18:38:03.604600 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.604766 kubelet[1913]: E1212 18:38:03.604749 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.604766 kubelet[1913]: W1212 18:38:03.604759 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.604854 kubelet[1913]: E1212 18:38:03.604769 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:03.604973 kubelet[1913]: E1212 18:38:03.604955 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.605011 kubelet[1913]: W1212 18:38:03.604975 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.605011 kubelet[1913]: E1212 18:38:03.604983 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.608796 containerd[1534]: time="2025-12-12T18:38:03.608763509Z" level=info msg="StartContainer for \"3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2\" returns successfully" Dec 12 18:38:03.610626 kubelet[1913]: E1212 18:38:03.610554 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.610626 kubelet[1913]: W1212 18:38:03.610571 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.610626 kubelet[1913]: E1212 18:38:03.610586 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.610790 kubelet[1913]: E1212 18:38:03.610776 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.610823 kubelet[1913]: W1212 18:38:03.610790 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.610823 kubelet[1913]: E1212 18:38:03.610807 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.610999 kubelet[1913]: E1212 18:38:03.610988 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.610999 kubelet[1913]: W1212 18:38:03.610997 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.611066 kubelet[1913]: E1212 18:38:03.611008 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:03.611205 kubelet[1913]: E1212 18:38:03.611193 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.611233 kubelet[1913]: W1212 18:38:03.611204 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.611233 kubelet[1913]: E1212 18:38:03.611219 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.611397 kubelet[1913]: E1212 18:38:03.611385 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.611397 kubelet[1913]: W1212 18:38:03.611395 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.611463 kubelet[1913]: E1212 18:38:03.611407 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.611644 kubelet[1913]: E1212 18:38:03.611632 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.611644 kubelet[1913]: W1212 18:38:03.611641 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.611690 kubelet[1913]: E1212 18:38:03.611650 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.611998 kubelet[1913]: E1212 18:38:03.611983 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.611998 kubelet[1913]: W1212 18:38:03.611993 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.612115 kubelet[1913]: E1212 18:38:03.612018 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.612181 kubelet[1913]: E1212 18:38:03.612147 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.612181 kubelet[1913]: W1212 18:38:03.612168 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.612181 kubelet[1913]: E1212 18:38:03.612176 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:38:03.612356 kubelet[1913]: E1212 18:38:03.612339 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.612356 kubelet[1913]: W1212 18:38:03.612352 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.612458 kubelet[1913]: E1212 18:38:03.612363 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.612536 kubelet[1913]: E1212 18:38:03.612522 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.612536 kubelet[1913]: W1212 18:38:03.612533 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.612600 kubelet[1913]: E1212 18:38:03.612543 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.612757 kubelet[1913]: E1212 18:38:03.612723 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.612757 kubelet[1913]: W1212 18:38:03.612744 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.612757 kubelet[1913]: E1212 18:38:03.612754 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.613089 kubelet[1913]: E1212 18:38:03.613073 1913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:38:03.613089 kubelet[1913]: W1212 18:38:03.613084 1913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:38:03.613181 kubelet[1913]: E1212 18:38:03.613093 1913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:38:03.616414 systemd[1]: cri-containerd-3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2.scope: Deactivated successfully. 
Dec 12 18:38:03.618296 containerd[1534]: time="2025-12-12T18:38:03.618252329Z" level=info msg="received container exit event container_id:\"3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2\" id:\"3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2\" pid:2286 exited_at:{seconds:1765564683 nanos:617892679}" Dec 12 18:38:03.959851 kubelet[1913]: E1212 18:38:03.959812 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:04.378837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ac8483bbab645b1b9ea30b0f41bcf983df6f5ac28ee0047e89b0a93c4d0ddb2-rootfs.mount: Deactivated successfully. Dec 12 18:38:04.584045 kubelet[1913]: E1212 18:38:04.583954 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:04.603806 kubelet[1913]: E1212 18:38:04.603765 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:04.604512 containerd[1534]: time="2025-12-12T18:38:04.604462186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:38:04.799453 kubelet[1913]: I1212 18:38:04.799284 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x8f7p" podStartSLOduration=6.306887071 podStartE2EDuration="7.799267294s" podCreationTimestamp="2025-12-12 18:37:57 +0000 UTC" firstStartedPulling="2025-12-12 18:38:00.373553485 +0000 UTC m=+3.949166367" lastFinishedPulling="2025-12-12 18:38:01.865933708 +0000 UTC m=+5.441546590" observedRunningTime="2025-12-12 18:38:02.604232898 +0000 UTC m=+6.179845780" watchObservedRunningTime="2025-12-12 18:38:04.799267294 +0000 UTC m=+8.374880176" Dec 12 18:38:04.960833 kubelet[1913]: E1212 18:38:04.960793 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:05.961896 kubelet[1913]: E1212 18:38:05.961143 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:06.584273 kubelet[1913]: E1212 18:38:06.584191 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:06.962212 kubelet[1913]: E1212 18:38:06.962046 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:07.962503 kubelet[1913]: E1212 18:38:07.962447 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:08.323814 containerd[1534]: time="2025-12-12T18:38:08.323664715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:08.327040 containerd[1534]: time="2025-12-12T18:38:08.326968407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active 
requests=0, bytes read=70446859" Dec 12 18:38:08.346811 containerd[1534]: time="2025-12-12T18:38:08.346772872Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:08.351897 containerd[1534]: time="2025-12-12T18:38:08.351844257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:08.352611 containerd[1534]: time="2025-12-12T18:38:08.352580443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.748075407s" Dec 12 18:38:08.352611 containerd[1534]: time="2025-12-12T18:38:08.352605802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:38:08.354344 containerd[1534]: time="2025-12-12T18:38:08.354236458Z" level=info msg="CreateContainer within sandbox \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:38:08.366210 containerd[1534]: time="2025-12-12T18:38:08.366149842Z" level=info msg="Container 8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:08.379440 containerd[1534]: time="2025-12-12T18:38:08.379379458Z" level=info msg="CreateContainer within sandbox \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7\"" Dec 12 18:38:08.380290 containerd[1534]: time="2025-12-12T18:38:08.380236021Z" level=info msg="StartContainer for \"8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7\"" Dec 12 18:38:08.382063 containerd[1534]: time="2025-12-12T18:38:08.382027401Z" level=info msg="connecting to shim 8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7" address="unix:///run/containerd/s/170a5b155f2b0a3ad08fc1f1b9a61f6e860fe2aed95f92927241bdfa5df009fc" protocol=ttrpc version=3 Dec 12 18:38:08.407377 systemd[1]: Started cri-containerd-8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7.scope - libcontainer container 8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7. 
Dec 12 18:38:08.556187 containerd[1534]: time="2025-12-12T18:38:08.556113434Z" level=info msg="StartContainer for \"8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7\" returns successfully" Dec 12 18:38:08.584243 kubelet[1913]: E1212 18:38:08.584087 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:08.613957 kubelet[1913]: E1212 18:38:08.613924 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:08.963663 kubelet[1913]: E1212 18:38:08.963538 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:09.614535 kubelet[1913]: E1212 18:38:09.614487 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:09.963732 kubelet[1913]: E1212 18:38:09.963618 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:10.584239 kubelet[1913]: E1212 18:38:10.584186 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:10.650595 containerd[1534]: time="2025-12-12T18:38:10.650546720Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:38:10.653509 systemd[1]: cri-containerd-8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7.scope: Deactivated successfully. Dec 12 18:38:10.653900 systemd[1]: cri-containerd-8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7.scope: Consumed 592ms CPU time, 191.6M memory peak, 171.3M written to disk. Dec 12 18:38:10.654959 containerd[1534]: time="2025-12-12T18:38:10.654930547Z" level=info msg="received container exit event container_id:\"8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7\" id:\"8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7\" pid:2374 exited_at:{seconds:1765564690 nanos:654749158}" Dec 12 18:38:10.668577 kubelet[1913]: I1212 18:38:10.668545 1913 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:38:10.675429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dd97af4698d02fe6370ebec5540dd72634b86207a7afc6d27c192f785beb6a7-rootfs.mount: Deactivated successfully. 
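The containerd error just above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is a config-discovery problem rather than a plugin crash: containerd's CRI layer keeps the node network NotReady until it can load at least one CNI network configuration from that directory, and at this point install-cni has only written calico-kubeconfig, not the network config itself. Purely as an illustration of the file shape containerd is waiting for (hypothetical names and values; the real file on this node is generated by Calico's install-cni container):

```python
# Write a minimal CNI .conflist of the kind containerd loads from /etc/cni/net.d.
# Illustrative only: the network name, subnet, and plugin list are made up and
# do not match the Calico configuration this node eventually receives.
import json
import pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")  # requires root to write
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
```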
Dec 12 18:38:10.964468 kubelet[1913]: E1212 18:38:10.964319 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:11.623334 kubelet[1913]: E1212 18:38:11.623155 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:11.624008 containerd[1534]: time="2025-12-12T18:38:11.623965505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:38:11.964845 kubelet[1913]: E1212 18:38:11.964707 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:12.589448 systemd[1]: Created slice kubepods-besteffort-pod71e9495c_ad4c_46cf_aca9_046d6f589532.slice - libcontainer container kubepods-besteffort-pod71e9495c_ad4c_46cf_aca9_046d6f589532.slice. Dec 12 18:38:12.591576 containerd[1534]: time="2025-12-12T18:38:12.591546672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ddqbs,Uid:71e9495c-ad4c-46cf-aca9-046d6f589532,Namespace:calico-system,Attempt:0,}" Dec 12 18:38:12.686812 containerd[1534]: time="2025-12-12T18:38:12.686759006Z" level=error msg="Failed to destroy network for sandbox \"a8c8d28cbd7ce67ce101fb3ad5e3c35dd33888be534dc11eb08d060bce9e2b95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:38:12.688690 systemd[1]: run-netns-cni\x2d8bb63cf4\x2d8cfc\x2dd8de\x2d9866\x2dd62ac3f1a6b1.mount: Deactivated successfully. Dec 12 18:38:12.689795 containerd[1534]: time="2025-12-12T18:38:12.689732811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ddqbs,Uid:71e9495c-ad4c-46cf-aca9-046d6f589532,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c8d28cbd7ce67ce101fb3ad5e3c35dd33888be534dc11eb08d060bce9e2b95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:38:12.690004 kubelet[1913]: E1212 18:38:12.689959 1913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c8d28cbd7ce67ce101fb3ad5e3c35dd33888be534dc11eb08d060bce9e2b95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:38:12.690084 kubelet[1913]: E1212 18:38:12.690055 1913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c8d28cbd7ce67ce101fb3ad5e3c35dd33888be534dc11eb08d060bce9e2b95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ddqbs" Dec 12 18:38:12.690111 kubelet[1913]: E1212 18:38:12.690087 1913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c8d28cbd7ce67ce101fb3ad5e3c35dd33888be534dc11eb08d060bce9e2b95\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ddqbs" Dec 12 18:38:12.690207 kubelet[1913]: E1212 18:38:12.690145 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ddqbs_calico-system(71e9495c-ad4c-46cf-aca9-046d6f589532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ddqbs_calico-system(71e9495c-ad4c-46cf-aca9-046d6f589532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8c8d28cbd7ce67ce101fb3ad5e3c35dd33888be534dc11eb08d060bce9e2b95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:12.966613 kubelet[1913]: E1212 18:38:12.966253 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:13.967014 kubelet[1913]: E1212 18:38:13.966955 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:14.417081 systemd[1]: Created slice kubepods-besteffort-pode36e64b6_fa37_4cf3_a1ce_699077993940.slice - libcontainer container kubepods-besteffort-pode36e64b6_fa37_4cf3_a1ce_699077993940.slice. Dec 12 18:38:14.480008 kubelet[1913]: I1212 18:38:14.479957 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm22t\" (UniqueName: \"kubernetes.io/projected/e36e64b6-fa37-4cf3-a1ce-699077993940-kube-api-access-lm22t\") pod \"nginx-deployment-7fcdb87857-kx4r4\" (UID: \"e36e64b6-fa37-4cf3-a1ce-699077993940\") " pod="default/nginx-deployment-7fcdb87857-kx4r4" Dec 12 18:38:14.721058 containerd[1534]: time="2025-12-12T18:38:14.720609092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-kx4r4,Uid:e36e64b6-fa37-4cf3-a1ce-699077993940,Namespace:default,Attempt:0,}" Dec 12 18:38:14.861927 containerd[1534]: time="2025-12-12T18:38:14.861877855Z" level=error msg="Failed to destroy network for sandbox \"e5643c27b0fa0617002e122d0af5fa124fcaa24c7cd8afc96ed8c4e6827f0288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:38:14.864068 systemd[1]: run-netns-cni\x2d5789cc61\x2da647\x2d1272\x2d591b\x2d8a79943ccbad.mount: Deactivated successfully. 
Dec 12 18:38:14.967754 kubelet[1913]: E1212 18:38:14.967707 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:15.254472 containerd[1534]: time="2025-12-12T18:38:15.254410445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-kx4r4,Uid:e36e64b6-fa37-4cf3-a1ce-699077993940,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5643c27b0fa0617002e122d0af5fa124fcaa24c7cd8afc96ed8c4e6827f0288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:38:15.254700 kubelet[1913]: E1212 18:38:15.254660 1913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5643c27b0fa0617002e122d0af5fa124fcaa24c7cd8afc96ed8c4e6827f0288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:38:15.254804 kubelet[1913]: E1212 18:38:15.254721 1913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5643c27b0fa0617002e122d0af5fa124fcaa24c7cd8afc96ed8c4e6827f0288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-kx4r4" Dec 12 18:38:15.254804 kubelet[1913]: E1212 18:38:15.254742 1913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5643c27b0fa0617002e122d0af5fa124fcaa24c7cd8afc96ed8c4e6827f0288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-kx4r4" Dec 12 18:38:15.254804 kubelet[1913]: E1212 18:38:15.254778 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-kx4r4_default(e36e64b6-fa37-4cf3-a1ce-699077993940)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-kx4r4_default(e36e64b6-fa37-4cf3-a1ce-699077993940)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5643c27b0fa0617002e122d0af5fa124fcaa24c7cd8afc96ed8c4e6827f0288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-kx4r4" podUID="e36e64b6-fa37-4cf3-a1ce-699077993940" Dec 12 18:38:15.731522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752117734.mount: Deactivated successfully. 
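The \x2d sequences in the systemd unit names above (run-netns-cni\x2d..., ...containerd\x2dmount...) are not corruption; they are systemd's path escaping, where "/" becomes "-" and a literal "-" becomes \x2d. A small helper to decode such unit names back into paths (illustrative only, not a systemd API; systemd ships an equivalent in the systemd-escape tool):

```python
# Decode systemd-escaped mount/netns unit names from the log back into paths.
import re


def unescape_unit(unit: str) -> str:
    name = unit.rsplit(".", 1)[0]        # drop the ".mount" / ".scope" suffix
    path = "/" + name.replace("-", "/")  # plain "-" separates path components
    # \xHH escapes encode literal characters, e.g. \x2d is a real "-".
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)


print(unescape_unit(r"run-netns-cni\x2d5789cc61\x2da647\x2d1272\x2d591b\x2d8a79943ccbad.mount"))
# -> /run/netns/cni-5789cc61-a647-1272-591b-8a79943ccbad
```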
Dec 12 18:38:15.968517 kubelet[1913]: E1212 18:38:15.968448 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:16.216707 containerd[1534]: time="2025-12-12T18:38:16.216565102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:16.217295 containerd[1534]: time="2025-12-12T18:38:16.217273572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:38:16.218409 containerd[1534]: time="2025-12-12T18:38:16.218345677Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:16.220276 containerd[1534]: time="2025-12-12T18:38:16.220236540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:16.220745 containerd[1534]: time="2025-12-12T18:38:16.220714231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.596704889s" Dec 12 18:38:16.220779 containerd[1534]: time="2025-12-12T18:38:16.220742552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:38:16.229869 containerd[1534]: time="2025-12-12T18:38:16.229829633Z" level=info msg="CreateContainer within sandbox \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:38:16.239254 containerd[1534]: time="2025-12-12T18:38:16.239214478Z" level=info msg="Container 791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:16.248809 containerd[1534]: time="2025-12-12T18:38:16.248760311Z" level=info msg="CreateContainer within sandbox \"6982d24afccff523c476d9523be781b6c8471be01a274a9dee05e0816c37736c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd\"" Dec 12 18:38:16.251180 containerd[1534]: time="2025-12-12T18:38:16.249232530Z" level=info msg="StartContainer for \"791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd\"" Dec 12 18:38:16.251180 containerd[1534]: time="2025-12-12T18:38:16.250562562Z" level=info msg="connecting to shim 791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd" address="unix:///run/containerd/s/170a5b155f2b0a3ad08fc1f1b9a61f6e860fe2aed95f92927241bdfa5df009fc" protocol=ttrpc version=3 Dec 12 18:38:16.275284 systemd[1]: Started cri-containerd-791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd.scope - libcontainer container 791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd. 
Dec 12 18:38:16.379994 containerd[1534]: time="2025-12-12T18:38:16.379937167Z" level=info msg="StartContainer for \"791e3106338fb8e08cc76f3589f247e94a789227400a3b2a04c1222c55ba6ebd\" returns successfully" Dec 12 18:38:16.451036 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:38:16.451217 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:38:16.639418 kubelet[1913]: E1212 18:38:16.639322 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:16.653030 kubelet[1913]: I1212 18:38:16.652967 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-528kl" podStartSLOduration=3.808068316 podStartE2EDuration="19.652947451s" podCreationTimestamp="2025-12-12 18:37:57 +0000 UTC" firstStartedPulling="2025-12-12 18:38:00.376400477 +0000 UTC m=+3.952013359" lastFinishedPulling="2025-12-12 18:38:16.221279612 +0000 UTC m=+19.796892494" observedRunningTime="2025-12-12 18:38:16.652598297 +0000 UTC m=+20.228211189" watchObservedRunningTime="2025-12-12 18:38:16.652947451 +0000 UTC m=+20.228560333" Dec 12 18:38:16.957670 kubelet[1913]: E1212 18:38:16.957525 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:16.969126 kubelet[1913]: E1212 18:38:16.969073 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:17.969702 kubelet[1913]: E1212 18:38:17.969596 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:18.072315 systemd-networkd[1439]: vxlan.calico: Link UP Dec 12 18:38:18.072330 systemd-networkd[1439]: vxlan.calico: Gained carrier Dec 12 18:38:18.969991 kubelet[1913]: E1212 18:38:18.969922 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:19.969425 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Dec 12 18:38:19.970343 kubelet[1913]: E1212 18:38:19.970309 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:20.985434 kubelet[1913]: E1212 18:38:20.985382 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:21.986212 kubelet[1913]: E1212 18:38:21.986076 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:22.986530 kubelet[1913]: E1212 18:38:22.986475 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:23.987474 kubelet[1913]: E1212 18:38:23.987415 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:24.987624 kubelet[1913]: E1212 18:38:24.987550 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:25.585144 containerd[1534]: time="2025-12-12T18:38:25.585092435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ddqbs,Uid:71e9495c-ad4c-46cf-aca9-046d6f589532,Namespace:calico-system,Attempt:0,}" Dec 12 18:38:25.673873 
systemd-networkd[1439]: calib90c4b7f5e4: Link UP Dec 12 18:38:25.674787 systemd-networkd[1439]: calib90c4b7f5e4: Gained carrier Dec 12 18:38:25.686270 containerd[1534]: 2025-12-12 18:38:25.617 [INFO][2752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-csi--node--driver--ddqbs-eth0 csi-node-driver- calico-system 71e9495c-ad4c-46cf-aca9-046d6f589532 1047 0 2025-12-12 18:37:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.71 csi-node-driver-ddqbs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib90c4b7f5e4 [] [] }} ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-" Dec 12 18:38:25.686270 containerd[1534]: 2025-12-12 18:38:25.617 [INFO][2752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.686270 containerd[1534]: 2025-12-12 18:38:25.641 [INFO][2766] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" HandleID="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Workload="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.641 [INFO][2766] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" HandleID="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Workload="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4d0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.71", "pod":"csi-node-driver-ddqbs", "timestamp":"2025-12-12 18:38:25.641599513 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.641 [INFO][2766] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.641 [INFO][2766] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.641 [INFO][2766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.647 [INFO][2766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" host="10.0.0.71" Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.650 [INFO][2766] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.71" Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.655 [INFO][2766] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.656 [INFO][2766] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.658 [INFO][2766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:25.686537 containerd[1534]: 2025-12-12 18:38:25.658 [INFO][2766] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" host="10.0.0.71" Dec 12 18:38:25.686966 containerd[1534]: 2025-12-12 18:38:25.659 [INFO][2766] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9 Dec 12 18:38:25.686966 containerd[1534]: 2025-12-12 18:38:25.663 [INFO][2766] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" host="10.0.0.71" Dec 12 18:38:25.686966 containerd[1534]: 2025-12-12 18:38:25.666 [INFO][2766] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.77.65/26] block=192.168.77.64/26 handle="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" host="10.0.0.71" Dec 12 18:38:25.686966 containerd[1534]: 2025-12-12 18:38:25.666 [INFO][2766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.65/26] handle="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" host="10.0.0.71" Dec 12 18:38:25.686966 containerd[1534]: 2025-12-12 18:38:25.666 [INFO][2766] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:38:25.686966 containerd[1534]: 2025-12-12 18:38:25.666 [INFO][2766] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.77.65/26] IPv6=[] ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" HandleID="k8s-pod-network.db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Workload="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.687235 containerd[1534]: 2025-12-12 18:38:25.671 [INFO][2752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-csi--node--driver--ddqbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71e9495c-ad4c-46cf-aca9-046d6f589532", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"csi-node-driver-ddqbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib90c4b7f5e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:38:25.687342 containerd[1534]: 2025-12-12 18:38:25.671 [INFO][2752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.65/32] ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.687342 containerd[1534]: 2025-12-12 18:38:25.671 [INFO][2752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib90c4b7f5e4 ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.687342 containerd[1534]: 2025-12-12 18:38:25.674 [INFO][2752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.687451 containerd[1534]: 2025-12-12 18:38:25.674 [INFO][2752] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" 
WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-csi--node--driver--ddqbs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71e9495c-ad4c-46cf-aca9-046d6f589532", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9", Pod:"csi-node-driver-ddqbs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib90c4b7f5e4", MAC:"3e:a5:bb:77:25:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:38:25.687548 containerd[1534]: 2025-12-12 18:38:25.681 [INFO][2752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" Namespace="calico-system" Pod="csi-node-driver-ddqbs" WorkloadEndpoint="10.0.0.71-k8s-csi--node--driver--ddqbs-eth0" Dec 12 18:38:25.711104 containerd[1534]: time="2025-12-12T18:38:25.711032645Z" level=info msg="connecting to shim db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9" address="unix:///run/containerd/s/fb7bf5c31caec7ad085831aed7a6cb607c436fda63d10c8ee9c7293d823c106e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:25.737400 systemd[1]: Started cri-containerd-db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9.scope - libcontainer container db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9. 
Dec 12 18:38:25.750415 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:38:25.764798 containerd[1534]: time="2025-12-12T18:38:25.764741597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ddqbs,Uid:71e9495c-ad4c-46cf-aca9-046d6f589532,Namespace:calico-system,Attempt:0,} returns sandbox id \"db30219df2e6fee7d5a8ad5c5dc9f9a31439ea915d1ea61c39dd2a3d385657d9\"" Dec 12 18:38:25.766226 containerd[1534]: time="2025-12-12T18:38:25.766197180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:38:25.988530 kubelet[1913]: E1212 18:38:25.988477 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:26.125937 containerd[1534]: time="2025-12-12T18:38:26.125874024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:26.127117 containerd[1534]: time="2025-12-12T18:38:26.127073092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:38:26.127199 containerd[1534]: time="2025-12-12T18:38:26.127126611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:38:26.127367 kubelet[1913]: E1212 18:38:26.127322 1913 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:38:26.127429 kubelet[1913]: E1212 18:38:26.127380 1913 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:38:26.127567 kubelet[1913]: E1212 18:38:26.127519 1913 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddj5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ddqbs_calico-system(71e9495c-ad4c-46cf-aca9-046d6f589532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:26.129460 containerd[1534]: time="2025-12-12T18:38:26.129423589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:38:26.501644 containerd[1534]: time="2025-12-12T18:38:26.501588080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:26.502914 containerd[1534]: time="2025-12-12T18:38:26.502863593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:38:26.502988 containerd[1534]: time="2025-12-12T18:38:26.502909838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:38:26.503131 kubelet[1913]: E1212 18:38:26.503080 1913 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:38:26.503206 kubelet[1913]: E1212 18:38:26.503132 1913 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:38:26.503299 kubelet[1913]: E1212 18:38:26.503251 1913 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddj5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ddqbs_calico-system(71e9495c-ad4c-46cf-aca9-046d6f589532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:26.504489 kubelet[1913]: E1212 18:38:26.504432 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:26.575087 update_engine[1519]: I20251212 18:38:26.575026 1519 update_attempter.cc:509] Updating boot flags... Dec 12 18:38:26.663183 kubelet[1913]: E1212 18:38:26.659032 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:26.988963 kubelet[1913]: E1212 18:38:26.988931 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:27.329488 systemd-networkd[1439]: calib90c4b7f5e4: Gained IPv6LL Dec 12 18:38:27.584927 containerd[1534]: time="2025-12-12T18:38:27.584695995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-kx4r4,Uid:e36e64b6-fa37-4cf3-a1ce-699077993940,Namespace:default,Attempt:0,}" Dec 12 18:38:27.658654 kubelet[1913]: E1212 18:38:27.658596 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:27.672251 systemd-networkd[1439]: cali1a1a78dbd36: Link UP Dec 12 18:38:27.672847 systemd-networkd[1439]: cali1a1a78dbd36: Gained carrier Dec 12 18:38:27.673010 kubelet[1913]: I1212 18:38:27.672980 1913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:38:27.674684 kubelet[1913]: E1212 18:38:27.674384 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:27.685184 containerd[1534]: 2025-12-12 18:38:27.619 [INFO][2847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0 nginx-deployment-7fcdb87857- default e36e64b6-fa37-4cf3-a1ce-699077993940 1166 0 2025-12-12 18:38:14 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.71 nginx-deployment-7fcdb87857-kx4r4 eth0 default [] [] [kns.default ksa.default.default] cali1a1a78dbd36 [] [] }} ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-" Dec 12 18:38:27.685184 containerd[1534]: 2025-12-12 18:38:27.619 [INFO][2847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.685184 containerd[1534]: 2025-12-12 18:38:27.640 [INFO][2863] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" HandleID="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Workload="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.640 [INFO][2863] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" HandleID="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Workload="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.71", "pod":"nginx-deployment-7fcdb87857-kx4r4", "timestamp":"2025-12-12 18:38:27.640537016 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.641 [INFO][2863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.641 [INFO][2863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.641 [INFO][2863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.647 [INFO][2863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" host="10.0.0.71" Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.650 [INFO][2863] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.71" Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.654 [INFO][2863] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.655 [INFO][2863] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.657 [INFO][2863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:27.685418 containerd[1534]: 2025-12-12 18:38:27.657 [INFO][2863] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" host="10.0.0.71" Dec 12 18:38:27.685691 containerd[1534]: 2025-12-12 18:38:27.659 [INFO][2863] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da Dec 12 18:38:27.685691 containerd[1534]: 2025-12-12 18:38:27.662 [INFO][2863] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" host="10.0.0.71" Dec 12 18:38:27.685691 containerd[1534]: 2025-12-12 18:38:27.666 [INFO][2863] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.77.66/26] block=192.168.77.64/26 handle="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" host="10.0.0.71" Dec 12 18:38:27.685691 containerd[1534]: 2025-12-12 18:38:27.666 [INFO][2863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.66/26] handle="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" host="10.0.0.71" Dec 12 18:38:27.685691 containerd[1534]: 2025-12-12 18:38:27.666 [INFO][2863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:38:27.685691 containerd[1534]: 2025-12-12 18:38:27.666 [INFO][2863] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.77.66/26] IPv6=[] ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" HandleID="k8s-pod-network.198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Workload="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.685850 containerd[1534]: 2025-12-12 18:38:27.669 [INFO][2847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e36e64b6-fa37-4cf3-a1ce-699077993940", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-kx4r4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1a1a78dbd36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:38:27.685850 containerd[1534]: 2025-12-12 18:38:27.670 [INFO][2847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.66/32] ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.685941 containerd[1534]: 2025-12-12 18:38:27.670 [INFO][2847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a1a78dbd36 ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.685941 containerd[1534]: 2025-12-12 18:38:27.673 [INFO][2847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.686003 containerd[1534]: 2025-12-12 18:38:27.674 [INFO][2847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" 
WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e36e64b6-fa37-4cf3-a1ce-699077993940", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da", Pod:"nginx-deployment-7fcdb87857-kx4r4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1a1a78dbd36", MAC:"52:5c:d5:df:fd:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:38:27.686070 containerd[1534]: 2025-12-12 18:38:27.680 [INFO][2847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" Namespace="default" Pod="nginx-deployment-7fcdb87857-kx4r4" WorkloadEndpoint="10.0.0.71-k8s-nginx--deployment--7fcdb87857--kx4r4-eth0" Dec 12 18:38:27.715396 containerd[1534]: time="2025-12-12T18:38:27.715348404Z" level=info msg="connecting to shim 198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da" address="unix:///run/containerd/s/0a6f434f206628d5d334825a1be7284ba0682ba15fa7746afb40c9bda7509ae6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:27.753378 systemd[1]: Started cri-containerd-198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da.scope - libcontainer container 198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da. 
Dec 12 18:38:27.768521 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:38:27.807887 containerd[1534]: time="2025-12-12T18:38:27.807568732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-kx4r4,Uid:e36e64b6-fa37-4cf3-a1ce-699077993940,Namespace:default,Attempt:0,} returns sandbox id \"198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da\"" Dec 12 18:38:27.812923 containerd[1534]: time="2025-12-12T18:38:27.812904360Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 12 18:38:27.989963 kubelet[1913]: E1212 18:38:27.989910 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:28.659831 kubelet[1913]: E1212 18:38:28.659800 1913 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:28.991053 kubelet[1913]: E1212 18:38:28.990897 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:29.185664 systemd-networkd[1439]: cali1a1a78dbd36: Gained IPv6LL Dec 12 18:38:29.991740 kubelet[1913]: E1212 18:38:29.991707 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:30.515671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695213468.mount: Deactivated successfully. Dec 12 18:38:30.992675 kubelet[1913]: E1212 18:38:30.992636 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:31.993513 kubelet[1913]: E1212 18:38:31.993440 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:32.122556 containerd[1534]: time="2025-12-12T18:38:32.122488692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:32.123376 containerd[1534]: time="2025-12-12T18:38:32.123338160Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73312336" Dec 12 18:38:32.124721 containerd[1534]: time="2025-12-12T18:38:32.124692478Z" level=info msg="ImageCreate event name:\"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:32.127348 containerd[1534]: time="2025-12-12T18:38:32.127302886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:3db8be616067ff6bd4534d63c0a1427862e285068488ddccf319982871e49aac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:32.128390 containerd[1534]: time="2025-12-12T18:38:32.128341854Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:3db8be616067ff6bd4534d63c0a1427862e285068488ddccf319982871e49aac\", size \"73312214\" in 4.31533686s" Dec 12 18:38:32.128390 containerd[1534]: time="2025-12-12T18:38:32.128385451Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\"" Dec 12 
18:38:32.130616 containerd[1534]: time="2025-12-12T18:38:32.130583244Z" level=info msg="CreateContainer within sandbox \"198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 12 18:38:32.140929 containerd[1534]: time="2025-12-12T18:38:32.140859239Z" level=info msg="Container bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:32.144426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369873793.mount: Deactivated successfully. Dec 12 18:38:32.147321 containerd[1534]: time="2025-12-12T18:38:32.147275236Z" level=info msg="CreateContainer within sandbox \"198f08ed98a72a498f06b61f7fc99a9c8269e81856e72484ba1688f1a81d46da\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70\"" Dec 12 18:38:32.147961 containerd[1534]: time="2025-12-12T18:38:32.147934594Z" level=info msg="StartContainer for \"bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70\"" Dec 12 18:38:32.148691 containerd[1534]: time="2025-12-12T18:38:32.148672108Z" level=info msg="connecting to shim bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70" address="unix:///run/containerd/s/0a6f434f206628d5d334825a1be7284ba0682ba15fa7746afb40c9bda7509ae6" protocol=ttrpc version=3 Dec 12 18:38:32.202368 systemd[1]: Started cri-containerd-bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70.scope - libcontainer container bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70. Dec 12 18:38:32.232442 containerd[1534]: time="2025-12-12T18:38:32.232390081Z" level=info msg="StartContainer for \"bcf691a4afc7f98fc4788a49c9a9dbd2a7d41db23086f7e1d337fe140ecd9e70\" returns successfully" Dec 12 18:38:32.993890 kubelet[1913]: E1212 18:38:32.993840 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:33.994717 kubelet[1913]: E1212 18:38:33.994663 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:34.995033 kubelet[1913]: E1212 18:38:34.994984 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:35.995836 kubelet[1913]: E1212 18:38:35.995752 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:36.361146 kubelet[1913]: I1212 18:38:36.361002 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-kx4r4" podStartSLOduration=18.044083303 podStartE2EDuration="22.360984698s" podCreationTimestamp="2025-12-12 18:38:14 +0000 UTC" firstStartedPulling="2025-12-12 18:38:27.812458843 +0000 UTC m=+31.388071726" lastFinishedPulling="2025-12-12 18:38:32.129360239 +0000 UTC m=+35.704973121" observedRunningTime="2025-12-12 18:38:32.67867586 +0000 UTC m=+36.254288742" watchObservedRunningTime="2025-12-12 18:38:36.360984698 +0000 UTC m=+39.936597580" Dec 12 18:38:36.368193 systemd[1]: Created slice kubepods-besteffort-podd528a28e_2f14_4c3f_a517_d9d32f98e32f.slice - libcontainer container kubepods-besteffort-podd528a28e_2f14_4c3f_a517_d9d32f98e32f.slice. 
Dec 12 18:38:36.402520 kubelet[1913]: I1212 18:38:36.402454 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d528a28e-2f14-4c3f-a517-d9d32f98e32f-data\") pod \"nfs-server-provisioner-0\" (UID: \"d528a28e-2f14-4c3f-a517-d9d32f98e32f\") " pod="default/nfs-server-provisioner-0" Dec 12 18:38:36.402520 kubelet[1913]: I1212 18:38:36.402507 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29q5t\" (UniqueName: \"kubernetes.io/projected/d528a28e-2f14-4c3f-a517-d9d32f98e32f-kube-api-access-29q5t\") pod \"nfs-server-provisioner-0\" (UID: \"d528a28e-2f14-4c3f-a517-d9d32f98e32f\") " pod="default/nfs-server-provisioner-0" Dec 12 18:38:36.672326 containerd[1534]: time="2025-12-12T18:38:36.672271434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d528a28e-2f14-4c3f-a517-d9d32f98e32f,Namespace:default,Attempt:0,}" Dec 12 18:38:36.775816 systemd-networkd[1439]: cali60e51b789ff: Link UP Dec 12 18:38:36.776515 systemd-networkd[1439]: cali60e51b789ff: Gained carrier Dec 12 18:38:36.790787 containerd[1534]: 2025-12-12 18:38:36.708 [INFO][3074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d528a28e-2f14-4c3f-a517-d9d32f98e32f 1322 0 2025-12-12 18:38:36 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.71 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-" Dec 12 18:38:36.790787 containerd[1534]: 2025-12-12 18:38:36.708 [INFO][3074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.790787 containerd[1534]: 2025-12-12 18:38:36.738 [INFO][3089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" HandleID="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Workload="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.738 [INFO][3089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" HandleID="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" 
Workload="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.71", "pod":"nfs-server-provisioner-0", "timestamp":"2025-12-12 18:38:36.738664452 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.738 [INFO][3089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.739 [INFO][3089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.740 [INFO][3089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71' Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.745 [INFO][3089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" host="10.0.0.71" Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.750 [INFO][3089] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.71" Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.754 [INFO][3089] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.755 [INFO][3089] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.757 [INFO][3089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71" Dec 12 18:38:36.791055 containerd[1534]: 2025-12-12 18:38:36.757 [INFO][3089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" host="10.0.0.71" Dec 12 18:38:36.791373 containerd[1534]: 2025-12-12 18:38:36.759 [INFO][3089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2 Dec 12 18:38:36.791373 containerd[1534]: 2025-12-12 18:38:36.763 [INFO][3089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" host="10.0.0.71" Dec 12 18:38:36.791373 containerd[1534]: 2025-12-12 18:38:36.770 [INFO][3089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.77.67/26] block=192.168.77.64/26 handle="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" host="10.0.0.71" Dec 12 18:38:36.791373 containerd[1534]: 2025-12-12 18:38:36.770 [INFO][3089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.67/26] handle="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" host="10.0.0.71" Dec 12 18:38:36.791373 containerd[1534]: 2025-12-12 18:38:36.770 [INFO][3089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:38:36.791373 containerd[1534]: 2025-12-12 18:38:36.770 [INFO][3089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.77.67/26] IPv6=[] ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" HandleID="k8s-pod-network.9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Workload="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.791536 containerd[1534]: 2025-12-12 18:38:36.773 [INFO][3074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d528a28e-2f14-4c3f-a517-d9d32f98e32f", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:38:36.791536 containerd[1534]: 2025-12-12 18:38:36.774 [INFO][3074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.67/32] ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.791536 containerd[1534]: 2025-12-12 18:38:36.774 [INFO][3074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.791536 containerd[1534]: 2025-12-12 18:38:36.776 [INFO][3074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.791808 containerd[1534]: 2025-12-12 18:38:36.776 [INFO][3074] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d528a28e-2f14-4c3f-a517-d9d32f98e32f", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3e:ee:61:33:0b:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:38:36.791808 containerd[1534]: 2025-12-12 18:38:36.786 [INFO][3074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.71-k8s-nfs--server--provisioner--0-eth0" Dec 12 18:38:36.883761 containerd[1534]: time="2025-12-12T18:38:36.883708638Z" level=info msg="connecting to shim 9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2" address="unix:///run/containerd/s/5f66cbc74599670f26f89cf7016673c2094fa96ac652c98376de6ae0f2ef9c72" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:36.914341 systemd[1]: Started cri-containerd-9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2.scope - libcontainer container 9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2. 
Dec 12 18:38:36.927289 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:38:36.957041 kubelet[1913]: E1212 18:38:36.956993 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:36.958332 containerd[1534]: time="2025-12-12T18:38:36.958291495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d528a28e-2f14-4c3f-a517-d9d32f98e32f,Namespace:default,Attempt:0,} returns sandbox id \"9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2\"" Dec 12 18:38:36.959795 containerd[1534]: time="2025-12-12T18:38:36.959769610Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 12 18:38:36.996542 kubelet[1913]: E1212 18:38:36.996497 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:37.997314 kubelet[1913]: E1212 18:38:37.997253 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:38.339070 systemd-networkd[1439]: cali60e51b789ff: Gained IPv6LL Dec 12 18:38:38.925234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235383298.mount: Deactivated successfully. Dec 12 18:38:38.997671 kubelet[1913]: E1212 18:38:38.997628 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:39.999207 kubelet[1913]: E1212 18:38:39.998083 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:40.998329 kubelet[1913]: E1212 18:38:40.998257 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:41.051804 containerd[1534]: time="2025-12-12T18:38:41.051733568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:41.052615 containerd[1534]: time="2025-12-12T18:38:41.052533026Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 12 18:38:41.053772 containerd[1534]: time="2025-12-12T18:38:41.053742627Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:41.059214 containerd[1534]: time="2025-12-12T18:38:41.059018429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:41.060207 containerd[1534]: time="2025-12-12T18:38:41.060128014Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.100328044s" Dec 12 18:38:41.060270 containerd[1534]: time="2025-12-12T18:38:41.060209303Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 12 18:38:41.061136 containerd[1534]: time="2025-12-12T18:38:41.061093065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:38:41.062251 containerd[1534]: time="2025-12-12T18:38:41.062219884Z" level=info msg="CreateContainer within sandbox \"9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 12 18:38:41.073245 containerd[1534]: time="2025-12-12T18:38:41.073203795Z" level=info msg="Container 23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:41.087043 containerd[1534]: time="2025-12-12T18:38:41.086991613Z" level=info msg="CreateContainer within sandbox \"9a9d31a856299aa7cdcc01e6cf98630f3c640e828694419afa9a210869d16fd2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14\"" Dec 12 18:38:41.087472 containerd[1534]: time="2025-12-12T18:38:41.087444220Z" level=info msg="StartContainer for \"23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14\"" Dec 12 18:38:41.088330 containerd[1534]: time="2025-12-12T18:38:41.088308144Z" level=info msg="connecting to shim 23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14" address="unix:///run/containerd/s/5f66cbc74599670f26f89cf7016673c2094fa96ac652c98376de6ae0f2ef9c72" protocol=ttrpc version=3 Dec 12 18:38:41.108335 systemd[1]: Started cri-containerd-23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14.scope - libcontainer container 23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14. 
Dec 12 18:38:41.141227 containerd[1534]: time="2025-12-12T18:38:41.141195104Z" level=info msg="StartContainer for \"23740236c1e325ff2ea9e1dc0cd2bb55a45e927c0d4032be82fd51de14b7ff14\" returns successfully" Dec 12 18:38:41.385546 containerd[1534]: time="2025-12-12T18:38:41.385409153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:41.406787 containerd[1534]: time="2025-12-12T18:38:41.406734585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:38:41.406876 containerd[1534]: time="2025-12-12T18:38:41.406799462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:38:41.406984 kubelet[1913]: E1212 18:38:41.406941 1913 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:38:41.407411 kubelet[1913]: E1212 18:38:41.406994 1913 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:38:41.407411 kubelet[1913]: E1212 18:38:41.407131 1913 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddj5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ddqbs_calico-system(71e9495c-ad4c-46cf-aca9-046d6f589532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:41.408926 containerd[1534]: time="2025-12-12T18:38:41.408902314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:38:41.699188 kubelet[1913]: I1212 18:38:41.699089 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.597670173 podStartE2EDuration="5.699071425s" podCreationTimestamp="2025-12-12 18:38:36 +0000 UTC" firstStartedPulling="2025-12-12 18:38:36.959544465 +0000 UTC m=+40.535157347" lastFinishedPulling="2025-12-12 18:38:41.060945697 +0000 UTC m=+44.636558599" observedRunningTime="2025-12-12 18:38:41.699025525 +0000 UTC m=+45.274638417" watchObservedRunningTime="2025-12-12 18:38:41.699071425 +0000 UTC m=+45.274684297" Dec 12 18:38:41.766725 containerd[1534]: time="2025-12-12T18:38:41.766657239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:41.768023 containerd[1534]: time="2025-12-12T18:38:41.767946676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:38:41.768250 containerd[1534]: time="2025-12-12T18:38:41.768026744Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:38:41.768290 kubelet[1913]: E1212 18:38:41.768217 1913 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:38:41.768290 kubelet[1913]: E1212 18:38:41.768276 1913 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:38:41.768461 kubelet[1913]: E1212 18:38:41.768410 1913 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddj5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ddqbs_calico-system(71e9495c-ad4c-46cf-aca9-046d6f589532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:41.769703 kubelet[1913]: E1212 18:38:41.769643 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532" Dec 12 18:38:41.998711 kubelet[1913]: E1212 18:38:41.998562 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:42.998820 kubelet[1913]: E1212 18:38:42.998734 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:43.999493 kubelet[1913]: E1212 18:38:43.999432 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:44.999873 kubelet[1913]: E1212 18:38:44.999809 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:46.000685 kubelet[1913]: E1212 18:38:46.000620 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:47.001612 kubelet[1913]: E1212 18:38:47.001557 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:48.002365 kubelet[1913]: E1212 18:38:48.002320 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:49.003432 kubelet[1913]: E1212 18:38:49.003369 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:50.004067 kubelet[1913]: E1212 18:38:50.003987 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:51.004514 kubelet[1913]: E1212 18:38:51.004463 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 12 18:38:51.256447 systemd[1]: Created slice kubepods-besteffort-pod63a088ab_36b7_4093_a120_2061a3e029b8.slice - libcontainer container kubepods-besteffort-pod63a088ab_36b7_4093_a120_2061a3e029b8.slice. 
Dec 12 18:38:51.292547 kubelet[1913]: I1212 18:38:51.292473 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54a4a513-6ac9-4888-950c-edce68c70441\" (UniqueName: \"kubernetes.io/nfs/63a088ab-36b7-4093-a120-2061a3e029b8-pvc-54a4a513-6ac9-4888-950c-edce68c70441\") pod \"test-pod-1\" (UID: \"63a088ab-36b7-4093-a120-2061a3e029b8\") " pod="default/test-pod-1"
Dec 12 18:38:51.292547 kubelet[1913]: I1212 18:38:51.292532 1913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kq9w\" (UniqueName: \"kubernetes.io/projected/63a088ab-36b7-4093-a120-2061a3e029b8-kube-api-access-2kq9w\") pod \"test-pod-1\" (UID: \"63a088ab-36b7-4093-a120-2061a3e029b8\") " pod="default/test-pod-1"
Dec 12 18:38:51.431209 kernel: netfs: FS-Cache loaded
Dec 12 18:38:51.499475 kernel: RPC: Registered named UNIX socket transport module.
Dec 12 18:38:51.499618 kernel: RPC: Registered udp transport module.
Dec 12 18:38:51.499657 kernel: RPC: Registered tcp transport module.
Dec 12 18:38:51.500387 kernel: RPC: Registered tcp-with-tls transport module.
Dec 12 18:38:51.501392 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 12 18:38:51.748755 kernel: NFS: Registering the id_resolver key type
Dec 12 18:38:51.748911 kernel: Key type id_resolver registered
Dec 12 18:38:51.748935 kernel: Key type id_legacy registered
Dec 12 18:38:51.772355 nfsidmap[3281]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Dec 12 18:38:51.772849 nfsidmap[3281]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 12 18:38:51.777011 nfsidmap[3284]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Dec 12 18:38:51.777204 nfsidmap[3284]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 12 18:38:51.783763 nfsrahead[3288]: setting /var/lib/kubelet/pods/63a088ab-36b7-4093-a120-2061a3e029b8/volumes/kubernetes.io~nfs/pvc-54a4a513-6ac9-4888-950c-edce68c70441 readahead to 128
Dec 12 18:38:51.860220 containerd[1534]: time="2025-12-12T18:38:51.859653837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:63a088ab-36b7-4093-a120-2061a3e029b8,Namespace:default,Attempt:0,}"
Dec 12 18:38:51.958376 systemd-networkd[1439]: cali5ec59c6bf6e: Link UP
Dec 12 18:38:51.958947 systemd-networkd[1439]: cali5ec59c6bf6e: Gained carrier
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.896 [INFO][3290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.71-k8s-test--pod--1-eth0 default 63a088ab-36b7-4093-a120-2061a3e029b8 1416 0 2025-12-12 18:38:36 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.71 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.896 [INFO][3290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.922 [INFO][3303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" HandleID="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Workload="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.922 [INFO][3303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" HandleID="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Workload="10.0.0.71-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.71", "pod":"test-pod-1", "timestamp":"2025-12-12 18:38:51.922087741 +0000 UTC"}, Hostname:"10.0.0.71", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.922 [INFO][3303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.922 [INFO][3303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.922 [INFO][3303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.71'
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.928 [INFO][3303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.933 [INFO][3303] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.937 [INFO][3303] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.938 [INFO][3303] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.940 [INFO][3303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.940 [INFO][3303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.941 [INFO][3303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.945 [INFO][3303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.952 [INFO][3303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.77.68/26] block=192.168.77.64/26 handle="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.953 [INFO][3303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.68/26] handle="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" host="10.0.0.71"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.953 [INFO][3303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.953 [INFO][3303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.77.68/26] IPv6=[] ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" HandleID="k8s-pod-network.22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Workload="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:51.973439 containerd[1534]: 2025-12-12 18:38:51.956 [INFO][3290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"63a088ab-36b7-4093-a120-2061a3e029b8", ResourceVersion:"1416", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:38:51.974262 containerd[1534]: 2025-12-12 18:38:51.956 [INFO][3290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.68/32] ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:51.974262 containerd[1534]: 2025-12-12 18:38:51.956 [INFO][3290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:51.974262 containerd[1534]: 2025-12-12 18:38:51.960 [INFO][3290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:51.974262 containerd[1534]: 2025-12-12 18:38:51.960 [INFO][3290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.71-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"63a088ab-36b7-4093-a120-2061a3e029b8", ResourceVersion:"1416", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.71", ContainerID:"22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:fb:bc:b1:d9:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 18:38:51.974262 containerd[1534]: 2025-12-12 18:38:51.966 [INFO][3290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.71-k8s-test--pod--1-eth0"
Dec 12 18:38:52.004924 kubelet[1913]: E1212 18:38:52.004822 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:52.085825 containerd[1534]: time="2025-12-12T18:38:52.085781528Z" level=info msg="connecting to shim 22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912" address="unix:///run/containerd/s/55ae632712bc20262b60cf3a4564bfe0c6f17a53fb9366c3201f63df5b8841cf" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:38:52.111308 systemd[1]: Started cri-containerd-22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912.scope - libcontainer container 22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912.
Dec 12 18:38:52.123816 systemd-resolved[1397]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 12 18:38:52.152175 containerd[1534]: time="2025-12-12T18:38:52.152123635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:63a088ab-36b7-4093-a120-2061a3e029b8,Namespace:default,Attempt:0,} returns sandbox id \"22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912\""
Dec 12 18:38:52.153063 containerd[1534]: time="2025-12-12T18:38:52.153044667Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 12 18:38:52.535380 containerd[1534]: time="2025-12-12T18:38:52.535329493Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:52.536133 containerd[1534]: time="2025-12-12T18:38:52.536087961Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 12 18:38:52.538543 containerd[1534]: time="2025-12-12T18:38:52.538504547Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:3db8be616067ff6bd4534d63c0a1427862e285068488ddccf319982871e49aac\", size \"73312214\" in 385.425824ms"
Dec 12 18:38:52.538543 containerd[1534]: time="2025-12-12T18:38:52.538535747Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:22a868706770293edead78aaec092d4290435fc539093fbdbe8deb2c3310eeeb\""
Dec 12 18:38:52.540348 containerd[1534]: time="2025-12-12T18:38:52.540320110Z" level=info msg="CreateContainer within sandbox \"22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 12 18:38:52.548809 containerd[1534]: time="2025-12-12T18:38:52.548760363Z" level=info msg="Container 452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:38:52.553153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774820054.mount: Deactivated successfully.
Dec 12 18:38:52.559449 containerd[1534]: time="2025-12-12T18:38:52.559407235Z" level=info msg="CreateContainer within sandbox \"22707c0e7a99ff8e9d341bb2742c63797448de59beaea75630717fa2b02d3912\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210\""
Dec 12 18:38:52.560024 containerd[1534]: time="2025-12-12T18:38:52.559977448Z" level=info msg="StartContainer for \"452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210\""
Dec 12 18:38:52.561284 containerd[1534]: time="2025-12-12T18:38:52.561255661Z" level=info msg="connecting to shim 452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210" address="unix:///run/containerd/s/55ae632712bc20262b60cf3a4564bfe0c6f17a53fb9366c3201f63df5b8841cf" protocol=ttrpc version=3
Dec 12 18:38:52.588410 systemd[1]: Started cri-containerd-452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210.scope - libcontainer container 452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210.
Dec 12 18:38:52.710057 containerd[1534]: time="2025-12-12T18:38:52.710018598Z" level=info msg="StartContainer for \"452bbe2e501b433f9f0eae01c04d87d1c3b14afde95671a76b9ae40f71d4d210\" returns successfully"
Dec 12 18:38:53.005812 kubelet[1913]: E1212 18:38:53.005736 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:53.441403 systemd-networkd[1439]: cali5ec59c6bf6e: Gained IPv6LL
Dec 12 18:38:53.585458 kubelet[1913]: E1212 18:38:53.585408 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ddqbs" podUID="71e9495c-ad4c-46cf-aca9-046d6f589532"
Dec 12 18:38:53.721608 kubelet[1913]: I1212 18:38:53.721448 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.335047814 podStartE2EDuration="17.721430921s" podCreationTimestamp="2025-12-12 18:38:36 +0000 UTC" firstStartedPulling="2025-12-12 18:38:52.152802658 +0000 UTC m=+55.728415540" lastFinishedPulling="2025-12-12 18:38:52.539185765 +0000 UTC m=+56.114798647" observedRunningTime="2025-12-12 18:38:53.721277023 +0000 UTC m=+57.296889905" watchObservedRunningTime="2025-12-12 18:38:53.721430921 +0000 UTC m=+57.297043793"
Dec 12 18:38:54.006068 kubelet[1913]: E1212 18:38:54.005926 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:55.006822 kubelet[1913]: E1212 18:38:55.006741 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:56.006993 kubelet[1913]: E1212 18:38:56.006916 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:56.956782 kubelet[1913]: E1212 18:38:56.956727 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:57.008035 kubelet[1913]: E1212 18:38:57.007998 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:58.008808 kubelet[1913]: E1212 18:38:58.008750 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 12 18:38:59.009576 kubelet[1913]: E1212 18:38:59.009495 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"