Jul 7 06:15:18.796601 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:15:18.796637 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:15:18.796646 kernel: BIOS-provided physical RAM map:
Jul 7 06:15:18.796653 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 06:15:18.796661 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 06:15:18.796668 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 06:15:18.796676 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 7 06:15:18.796685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 7 06:15:18.796693 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 06:15:18.796700 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 06:15:18.796708 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:15:18.796716 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 06:15:18.796722 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:15:18.796729 kernel: NX (Execute Disable) protection: active
Jul 7 06:15:18.796739 kernel: APIC: Static calls initialized
Jul 7 06:15:18.796746 kernel: SMBIOS 2.8 present.
Jul 7 06:15:18.796840 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 7 06:15:18.796847 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:15:18.796854 kernel: Hypervisor detected: KVM
Jul 7 06:15:18.796861 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:15:18.796868 kernel: kvm-clock: using sched offset of 3260708010 cycles
Jul 7 06:15:18.796876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:15:18.796883 kernel: tsc: Detected 2794.746 MHz processor
Jul 7 06:15:18.796891 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:15:18.796901 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:15:18.796909 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 7 06:15:18.796916 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 06:15:18.796923 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:15:18.796930 kernel: Using GB pages for direct mapping
Jul 7 06:15:18.796938 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:15:18.796945 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 7 06:15:18.796952 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.796962 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.796969 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.796976 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 7 06:15:18.796983 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.796990 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.796997 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.797004 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:15:18.797012 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 7 06:15:18.797031 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 7 06:15:18.797038 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 7 06:15:18.797046 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 7 06:15:18.797053 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 7 06:15:18.797060 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 7 06:15:18.797068 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 7 06:15:18.797077 kernel: No NUMA configuration found
Jul 7 06:15:18.797085 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 7 06:15:18.797092 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 7 06:15:18.797100 kernel: Zone ranges:
Jul 7 06:15:18.797107 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:15:18.797115 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 7 06:15:18.797122 kernel: Normal empty
Jul 7 06:15:18.797129 kernel: Device empty
Jul 7 06:15:18.797137 kernel: Movable zone start for each node
Jul 7 06:15:18.797144 kernel: Early memory node ranges
Jul 7 06:15:18.797153 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 06:15:18.797160 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 7 06:15:18.797168 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 7 06:15:18.797175 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:15:18.797183 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:15:18.797190 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 7 06:15:18.797197 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:15:18.797205 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:15:18.797212 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:15:18.797222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:15:18.797229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:15:18.797236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:15:18.797244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:15:18.797251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:15:18.797259 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:15:18.797266 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:15:18.797273 kernel: TSC deadline timer available
Jul 7 06:15:18.797281 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:15:18.797290 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:15:18.797297 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:15:18.797304 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:15:18.797311 kernel: CPU topo: Num. cores per package: 4
Jul 7 06:15:18.797319 kernel: CPU topo: Num. threads per package: 4
Jul 7 06:15:18.797326 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 06:15:18.797333 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:15:18.797340 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:15:18.797348 kernel: kvm-guest: setup PV sched yield
Jul 7 06:15:18.797355 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 06:15:18.797364 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:15:18.797372 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:15:18.797379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 06:15:18.797387 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 06:15:18.797394 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 06:15:18.797401 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 06:15:18.797408 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:15:18.797416 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:15:18.797424 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:15:18.797434 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:15:18.797441 kernel: random: crng init done
Jul 7 06:15:18.797449 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:15:18.797456 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:15:18.797463 kernel: Fallback order for Node 0: 0
Jul 7 06:15:18.797471 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 7 06:15:18.797478 kernel: Policy zone: DMA32
Jul 7 06:15:18.797485 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:15:18.797495 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:15:18.797502 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:15:18.797510 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:15:18.797517 kernel: Dynamic Preempt: voluntary
Jul 7 06:15:18.797524 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:15:18.797532 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:15:18.797540 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:15:18.797547 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:15:18.797555 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:15:18.797564 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:15:18.797571 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:15:18.797579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:15:18.797586 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:15:18.797595 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:15:18.797606 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:15:18.797617 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 06:15:18.797627 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:15:18.797643 kernel: Console: colour VGA+ 80x25
Jul 7 06:15:18.797650 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:15:18.797658 kernel: ACPI: Core revision 20240827
Jul 7 06:15:18.797666 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:15:18.797675 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:15:18.797683 kernel: x2apic enabled
Jul 7 06:15:18.797690 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:15:18.797698 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:15:18.797706 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:15:18.797716 kernel: kvm-guest: setup PV IPIs
Jul 7 06:15:18.797723 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:15:18.797731 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 06:15:18.797739 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 7 06:15:18.797747 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:15:18.797769 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:15:18.797777 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:15:18.797785 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:15:18.797793 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:15:18.797803 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:15:18.797810 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 06:15:18.797818 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 06:15:18.797826 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:15:18.797834 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:15:18.797842 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:15:18.797850 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:15:18.797858 kernel: x86/bugs: return thunk changed
Jul 7 06:15:18.797867 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:15:18.797875 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:15:18.797883 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:15:18.797890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:15:18.797898 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:15:18.797906 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 06:15:18.797913 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:15:18.797921 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:15:18.797929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:15:18.797938 kernel: landlock: Up and running.
Jul 7 06:15:18.797946 kernel: SELinux: Initializing.
Jul 7 06:15:18.797953 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:15:18.797961 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:15:18.797969 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 06:15:18.797977 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:15:18.797984 kernel: ... version: 0
Jul 7 06:15:18.797992 kernel: ... bit width: 48
Jul 7 06:15:18.797999 kernel: ... generic registers: 6
Jul 7 06:15:18.798009 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:15:18.798016 kernel: ... max period: 00007fffffffffff
Jul 7 06:15:18.798031 kernel: ... fixed-purpose events: 0
Jul 7 06:15:18.798038 kernel: ... event mask: 000000000000003f
Jul 7 06:15:18.798046 kernel: signal: max sigframe size: 1776
Jul 7 06:15:18.798054 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:15:18.798061 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:15:18.798069 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:15:18.798077 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:15:18.798087 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:15:18.798094 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 06:15:18.798102 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:15:18.798110 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 7 06:15:18.798118 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 136904K reserved, 0K cma-reserved)
Jul 7 06:15:18.798126 kernel: devtmpfs: initialized
Jul 7 06:15:18.798133 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:15:18.798141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:15:18.798149 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:15:18.798158 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:15:18.798166 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:15:18.798174 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:15:18.798181 kernel: audit: type=2000 audit(1751868915.941:1): state=initialized audit_enabled=0 res=1
Jul 7 06:15:18.798189 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:15:18.798197 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:15:18.798204 kernel: cpuidle: using governor menu
Jul 7 06:15:18.798212 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:15:18.798219 kernel: dca service started, version 1.12.1
Jul 7 06:15:18.798229 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 06:15:18.798237 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 06:15:18.798244 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:15:18.798252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:15:18.798260 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:15:18.798268 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:15:18.798275 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:15:18.798283 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:15:18.798290 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:15:18.798300 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:15:18.798308 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:15:18.798315 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:15:18.798323 kernel: ACPI: Interpreter enabled
Jul 7 06:15:18.798330 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 06:15:18.798338 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:15:18.798346 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:15:18.798353 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:15:18.798361 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:15:18.798370 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:15:18.798560 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:15:18.798694 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:15:18.798833 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:15:18.798844 kernel: PCI host bridge to bus 0000:00
Jul 7 06:15:18.798987 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:15:18.799104 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:15:18.799224 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:15:18.799330 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 7 06:15:18.799434 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 06:15:18.799538 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 7 06:15:18.799656 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:15:18.799806 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:15:18.799935 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:15:18.800060 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 7 06:15:18.800175 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 7 06:15:18.800288 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 7 06:15:18.800401 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:15:18.800578 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 06:15:18.800747 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 7 06:15:18.800892 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 7 06:15:18.801006 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 7 06:15:18.801139 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:15:18.801255 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 06:15:18.801369 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 7 06:15:18.801483 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 7 06:15:18.801609 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:15:18.801878 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 7 06:15:18.802066 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 7 06:15:18.802222 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 7 06:15:18.802352 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 7 06:15:18.802477 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:15:18.802595 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:15:18.802734 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:15:18.802877 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 7 06:15:18.802991 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 7 06:15:18.803178 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:15:18.803326 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 06:15:18.803338 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:15:18.803346 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:15:18.803354 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:15:18.803370 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:15:18.803378 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:15:18.803386 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:15:18.803394 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:15:18.803401 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:15:18.803409 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:15:18.803417 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:15:18.803424 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:15:18.803432 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:15:18.803442 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:15:18.803449 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:15:18.803457 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:15:18.803465 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:15:18.803472 kernel: iommu: Default domain type: Translated
Jul 7 06:15:18.803480 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:15:18.803487 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:15:18.803495 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:15:18.803503 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 06:15:18.803513 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 7 06:15:18.803643 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:15:18.803794 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:15:18.803913 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:15:18.803924 kernel: vgaarb: loaded
Jul 7 06:15:18.803932 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:15:18.803940 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:15:18.803948 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:15:18.803959 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:15:18.803967 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:15:18.803975 kernel: pnp: PnP ACPI init
Jul 7 06:15:18.804109 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 06:15:18.804121 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 06:15:18.804129 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:15:18.804137 kernel: NET: Registered PF_INET protocol family
Jul 7 06:15:18.804145 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:15:18.804155 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:15:18.804163 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:15:18.804171 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:15:18.804179 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:15:18.804187 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:15:18.804194 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:15:18.804202 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:15:18.804210 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:15:18.804218 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:15:18.804327 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:15:18.804432 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:15:18.804536 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:15:18.804652 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 7 06:15:18.804798 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 06:15:18.804937 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 7 06:15:18.804948 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:15:18.804956 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 7 06:15:18.804968 kernel: Initialise system trusted keyrings
Jul 7 06:15:18.804976 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:15:18.804983 kernel: Key type asymmetric registered
Jul 7 06:15:18.804991 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:15:18.804999 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:15:18.805006 kernel: io scheduler mq-deadline registered
Jul 7 06:15:18.805014 kernel: io scheduler kyber registered
Jul 7 06:15:18.805030 kernel: io scheduler bfq registered
Jul 7 06:15:18.805038 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:15:18.805051 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:15:18.805059 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:15:18.805066 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 06:15:18.805074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:15:18.805082 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:15:18.805090 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:15:18.805098 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:15:18.805105 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:15:18.805236 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 06:15:18.805349 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 06:15:18.805360 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:15:18.805466 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T06:15:18 UTC (1751868918)
Jul 7 06:15:18.805572 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 06:15:18.805583 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:15:18.805591 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:15:18.805601 kernel: Segment Routing with IPv6
Jul 7 06:15:18.805612 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:15:18.805626 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:15:18.805636 kernel: Key type dns_resolver registered
Jul 7 06:15:18.805646 kernel: IPI shorthand broadcast: enabled
Jul 7 06:15:18.805654 kernel: sched_clock: Marking stable (2999001660, 107811759)->(3122661566, -15848147)
Jul 7 06:15:18.805662 kernel: registered taskstats version 1
Jul 7 06:15:18.805670 kernel: Loading compiled-in X.509 certificates
Jul 7 06:15:18.805680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:15:18.805691 kernel: Demotion targets for Node 0: null
Jul 7 06:15:18.805701 kernel: Key type .fscrypt registered
Jul 7 06:15:18.805716 kernel: Key type fscrypt-provisioning registered
Jul 7 06:15:18.805729 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:15:18.805740 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:15:18.805747 kernel: ima: No architecture policies found
Jul 7 06:15:18.805771 kernel: clk: Disabling unused clocks
Jul 7 06:15:18.805779 kernel: Warning: unable to open an initial console.
Jul 7 06:15:18.805787 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:15:18.805795 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:15:18.805802 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:15:18.805813 kernel: Run /init as init process
Jul 7 06:15:18.805821 kernel: with arguments:
Jul 7 06:15:18.805828 kernel: /init
Jul 7 06:15:18.805836 kernel: with environment:
Jul 7 06:15:18.805843 kernel: HOME=/
Jul 7 06:15:18.805851 kernel: TERM=linux
Jul 7 06:15:18.805858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:15:18.805867 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:15:18.805880 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:15:18.805899 systemd[1]: Detected virtualization kvm.
Jul 7 06:15:18.805908 systemd[1]: Detected architecture x86-64.
Jul 7 06:15:18.805916 systemd[1]: Running in initrd.
Jul 7 06:15:18.805924 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:15:18.805933 systemd[1]: Hostname set to .
Jul 7 06:15:18.805943 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:15:18.805951 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:15:18.805960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:18.805969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:18.805978 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:15:18.805986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:15:18.805995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:15:18.806006 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:15:18.806016 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:15:18.806033 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:15:18.806042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:18.806050 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:18.806058 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:15:18.806067 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:15:18.806075 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:15:18.806086 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:15:18.806097 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:15:18.806106 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:15:18.806114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:15:18.806122 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:15:18.806131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:18.806139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:18.806148 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:18.806158 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:15:18.806166 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:15:18.806175 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:15:18.806183 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:15:18.806192 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:15:18.806204 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:15:18.806213 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:15:18.806221 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:15:18.806230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:18.806238 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:15:18.806247 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:18.806258 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:15:18.806266 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:15:18.806293 systemd-journald[220]: Collecting audit messages is disabled.
Jul 7 06:15:18.806315 systemd-journald[220]: Journal started
Jul 7 06:15:18.806334 systemd-journald[220]: Runtime Journal (/run/log/journal/df8afc75a2dc4801b7a853814fff5f94) is 6M, max 48.6M, 42.5M free.
Jul 7 06:15:18.802016 systemd-modules-load[222]: Inserted module 'overlay'
Jul 7 06:15:18.809104 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:15:18.809747 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:15:18.843030 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:15:18.843059 kernel: Bridge firewalling registered
Jul 7 06:15:18.832145 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 7 06:15:18.847893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:18.850205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:18.851515 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:15:18.856267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:15:18.860132 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:15:18.860425 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:15:18.870398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:15:18.870737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:18.881497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:18.883641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:18.884872 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:18.895922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:15:18.896914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:15:18.921716 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:15:18.932357 systemd-resolved[254]: Positive Trust Anchors:
Jul 7 06:15:18.932387 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:15:18.932426 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:15:18.936067 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jul 7 06:15:18.938280 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:18.941792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:19.033795 kernel: SCSI subsystem initialized
Jul 7 06:15:19.043784 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:15:19.054787 kernel: iscsi: registered transport (tcp)
Jul 7 06:15:19.076796 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:15:19.076829 kernel: QLogic iSCSI HBA Driver
Jul 7 06:15:19.097870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:15:19.117058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:19.117453 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:15:19.177624 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:15:19.181059 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:15:19.246784 kernel: raid6: avx2x4 gen() 30416 MB/s
Jul 7 06:15:19.263787 kernel: raid6: avx2x2 gen() 26979 MB/s
Jul 7 06:15:19.280822 kernel: raid6: avx2x1 gen() 25483 MB/s
Jul 7 06:15:19.280854 kernel: raid6: using algorithm avx2x4 gen() 30416 MB/s
Jul 7 06:15:19.298943 kernel: raid6: .... xor() 8075 MB/s, rmw enabled
Jul 7 06:15:19.299001 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:15:19.319799 kernel: xor: automatically using best checksumming function avx
Jul 7 06:15:19.496798 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:15:19.505901 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:15:19.507584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:19.535949 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 7 06:15:19.541522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:19.543707 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:15:19.576048 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Jul 7 06:15:19.606050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:15:19.609677 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:15:19.683417 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:19.690088 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:15:19.722779 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:15:19.742780 kernel: libata version 3.00 loaded.
Jul 7 06:15:19.746785 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:15:19.746814 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 06:15:19.750348 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:15:19.750372 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:15:19.759780 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 06:15:19.760178 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 06:15:19.764551 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 06:15:19.764735 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 06:15:19.764933 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 06:15:19.765079 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:15:19.769428 kernel: GPT:9289727 != 19775487
Jul 7 06:15:19.769497 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:15:19.769512 kernel: GPT:9289727 != 19775487
Jul 7 06:15:19.769525 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:15:19.769539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:15:19.775972 kernel: scsi host0: ahci
Jul 7 06:15:19.776213 kernel: scsi host1: ahci
Jul 7 06:15:19.778797 kernel: scsi host2: ahci
Jul 7 06:15:19.779780 kernel: scsi host3: ahci
Jul 7 06:15:19.779988 kernel: scsi host4: ahci
Jul 7 06:15:19.783732 kernel: scsi host5: ahci
Jul 7 06:15:19.783908 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 7 06:15:19.783920 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 7 06:15:19.787961 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 7 06:15:19.787992 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 7 06:15:19.790559 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 7 06:15:19.790585 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 7 06:15:19.811857 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:15:19.825302 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:15:19.843881 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:15:19.846382 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:15:19.858613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:15:19.861849 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:15:19.864108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:19.864162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:19.867486 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:19.875306 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:19.877777 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:19.883605 disk-uuid[626]: Primary Header is updated.
Jul 7 06:15:19.883605 disk-uuid[626]: Secondary Entries is updated.
Jul 7 06:15:19.883605 disk-uuid[626]: Secondary Header is updated.
Jul 7 06:15:19.886847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:15:20.035107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:20.097028 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 06:15:20.097089 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 06:15:20.097101 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 06:15:20.098788 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 06:15:20.098869 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 06:15:20.099805 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 06:15:20.100789 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 06:15:20.100821 kernel: ata3.00: applying bridge limits
Jul 7 06:15:20.101785 kernel: ata3.00: configured for UDMA/100
Jul 7 06:15:20.103785 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 06:15:20.143783 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 06:15:20.144029 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 06:15:20.157978 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 06:15:20.622354 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:15:20.630385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:15:20.631945 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:20.633094 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:15:20.636101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:15:20.660858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:15:20.896800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:15:20.896873 disk-uuid[627]: The operation has completed successfully.
Jul 7 06:15:20.932008 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:15:20.932141 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:15:20.964114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:15:20.996881 sh[661]: Success
Jul 7 06:15:21.016696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:15:21.016819 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:15:21.016844 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:15:21.025790 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 06:15:21.059926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:15:21.061909 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:15:21.082500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:15:21.089477 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:15:21.089521 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (673)
Jul 7 06:15:21.091836 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:15:21.091887 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:21.091898 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:15:21.097193 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:15:21.097789 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:15:21.099948 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:15:21.101044 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:15:21.103652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:15:21.134781 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706)
Jul 7 06:15:21.136800 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:21.136823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:21.136834 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:15:21.144800 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:21.144955 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:15:21.146394 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:15:21.277201 ignition[745]: Ignition 2.21.0
Jul 7 06:15:21.277214 ignition[745]: Stage: fetch-offline
Jul 7 06:15:21.277260 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:21.277271 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:15:21.277375 ignition[745]: parsed url from cmdline: ""
Jul 7 06:15:21.277378 ignition[745]: no config URL provided
Jul 7 06:15:21.277383 ignition[745]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:15:21.277392 ignition[745]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:15:21.277414 ignition[745]: op(1): [started] loading QEMU firmware config module
Jul 7 06:15:21.277419 ignition[745]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:15:21.289145 ignition[745]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:15:21.291493 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:15:21.296942 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:15:21.330531 ignition[745]: parsing config with SHA512: c6ecff47fdf90a2a188e8223527f2eededc61842e49aa207dd2d8e10e095646861472b31dcb52efc73c0effba8f024863f63149c02169f41dfd284e95a64a460
Jul 7 06:15:21.335598 unknown[745]: fetched base config from "system"
Jul 7 06:15:21.335612 unknown[745]: fetched user config from "qemu"
Jul 7 06:15:21.336038 ignition[745]: fetch-offline: fetch-offline passed
Jul 7 06:15:21.336098 ignition[745]: Ignition finished successfully
Jul 7 06:15:21.341354 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:15:21.341971 systemd-networkd[851]: lo: Link UP
Jul 7 06:15:21.341975 systemd-networkd[851]: lo: Gained carrier
Jul 7 06:15:21.343491 systemd-networkd[851]: Enumeration completed
Jul 7 06:15:21.343570 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:15:21.343907 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:21.343912 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:15:21.344744 systemd-networkd[851]: eth0: Link UP
Jul 7 06:15:21.344748 systemd-networkd[851]: eth0: Gained carrier
Jul 7 06:15:21.344790 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:21.345030 systemd[1]: Reached target network.target - Network.
Jul 7 06:15:21.345888 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:15:21.346675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:15:21.358830 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:15:21.389102 ignition[855]: Ignition 2.21.0
Jul 7 06:15:21.389114 ignition[855]: Stage: kargs
Jul 7 06:15:21.389290 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:21.389300 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:15:21.392865 ignition[855]: kargs: kargs passed
Jul 7 06:15:21.392964 ignition[855]: Ignition finished successfully
Jul 7 06:15:21.398374 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:15:21.401875 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:15:21.442670 ignition[865]: Ignition 2.21.0
Jul 7 06:15:21.442684 ignition[865]: Stage: disks
Jul 7 06:15:21.442849 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:21.442860 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:15:21.443613 ignition[865]: disks: disks passed
Jul 7 06:15:21.443665 ignition[865]: Ignition finished successfully
Jul 7 06:15:21.448693 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:15:21.450796 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:15:21.450881 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:15:21.454268 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:15:21.456575 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:15:21.457023 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:15:21.458367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:15:21.497468 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 06:15:21.505202 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:15:21.506242 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:15:21.623783 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:15:21.624384 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:15:21.625785 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:15:21.628449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:15:21.630385 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:15:21.631445 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:15:21.631484 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:15:21.631508 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:15:21.645059 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:15:21.646616 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:15:21.653780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Jul 7 06:15:21.653833 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:21.655328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:21.655373 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:15:21.660254 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:15:21.682678 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:15:21.686877 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:15:21.691778 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:15:21.696526 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:15:21.782100 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:15:21.783838 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:15:21.786996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:15:21.802814 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:21.816018 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:15:21.832127 ignition[998]: INFO : Ignition 2.21.0
Jul 7 06:15:21.832127 ignition[998]: INFO : Stage: mount
Jul 7 06:15:21.833924 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:21.833924 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:15:21.836039 ignition[998]: INFO : mount: mount passed
Jul 7 06:15:21.836039 ignition[998]: INFO : Ignition finished successfully
Jul 7 06:15:21.840069 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:15:21.842414 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:15:22.088509 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:15:22.090318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:15:22.119109 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010)
Jul 7 06:15:22.119136 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:22.119147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:22.119927 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:15:22.123588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:15:22.159224 ignition[1027]: INFO : Ignition 2.21.0
Jul 7 06:15:22.159224 ignition[1027]: INFO : Stage: files
Jul 7 06:15:22.162094 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:22.162094 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:15:22.162094 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:15:22.162094 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:15:22.162094 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:15:22.168545 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:15:22.168545 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:15:22.168545 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:15:22.168545 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:15:22.168545 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 06:15:22.164330 unknown[1027]: wrote ssh authorized keys file for user: core
Jul 7 06:15:22.200777 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:15:22.375588 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:15:22.375588 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:15:22.379500 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:15:22.391175 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:15:22.391175 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:15:22.391175 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:22.397489 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:22.397489 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:22.402046 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 06:15:23.092879 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:15:23.377921 systemd-networkd[851]: eth0: Gained IPv6LL
Jul 7 06:15:23.399260 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:23.399260 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:15:23.403161 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:15:23.407493 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:15:23.407493 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:15:23.407493 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:15:23.411816 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:15:23.411816 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:15:23.411816 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:15:23.411816 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:15:23.432238 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:15:23.437655 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:15:23.439320 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:15:23.439320 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:15:23.439320 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:15:23.439320 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:15:23.439320 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:15:23.439320 ignition[1027]: INFO : files: files passed
Jul 7 06:15:23.439320 ignition[1027]: INFO : Ignition finished successfully
Jul 7 06:15:23.444518 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:15:23.446364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:15:23.449702 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:15:23.464332 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:15:23.464483 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:15:23.467950 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:15:23.471329 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:15:23.471329 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:15:23.474555 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:15:23.475504 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:15:23.477459 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:15:23.480264 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:15:23.542738 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:15:23.542907 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:15:23.544379 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:15:23.547943 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:15:23.548091 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:15:23.548971 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:15:23.565786 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:15:23.568206 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:15:23.585647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:23.585854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:23.589073 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:15:23.590173 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:15:23.590304 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:15:23.593639 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:15:23.594847 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:15:23.595314 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:15:23.595625 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:15:23.596118 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:15:23.596429 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:15:23.596776 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:15:23.597263 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:15:23.597596 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:15:23.598085 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:15:23.598392 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:15:23.598678 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:15:23.598824 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:15:23.616171 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:23.617159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:23.617434 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:15:23.621998 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:23.624403 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:15:23.624508 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:15:23.627318 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:15:23.627437 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:15:23.629519 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:15:23.630525 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:15:23.635810 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:23.637115 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:15:23.639417 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:15:23.641929 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:15:23.642024 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:15:23.642933 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:15:23.643012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:15:23.644653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:15:23.644778 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:15:23.646351 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:15:23.646451 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:15:23.650399 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:15:23.655170 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:15:23.656106 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:15:23.656223 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:23.660126 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:15:23.661187 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:15:23.667325 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:15:23.667437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:15:23.676858 ignition[1082]: INFO : Ignition 2.21.0
Jul 7 06:15:23.676858 ignition[1082]: INFO : Stage: umount
Jul 7 06:15:23.678573 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:23.678573 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:15:23.678573 ignition[1082]: INFO : umount: umount passed
Jul 7 06:15:23.678573 ignition[1082]: INFO : Ignition finished successfully
Jul 7 06:15:23.680463 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:15:23.680602 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:15:23.682784 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:15:23.683256 systemd[1]: Stopped target network.target - Network.
Jul 7 06:15:23.684235 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:15:23.684285 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:15:23.685355 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:15:23.685400 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:15:23.685669 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:15:23.685714 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:15:23.686165 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:15:23.686205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:15:23.686567 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:15:23.687016 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:23.696425 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:15:23.696573 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:23.700099 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:15:23.700366 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:15:23.700415 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:23.704671 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:23.709174 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:15:23.709303 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:15:23.713463 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:15:23.713623 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:15:23.714586 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:15:23.714626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:23.715848 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:15:23.719377 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:15:23.719443 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:15:23.719772 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:15:23.719828 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:23.725857 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:15:23.725919 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:23.726964 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:23.728082 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:15:23.741724 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:15:23.741891 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:15:23.753569 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:15:23.753771 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:23.754810 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:15:23.754854 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:23.756881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:15:23.756928 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:23.758777 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:15:23.758824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:15:23.763429 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:15:23.763478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:15:23.766186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:15:23.766236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:15:23.770626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:15:23.771160 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:15:23.771210 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:23.775290 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:15:23.775341 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:23.779873 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 06:15:23.779934 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:15:23.783515 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:15:23.783565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:23.784546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:23.784589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:23.799499 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:15:23.799612 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:15:23.837402 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:15:23.837531 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:15:23.839802 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:15:23.840230 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:15:23.840302 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:15:23.842925 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:15:23.869585 systemd[1]: Switching root.
Jul 7 06:15:23.897926 systemd-journald[220]: Journal stopped
Jul 7 06:15:25.039688 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:15:25.039743 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:15:25.039772 kernel: SELinux: policy capability open_perms=1
Jul 7 06:15:25.039792 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:15:25.039808 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:15:25.039819 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:15:25.039830 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:15:25.039841 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:15:25.039862 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:15:25.039874 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:15:25.039886 kernel: audit: type=1403 audit(1751868924.265:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:15:25.039902 systemd[1]: Successfully loaded SELinux policy in 50.260ms.
Jul 7 06:15:25.039927 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.202ms.
Jul 7 06:15:25.039939 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:15:25.039952 systemd[1]: Detected virtualization kvm.
Jul 7 06:15:25.039964 systemd[1]: Detected architecture x86-64.
Jul 7 06:15:25.039976 systemd[1]: Detected first boot.
Jul 7 06:15:25.039991 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:15:25.040006 zram_generator::config[1127]: No configuration found.
Jul 7 06:15:25.040029 kernel: Guest personality initialized and is inactive
Jul 7 06:15:25.040043 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:15:25.040054 kernel: Initialized host personality
Jul 7 06:15:25.040065 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:15:25.040078 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:15:25.040090 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:15:25.040102 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:15:25.040114 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:15:25.040125 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:15:25.040138 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:15:25.040152 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:15:25.040164 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:15:25.040175 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:15:25.040187 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:15:25.040200 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:15:25.040212 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:15:25.040224 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:15:25.040236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:25.040248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:25.040262 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:15:25.040273 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:15:25.040286 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:15:25.040299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:15:25.040315 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:15:25.040327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:25.040341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:25.040355 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:15:25.040366 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:15:25.040378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:15:25.040392 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:15:25.040404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:25.040416 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:15:25.040428 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:15:25.040439 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:15:25.040451 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:15:25.040462 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:15:25.040476 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:15:25.040488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:25.040500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:25.040511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:25.040523 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:15:25.040535 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:15:25.040546 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:15:25.040558 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:15:25.040569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.040583 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:15:25.040596 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:15:25.040607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:15:25.040620 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:15:25.040631 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:15:25.040645 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:15:25.040659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:25.040674 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:15:25.040688 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:15:25.040700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:25.040712 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:15:25.040725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:25.040736 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:15:25.040748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:25.040774 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:15:25.040786 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:15:25.040800 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:15:25.040812 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:15:25.040824 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:15:25.040837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:25.040857 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:15:25.040868 kernel: loop: module loaded
Jul 7 06:15:25.040882 kernel: fuse: init (API version 7.41)
Jul 7 06:15:25.040894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:15:25.040906 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:15:25.040920 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:15:25.040932 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:15:25.040944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:15:25.040957 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:15:25.040968 systemd[1]: Stopped verity-setup.service.
Jul 7 06:15:25.040983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.040994 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:15:25.041006 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:15:25.041018 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:15:25.041030 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:15:25.041046 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:15:25.041078 systemd-journald[1198]: Collecting audit messages is disabled.
Jul 7 06:15:25.041100 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:15:25.041113 systemd-journald[1198]: Journal started
Jul 7 06:15:25.041134 systemd-journald[1198]: Runtime Journal (/run/log/journal/df8afc75a2dc4801b7a853814fff5f94) is 6M, max 48.6M, 42.5M free.
Jul 7 06:15:25.046967 kernel: ACPI: bus type drm_connector registered
Jul 7 06:15:24.795294 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:15:24.821617 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:15:24.822085 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:15:25.047768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:25.051953 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:15:25.052862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:15:25.054411 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:15:25.054701 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:15:25.057132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:25.057368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:25.058900 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:15:25.059221 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:15:25.060646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:25.060902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:25.062494 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:15:25.062723 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:15:25.064168 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:25.064379 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:25.065905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:25.067310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:25.069047 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:15:25.070568 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:15:25.085589 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:15:25.088257 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:15:25.091875 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:15:25.093033 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:15:25.093064 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:15:25.095054 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:15:25.104483 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:15:25.105858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:25.107266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:15:25.110872 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:15:25.112488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:15:25.114885 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:15:25.116037 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:15:25.117141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:15:25.119917 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:15:25.125065 systemd-journald[1198]: Time spent on flushing to /var/log/journal/df8afc75a2dc4801b7a853814fff5f94 is 12.364ms for 974 entries.
Jul 7 06:15:25.125065 systemd-journald[1198]: System Journal (/var/log/journal/df8afc75a2dc4801b7a853814fff5f94) is 8M, max 195.6M, 187.6M free.
Jul 7 06:15:25.146870 systemd-journald[1198]: Received client request to flush runtime journal.
Jul 7 06:15:25.146904 kernel: loop0: detected capacity change from 0 to 113872
Jul 7 06:15:25.122304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:15:25.127982 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:15:25.129374 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:15:25.131026 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:15:25.135435 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:15:25.138902 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:15:25.150183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:25.152401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:15:25.167638 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 7 06:15:25.167656 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 7 06:15:25.169583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:25.173266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:15:25.178351 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:15:25.180777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:15:25.189989 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:15:25.196771 kernel: loop1: detected capacity change from 0 to 146240
Jul 7 06:15:25.222977 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:15:25.225829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:15:25.230780 kernel: loop2: detected capacity change from 0 to 221472
Jul 7 06:15:25.259511 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 7 06:15:25.259832 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 7 06:15:25.263770 kernel: loop3: detected capacity change from 0 to 113872
Jul 7 06:15:25.265449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:25.274781 kernel: loop4: detected capacity change from 0 to 146240
Jul 7 06:15:25.287783 kernel: loop5: detected capacity change from 0 to 221472
Jul 7 06:15:25.295915 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:15:25.296904 (sd-merge)[1273]: Merged extensions into '/usr'.
Jul 7 06:15:25.301024 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:15:25.301038 systemd[1]: Reloading...
Jul 7 06:15:25.369779 zram_generator::config[1303]: No configuration found.
Jul 7 06:15:25.426820 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:15:25.464823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:25.545205 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:15:25.545378 systemd[1]: Reloading finished in 243 ms.
Jul 7 06:15:25.570851 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:15:25.572563 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:15:25.594040 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:15:25.595891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:15:25.605343 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:15:25.605359 systemd[1]: Reloading...
Jul 7 06:15:25.617581 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:15:25.617621 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:15:25.617929 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:15:25.618175 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:15:25.619039 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:15:25.619298 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Jul 7 06:15:25.619371 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Jul 7 06:15:25.644577 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:15:25.644589 systemd-tmpfiles[1338]: Skipping /boot
Jul 7 06:15:25.658783 zram_generator::config[1368]: No configuration found.
Jul 7 06:15:25.662444 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:15:25.662922 systemd-tmpfiles[1338]: Skipping /boot
Jul 7 06:15:25.760331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:25.841119 systemd[1]: Reloading finished in 235 ms.
Jul 7 06:15:25.866388 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:15:25.879864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:25.888887 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:15:25.891317 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:15:25.893853 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:15:25.902921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:25.905491 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:25.908662 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:15:25.913287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.914511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:25.916698 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:25.920702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:25.923153 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:25.924329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:25.924427 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:25.924524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.932962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.933167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:25.933375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:25.933504 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:25.936959 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:15:25.938025 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.939390 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:15:25.942364 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:15:25.944334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:25.944571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:25.946255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:25.946468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:25.948240 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:25.949142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:25.956865 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
Jul 7 06:15:25.959544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.959783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:25.963827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:25.966037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:15:25.971998 augenrules[1441]: No rules
Jul 7 06:15:25.977027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:25.979923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:25.981113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:25.981218 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:25.983378 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:15:25.984531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:25.986489 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:15:25.990055 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:15:25.993996 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:15:25.995552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:25.997304 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:15:25.999292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:25.999581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:26.001165 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:15:26.001562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:15:26.003739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:26.003989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:26.005584 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:26.005987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:26.007851 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:15:26.021892 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:15:26.034633 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:15:26.035913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:15:26.035977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:15:26.038933 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:15:26.040232 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:15:26.079646 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:15:26.139554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:15:26.145778 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:15:26.142887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:15:26.145503 systemd-resolved[1407]: Positive Trust Anchors:
Jul 7 06:15:26.145511 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:15:26.145543 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:15:26.149470 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Jul 7 06:15:26.151295 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:26.152500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:26.158793 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:15:26.165827 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:15:26.172912 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:15:26.184119 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 06:15:26.184370 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:15:26.199328 systemd-networkd[1488]: lo: Link UP
Jul 7 06:15:26.199337 systemd-networkd[1488]: lo: Gained carrier
Jul 7 06:15:26.200941 systemd-networkd[1488]: Enumeration completed
Jul 7 06:15:26.201035 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:15:26.201300 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:26.201311 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:15:26.201980 systemd-networkd[1488]: eth0: Link UP
Jul 7 06:15:26.202152 systemd-networkd[1488]: eth0: Gained carrier
Jul 7 06:15:26.202171 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:26.202414 systemd[1]: Reached target network.target - Network.
Jul 7 06:15:26.204879 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:15:26.207928 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:15:26.217822 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:15:26.234134 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:15:26.235870 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:15:26.237015 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:15:26.689427 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 06:15:26.689472 systemd-timesyncd[1489]: Initial clock synchronization to Mon 2025-07-07 06:15:26.689343 UTC.
Jul 7 06:15:26.689905 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:15:26.691135 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:15:26.691369 systemd-resolved[1407]: Clock change detected. Flushing caches.
Jul 7 06:15:26.692260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:15:26.693499 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:15:26.693523 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:15:26.694432 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:15:26.696547 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:15:26.697731 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:15:26.699028 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:15:26.700911 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:15:26.703850 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:15:26.709926 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:15:26.711563 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:15:26.714390 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:15:26.753223 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:15:26.754842 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:15:26.757261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:15:26.758634 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:15:26.764725 kernel: kvm_amd: TSC scaling supported
Jul 7 06:15:26.764754 kernel: kvm_amd: Nested Virtualization enabled
Jul 7 06:15:26.764767 kernel: kvm_amd: Nested Paging enabled
Jul 7 06:15:26.765644 kernel: kvm_amd: LBR virtualization supported
Jul 7 06:15:26.765662 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 7 06:15:26.766637 kernel: kvm_amd: Virtual GIF supported
Jul 7 06:15:26.776888 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:15:26.779057 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:15:26.780072 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:15:26.780169 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:15:26.783536 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:15:26.785594 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:15:26.787582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:15:26.789675 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:15:26.793574 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:15:26.795403 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:15:26.798410 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:15:26.802956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:15:26.805554 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:15:26.812516 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:15:26.813298 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Jul 7 06:15:26.814365 oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Jul 7 06:15:26.815388 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:15:26.820535 extend-filesystems[1526]: Found /dev/vda6
Jul 7 06:15:26.823586 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:15:26.824179 jq[1525]: false
Jul 7 06:15:26.824647 extend-filesystems[1526]: Found /dev/vda9
Jul 7 06:15:26.825432 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:15:26.825945 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:15:26.826404 extend-filesystems[1526]: Checking size of /dev/vda9
Jul 7 06:15:26.828409 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:15:26.830280 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting
Jul 7 06:15:26.830273 oslogin_cache_refresh[1527]: Failure getting users, quitting
Jul 7 06:15:26.830365 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:15:26.830297 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:15:26.831975 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:15:26.836725 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache
Jul 7 06:15:26.836526 oslogin_cache_refresh[1527]: Refreshing group entry cache
Jul 7 06:15:26.843247 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:15:26.845790 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:15:26.846029 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:15:26.848776 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:15:26.849419 extend-filesystems[1526]: Resized partition /dev/vda9
Jul 7 06:15:26.849042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:15:26.851795 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting
Jul 7 06:15:26.851823 extend-filesystems[1554]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:15:26.849622 oslogin_cache_refresh[1527]: Failure getting groups, quitting
Jul 7 06:15:26.852866 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:15:26.852616 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:15:26.857659 update_engine[1541]: I20250707 06:15:26.857594 1541 main.cc:92] Flatcar Update Engine starting
Jul 7 06:15:26.864482 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 7 06:15:26.863616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:26.866934 jq[1543]: true
Jul 7 06:15:26.866967 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:15:26.867262 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:15:26.869185 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:15:26.870612 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:15:26.885339 kernel: EDAC MC: Ver: 3.0.0
Jul 7 06:15:26.887404 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 7 06:15:26.890012 tar[1553]: linux-amd64/helm
Jul 7 06:15:26.891703 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:15:26.899360 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:15:26.908865 update_engine[1541]: I20250707 06:15:26.903648 1541 update_check_scheduler.cc:74] Next update check in 7m30s
Jul 7 06:15:26.899133 dbus-daemon[1523]: [system] SELinux support is enabled
Jul 7 06:15:26.909128 jq[1564]: true
Jul 7 06:15:26.909246 extend-filesystems[1554]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 06:15:26.909246 extend-filesystems[1554]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 06:15:26.909246 extend-filesystems[1554]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 7 06:15:26.913060 extend-filesystems[1526]: Resized filesystem in /dev/vda9
Jul 7 06:15:26.917441 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:15:26.917719 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:15:26.945684 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:15:26.947082 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:15:26.947104 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:15:26.948413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:15:26.948428 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:15:26.952475 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:15:26.995191 bash[1596]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:15:26.997594 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:15:26.998893 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:15:27.040822 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:15:27.054117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:27.074966 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:15:27.074996 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:15:27.075688 systemd-logind[1538]: New seat seat0.
Jul 7 06:15:27.082405 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:15:27.127553 containerd[1566]: time="2025-07-07T06:15:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:15:27.129469 containerd[1566]: time="2025-07-07T06:15:27.129415012Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:15:27.139743 containerd[1566]: time="2025-07-07T06:15:27.139695789Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.209µs"
Jul 7 06:15:27.139743 containerd[1566]: time="2025-07-07T06:15:27.139732889Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:15:27.139799 containerd[1566]: time="2025-07-07T06:15:27.139754600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:15:27.139965 containerd[1566]: time="2025-07-07T06:15:27.139937002Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:15:27.139965 containerd[1566]: time="2025-07-07T06:15:27.139959785Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:15:27.140022 containerd[1566]: time="2025-07-07T06:15:27.139988919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140103 containerd[1566]: time="2025-07-07T06:15:27.140071063Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140103 containerd[1566]: time="2025-07-07T06:15:27.140092413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140467 containerd[1566]: time="2025-07-07T06:15:27.140432031Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140467 containerd[1566]: time="2025-07-07T06:15:27.140458150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140510 containerd[1566]: time="2025-07-07T06:15:27.140472136Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140510 containerd[1566]: time="2025-07-07T06:15:27.140483727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140626 containerd[1566]: time="2025-07-07T06:15:27.140603392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140866 containerd[1566]: time="2025-07-07T06:15:27.140838152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140893 containerd[1566]: time="2025-07-07T06:15:27.140872717Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:15:27.140893 containerd[1566]: time="2025-07-07T06:15:27.140882796Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:15:27.140931 containerd[1566]: time="2025-07-07T06:15:27.140919385Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:15:27.141162 containerd[1566]: time="2025-07-07T06:15:27.141142183Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:15:27.141223 containerd[1566]: time="2025-07-07T06:15:27.141206644Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:15:27.147082 containerd[1566]: time="2025-07-07T06:15:27.147037182Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:15:27.147119 containerd[1566]: time="2025-07-07T06:15:27.147086495Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:15:27.147119 containerd[1566]: time="2025-07-07T06:15:27.147099770Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:15:27.147119 containerd[1566]: time="2025-07-07T06:15:27.147110510Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:15:27.147172 containerd[1566]: time="2025-07-07T06:15:27.147120859Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:15:27.147172 containerd[1566]: time="2025-07-07T06:15:27.147130397Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:15:27.147172 containerd[1566]: time="2025-07-07T06:15:27.147142460Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:15:27.147172 containerd[1566]: time="2025-07-07T06:15:27.147153290Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:15:27.147252 containerd[1566]: time="2025-07-07T06:15:27.147176664Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:15:27.147252 containerd[1566]: time="2025-07-07T06:15:27.147187454Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:15:27.147252 containerd[1566]: time="2025-07-07T06:15:27.147196792Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:15:27.147252 containerd[1566]: time="2025-07-07T06:15:27.147208724Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:15:27.147387 containerd[1566]: time="2025-07-07T06:15:27.147346663Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:15:27.147478 containerd[1566]: time="2025-07-07T06:15:27.147443224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:15:27.147478 containerd[1566]: time="2025-07-07T06:15:27.147470786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:15:27.147523 containerd[1566]: time="2025-07-07T06:15:27.147485794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:15:27.147523 containerd[1566]: time="2025-07-07T06:15:27.147498758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:15:27.147523 containerd[1566]: time="2025-07-07T06:15:27.147509468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:15:27.147523 containerd[1566]: time="2025-07-07T06:15:27.147520138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:15:27.147617 containerd[1566]: time="2025-07-07T06:15:27.147531009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:15:27.147617 containerd[1566]: time="2025-07-07T06:15:27.147548151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:15:27.147617 containerd[1566]: time="2025-07-07T06:15:27.147557959Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:15:27.147617 containerd[1566]: time="2025-07-07T06:15:27.147568900Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:15:27.147691 containerd[1566]: time="2025-07-07T06:15:27.147625787Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:15:27.147691 containerd[1566]: time="2025-07-07T06:15:27.147644051Z" level=info msg="Start snapshots syncer"
Jul 7 06:15:27.147691 containerd[1566]: time="2025-07-07T06:15:27.147675630Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:15:27.147991 containerd[1566]: time="2025-07-07T06:15:27.147938593Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:15:27.147991 containerd[1566]: time="2025-07-07T06:15:27.147990411Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:15:27.148983 containerd[1566]: time="2025-07-07T06:15:27.148943829Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:15:27.149135 containerd[1566]: time="2025-07-07T06:15:27.149097488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:15:27.149135 containerd[1566]: time="2025-07-07T06:15:27.149128446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:15:27.149182 containerd[1566]: time="2025-07-07T06:15:27.149142382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:15:27.149182 containerd[1566]: time="2025-07-07T06:15:27.149156608Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:15:27.149182 containerd[1566]: time="2025-07-07T06:15:27.149169312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:15:27.149244 containerd[1566]: time="2025-07-07T06:15:27.149181685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:15:27.149244 containerd[1566]: time="2025-07-07T06:15:27.149197114Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:15:27.149244 containerd[1566]: time="2025-07-07T06:15:27.149223514Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:15:27.149297 containerd[1566]: time="2025-07-07T06:15:27.149245004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:15:27.149297 containerd[1566]: time="2025-07-07T06:15:27.149258930Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:15:27.149425 containerd[1566]: time="2025-07-07T06:15:27.149389545Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:15:27.149458 containerd[1566]: time="2025-07-07T06:15:27.149420794Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:15:27.149458 containerd[1566]: time="2025-07-07T06:15:27.149432917Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:15:27.149458 containerd[1566]: time="2025-07-07T06:15:27.149444749Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:15:27.149458 containerd[1566]: time="2025-07-07T06:15:27.149454698Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:15:27.149529 containerd[1566]: time="2025-07-07T06:15:27.149466790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:15:27.149529 containerd[1566]: time="2025-07-07T06:15:27.149478633Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:15:27.149529 containerd[1566]: time="2025-07-07T06:15:27.149497889Z" level=info msg="runtime interface created"
Jul 7 06:15:27.149529 containerd[1566]: time="2025-07-07T06:15:27.149504631Z" level=info msg="created NRI interface"
Jul 7 06:15:27.149529 containerd[1566]: time="2025-07-07T06:15:27.149514340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:15:27.149529 containerd[1566]: time="2025-07-07T06:15:27.149525901Z" level=info msg="Connect containerd service"
Jul 7 06:15:27.149634 containerd[1566]: time="2025-07-07T06:15:27.149549596Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 06:15:27.152283 containerd[1566]:
time="2025-07-07T06:15:27.152242107Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:15:27.194707 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:15:27.218379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:15:27.221370 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:15:27.238080 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:15:27.238355 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:15:27.241180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:15:27.243390 containerd[1566]: time="2025-07-07T06:15:27.243347006Z" level=info msg="Start subscribing containerd event" Jul 7 06:15:27.243496 containerd[1566]: time="2025-07-07T06:15:27.243401499Z" level=info msg="Start recovering state" Jul 7 06:15:27.243496 containerd[1566]: time="2025-07-07T06:15:27.243480737Z" level=info msg="Start event monitor" Jul 7 06:15:27.243496 containerd[1566]: time="2025-07-07T06:15:27.243495244Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:15:27.243553 containerd[1566]: time="2025-07-07T06:15:27.243501536Z" level=info msg="Start streaming server" Jul 7 06:15:27.243553 containerd[1566]: time="2025-07-07T06:15:27.243508900Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:15:27.243553 containerd[1566]: time="2025-07-07T06:15:27.243515793Z" level=info msg="runtime interface starting up..." Jul 7 06:15:27.243553 containerd[1566]: time="2025-07-07T06:15:27.243521404Z" level=info msg="starting plugins..." 
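The `failed to load cni during init` error above is expected at this stage: `/etc/cni/net.d` is empty until a CNI provider (typically installed later as a cluster addon) drops a network config there. A hypothetical minimal conflist that would satisfy the loader (names and subnet are illustrative, not taken from this host) looks like:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Once such a file appears, the "cni network conf syncer" started later in this log picks it up without a containerd restart.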
Jul 7 06:15:27.243553 containerd[1566]: time="2025-07-07T06:15:27.243533116Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:15:27.243747 containerd[1566]: time="2025-07-07T06:15:27.243727540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:15:27.243795 containerd[1566]: time="2025-07-07T06:15:27.243780159Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:15:27.243931 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:15:27.244105 containerd[1566]: time="2025-07-07T06:15:27.244086133Z" level=info msg="containerd successfully booted in 0.117161s" Jul 7 06:15:27.262162 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:15:27.265138 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:15:27.267216 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:15:27.268505 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:15:27.367116 tar[1553]: linux-amd64/LICENSE Jul 7 06:15:27.367203 tar[1553]: linux-amd64/README.md Jul 7 06:15:27.387409 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:15:28.053502 systemd-networkd[1488]: eth0: Gained IPv6LL Jul 7 06:15:28.056793 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:15:28.058564 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:15:28.061157 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:15:28.063483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:15:28.073624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:15:28.095687 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:15:28.095997 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 7 06:15:28.097853 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:15:28.099963 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:15:28.759577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:15:28.761087 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:15:28.762354 systemd[1]: Startup finished in 3.053s (kernel) + 5.651s (initrd) + 4.094s (userspace) = 12.799s. Jul 7 06:15:28.793624 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:15:29.193782 kubelet[1668]: E0707 06:15:29.193643 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:15:29.197752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:15:29.197997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:15:29.198441 systemd[1]: kubelet.service: Consumed 952ms CPU time, 265.5M memory peak. Jul 7 06:15:32.814424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:15:32.815616 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:60624.service - OpenSSH per-connection server daemon (10.0.0.1:60624). Jul 7 06:15:32.886703 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 60624 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:32.888164 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:32.894482 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
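The kubelet crash above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is also expected on a node that has not yet joined a cluster: that file is normally generated by `kubeadm init` or `kubeadm join`, so kubelet.service restarts on failure until then (the restart counter climbs later in this log). For illustration only, a minimal hand-written `KubeletConfiguration` of the kind kubeadm produces would look like:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml; on this host kubeadm
# would write the real file at join time, which is why it is absent here.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd matches the SystemdCgroup=true setting in the containerd CRI config
# dumped earlier in this log.
cgroupDriver: systemd
```

With the file in place, the `kubelet.service: Scheduled restart job` cycle seen below would stop recurring.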
Jul 7 06:15:32.895582 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:15:32.901912 systemd-logind[1538]: New session 1 of user core. Jul 7 06:15:32.916847 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:15:32.919691 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:15:32.940698 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:15:32.942872 systemd-logind[1538]: New session c1 of user core. Jul 7 06:15:33.096296 systemd[1685]: Queued start job for default target default.target. Jul 7 06:15:33.114512 systemd[1685]: Created slice app.slice - User Application Slice. Jul 7 06:15:33.114537 systemd[1685]: Reached target paths.target - Paths. Jul 7 06:15:33.114576 systemd[1685]: Reached target timers.target - Timers. Jul 7 06:15:33.116046 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:15:33.127363 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:15:33.127497 systemd[1685]: Reached target sockets.target - Sockets. Jul 7 06:15:33.127543 systemd[1685]: Reached target basic.target - Basic System. Jul 7 06:15:33.127588 systemd[1685]: Reached target default.target - Main User Target. Jul 7 06:15:33.127627 systemd[1685]: Startup finished in 178ms. Jul 7 06:15:33.127827 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:15:33.129329 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:15:33.193528 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:60628.service - OpenSSH per-connection server daemon (10.0.0.1:60628). 
Jul 7 06:15:33.253778 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 60628 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:33.255247 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:33.260597 systemd-logind[1538]: New session 2 of user core. Jul 7 06:15:33.274516 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:15:33.328778 sshd[1699]: Connection closed by 10.0.0.1 port 60628 Jul 7 06:15:33.329180 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:33.338428 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:60628.service: Deactivated successfully. Jul 7 06:15:33.340162 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:15:33.340982 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:15:33.343892 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:60632.service - OpenSSH per-connection server daemon (10.0.0.1:60632). Jul 7 06:15:33.344518 systemd-logind[1538]: Removed session 2. Jul 7 06:15:33.390258 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 60632 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:33.392214 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:33.397591 systemd-logind[1538]: New session 3 of user core. Jul 7 06:15:33.407529 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:15:33.457677 sshd[1707]: Connection closed by 10.0.0.1 port 60632 Jul 7 06:15:33.458059 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:33.472535 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:60632.service: Deactivated successfully. Jul 7 06:15:33.474413 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:15:33.475292 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit. 
Jul 7 06:15:33.478548 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:60642.service - OpenSSH per-connection server daemon (10.0.0.1:60642). Jul 7 06:15:33.479277 systemd-logind[1538]: Removed session 3. Jul 7 06:15:33.532719 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 60642 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:33.534430 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:33.539193 systemd-logind[1538]: New session 4 of user core. Jul 7 06:15:33.548472 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:15:33.601886 sshd[1715]: Connection closed by 10.0.0.1 port 60642 Jul 7 06:15:33.602238 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:33.616247 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:60642.service: Deactivated successfully. Jul 7 06:15:33.618106 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:15:33.618901 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:15:33.621816 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658). Jul 7 06:15:33.622562 systemd-logind[1538]: Removed session 4. Jul 7 06:15:33.675229 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:33.676991 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:33.681789 systemd-logind[1538]: New session 5 of user core. Jul 7 06:15:33.698526 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 06:15:33.756452 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:15:33.756768 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:15:33.779817 sudo[1724]: pam_unix(sudo:session): session closed for user root Jul 7 06:15:33.781692 sshd[1723]: Connection closed by 10.0.0.1 port 60658 Jul 7 06:15:33.782060 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:33.807203 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:60658.service: Deactivated successfully. Jul 7 06:15:33.809368 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:15:33.810234 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:15:33.813712 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:60664.service - OpenSSH per-connection server daemon (10.0.0.1:60664). Jul 7 06:15:33.814387 systemd-logind[1538]: Removed session 5. Jul 7 06:15:33.872019 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60664 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:33.873724 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:33.878256 systemd-logind[1538]: New session 6 of user core. Jul 7 06:15:33.887451 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 06:15:33.941106 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:15:33.941429 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:15:34.183905 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 7 06:15:34.190892 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:15:34.191235 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:15:34.201912 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:15:34.256801 augenrules[1756]: No rules Jul 7 06:15:34.258561 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:15:34.258812 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:15:34.260082 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 7 06:15:34.261704 sshd[1732]: Connection closed by 10.0.0.1 port 60664 Jul 7 06:15:34.261984 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:34.274049 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:60664.service: Deactivated successfully. Jul 7 06:15:34.275965 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:15:34.276723 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:15:34.279695 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:60676.service - OpenSSH per-connection server daemon (10.0.0.1:60676). Jul 7 06:15:34.280320 systemd-logind[1538]: Removed session 6. Jul 7 06:15:34.340478 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 60676 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:15:34.341810 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:34.346340 systemd-logind[1538]: New session 7 of user core. 
Jul 7 06:15:34.360529 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:15:34.413002 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:15:34.413332 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:15:34.719360 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:15:34.738811 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:15:34.956619 dockerd[1788]: time="2025-07-07T06:15:34.956547902Z" level=info msg="Starting up" Jul 7 06:15:34.958078 dockerd[1788]: time="2025-07-07T06:15:34.958036555Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:15:35.032195 dockerd[1788]: time="2025-07-07T06:15:35.032047845Z" level=info msg="Loading containers: start." Jul 7 06:15:35.042346 kernel: Initializing XFRM netlink socket Jul 7 06:15:35.330933 systemd-networkd[1488]: docker0: Link UP Jul 7 06:15:35.337490 dockerd[1788]: time="2025-07-07T06:15:35.337430565Z" level=info msg="Loading containers: done." 
Jul 7 06:15:35.358386 dockerd[1788]: time="2025-07-07T06:15:35.358307922Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:15:35.358590 dockerd[1788]: time="2025-07-07T06:15:35.358447524Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:15:35.358650 dockerd[1788]: time="2025-07-07T06:15:35.358638593Z" level=info msg="Initializing buildkit" Jul 7 06:15:35.396872 dockerd[1788]: time="2025-07-07T06:15:35.396768477Z" level=info msg="Completed buildkit initialization" Jul 7 06:15:35.405516 dockerd[1788]: time="2025-07-07T06:15:35.404898259Z" level=info msg="Daemon has completed initialization" Jul 7 06:15:35.405729 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:15:35.406210 dockerd[1788]: time="2025-07-07T06:15:35.406060940Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:15:36.302227 containerd[1566]: time="2025-07-07T06:15:36.302160697Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 06:15:36.870901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643771166.mount: Deactivated successfully. 
Jul 7 06:15:37.688099 containerd[1566]: time="2025-07-07T06:15:37.688017384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:37.688848 containerd[1566]: time="2025-07-07T06:15:37.688784102Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 7 06:15:37.689989 containerd[1566]: time="2025-07-07T06:15:37.689944920Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:37.692380 containerd[1566]: time="2025-07-07T06:15:37.692342418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:37.693230 containerd[1566]: time="2025-07-07T06:15:37.693197612Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.390984366s" Jul 7 06:15:37.693264 containerd[1566]: time="2025-07-07T06:15:37.693239531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 7 06:15:37.693748 containerd[1566]: time="2025-07-07T06:15:37.693722286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 06:15:38.816627 containerd[1566]: time="2025-07-07T06:15:38.816565290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
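The pull record above reports both the image size and the elapsed time. As a back-of-the-envelope check (a hypothetical calculation, not something the log computes), the effective throughput for the kube-apiserver pull works out to roughly 20 MB/s:

```python
# Throughput estimate for the kube-apiserver image pull logged above.
# Both figures are copied verbatim from the log; nothing is measured live.
size_bytes = 28074544    # reported size of registry.k8s.io/kube-apiserver:v1.31.10
elapsed_s = 1.390984366  # reported pull duration

rate = size_bytes / elapsed_s  # bytes per second
print(f"{rate / 1e6:.1f} MB/s")
```

The same arithmetic applied to the later pulls (kube-proxy, etcd) gives figures in the same ballpark, consistent with a single registry link being the bottleneck.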
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:38.817393 containerd[1566]: time="2025-07-07T06:15:38.817345724Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 7 06:15:38.818709 containerd[1566]: time="2025-07-07T06:15:38.818676030Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:38.821288 containerd[1566]: time="2025-07-07T06:15:38.821239770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:38.822234 containerd[1566]: time="2025-07-07T06:15:38.822184893Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.128433752s" Jul 7 06:15:38.822234 containerd[1566]: time="2025-07-07T06:15:38.822219408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 7 06:15:38.823184 containerd[1566]: time="2025-07-07T06:15:38.823146397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 06:15:39.448669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:15:39.450803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:15:39.638067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:15:39.641988 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:15:39.675437 kubelet[2063]: E0707 06:15:39.675350 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:15:39.681195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:15:39.681433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:15:39.681819 systemd[1]: kubelet.service: Consumed 212ms CPU time, 110.6M memory peak. Jul 7 06:15:40.568510 containerd[1566]: time="2025-07-07T06:15:40.568459559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:40.569410 containerd[1566]: time="2025-07-07T06:15:40.569285528Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 7 06:15:40.570494 containerd[1566]: time="2025-07-07T06:15:40.570458970Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:40.573332 containerd[1566]: time="2025-07-07T06:15:40.572845557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:40.574968 containerd[1566]: time="2025-07-07T06:15:40.574920650Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id 
\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.751739699s" Jul 7 06:15:40.574968 containerd[1566]: time="2025-07-07T06:15:40.574960705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 7 06:15:40.575502 containerd[1566]: time="2025-07-07T06:15:40.575475341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 06:15:41.780342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858644275.mount: Deactivated successfully. Jul 7 06:15:42.201689 containerd[1566]: time="2025-07-07T06:15:42.201554582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:42.202483 containerd[1566]: time="2025-07-07T06:15:42.202427850Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 7 06:15:42.203684 containerd[1566]: time="2025-07-07T06:15:42.203649021Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:42.205464 containerd[1566]: time="2025-07-07T06:15:42.205431475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:42.205961 containerd[1566]: time="2025-07-07T06:15:42.205907137Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.630403614s" Jul 7 06:15:42.205961 containerd[1566]: time="2025-07-07T06:15:42.205958934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 06:15:42.206620 containerd[1566]: time="2025-07-07T06:15:42.206565683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:15:42.722423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299940949.mount: Deactivated successfully. Jul 7 06:15:43.709465 containerd[1566]: time="2025-07-07T06:15:43.709403293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:43.710135 containerd[1566]: time="2025-07-07T06:15:43.710062630Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 06:15:43.711463 containerd[1566]: time="2025-07-07T06:15:43.711413625Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:43.714266 containerd[1566]: time="2025-07-07T06:15:43.714222875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:43.715671 containerd[1566]: time="2025-07-07T06:15:43.715542922Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.508937384s" Jul 7 06:15:43.715717 containerd[1566]: time="2025-07-07T06:15:43.715676462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 06:15:43.716472 containerd[1566]: time="2025-07-07T06:15:43.716288751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:15:44.279489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163864348.mount: Deactivated successfully. Jul 7 06:15:44.285251 containerd[1566]: time="2025-07-07T06:15:44.285210278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:15:44.285957 containerd[1566]: time="2025-07-07T06:15:44.285913658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:15:44.287968 containerd[1566]: time="2025-07-07T06:15:44.287917357Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:15:44.290895 containerd[1566]: time="2025-07-07T06:15:44.290846462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:15:44.291384 containerd[1566]: time="2025-07-07T06:15:44.291353003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.022263ms" Jul 7 06:15:44.291384 containerd[1566]: time="2025-07-07T06:15:44.291380704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:15:44.291928 containerd[1566]: time="2025-07-07T06:15:44.291898486Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 06:15:45.616952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131894801.mount: Deactivated successfully. Jul 7 06:15:48.328673 containerd[1566]: time="2025-07-07T06:15:48.328517211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:48.329428 containerd[1566]: time="2025-07-07T06:15:48.329367326Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 7 06:15:48.330701 containerd[1566]: time="2025-07-07T06:15:48.330648509Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:48.333835 containerd[1566]: time="2025-07-07T06:15:48.333795173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:48.334781 containerd[1566]: time="2025-07-07T06:15:48.334729005Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 4.042803307s" Jul 7 06:15:48.334820 containerd[1566]: time="2025-07-07T06:15:48.334782545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 06:15:49.932036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 06:15:49.933790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:15:50.227126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:15:50.240566 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:15:50.286753 kubelet[2226]: E0707 06:15:50.286678 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:15:50.290981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:15:50.291190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:15:50.291612 systemd[1]: kubelet.service: Consumed 299ms CPU time, 111.1M memory peak. Jul 7 06:15:50.864715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:15:50.864933 systemd[1]: kubelet.service: Consumed 299ms CPU time, 111.1M memory peak. Jul 7 06:15:50.867621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:15:50.894410 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)... Jul 7 06:15:50.894433 systemd[1]: Reloading... Jul 7 06:15:50.980397 zram_generator::config[2283]: No configuration found. 
Jul 7 06:15:51.475554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:15:51.591474 systemd[1]: Reloading finished in 696 ms. Jul 7 06:15:51.660919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:15:51.661024 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:15:51.661357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:15:51.661398 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.2M memory peak. Jul 7 06:15:51.663160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:15:51.830993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:15:51.835047 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:15:51.876089 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:15:51.876089 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:15:51.876089 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:15:51.876543 kubelet[2332]: I0707 06:15:51.876134 2332 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:15:52.587170 kubelet[2332]: I0707 06:15:52.587118 2332 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:15:52.587170 kubelet[2332]: I0707 06:15:52.587150 2332 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:15:52.587428 kubelet[2332]: I0707 06:15:52.587403 2332 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:15:52.612511 kubelet[2332]: E0707 06:15:52.612459 2332 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:52.615850 kubelet[2332]: I0707 06:15:52.615818 2332 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:15:52.622581 kubelet[2332]: I0707 06:15:52.622541 2332 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:15:52.629207 kubelet[2332]: I0707 06:15:52.629172 2332 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:15:52.629375 kubelet[2332]: I0707 06:15:52.629278 2332 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:15:52.629504 kubelet[2332]: I0707 06:15:52.629470 2332 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:15:52.629689 kubelet[2332]: I0707 06:15:52.629503 2332 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 7 06:15:52.629877 kubelet[2332]: I0707 06:15:52.629701 2332 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:15:52.629877 kubelet[2332]: I0707 06:15:52.629712 2332 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:15:52.629877 kubelet[2332]: I0707 06:15:52.629840 2332 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:15:52.631815 kubelet[2332]: I0707 06:15:52.631790 2332 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:15:52.631815 kubelet[2332]: I0707 06:15:52.631813 2332 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:15:52.631896 kubelet[2332]: I0707 06:15:52.631852 2332 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:15:52.631896 kubelet[2332]: I0707 06:15:52.631874 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:15:52.638249 kubelet[2332]: W0707 06:15:52.638182 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:52.638249 kubelet[2332]: W0707 06:15:52.638208 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:52.638377 kubelet[2332]: E0707 06:15:52.638268 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:52.638377 kubelet[2332]: E0707 
06:15:52.638265 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:52.638445 kubelet[2332]: I0707 06:15:52.638412 2332 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:15:52.639160 kubelet[2332]: I0707 06:15:52.639073 2332 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:15:52.639160 kubelet[2332]: W0707 06:15:52.639160 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:15:52.641456 kubelet[2332]: I0707 06:15:52.641429 2332 server.go:1274] "Started kubelet" Jul 7 06:15:52.641513 kubelet[2332]: I0707 06:15:52.641485 2332 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:15:52.642173 kubelet[2332]: I0707 06:15:52.641581 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:15:52.642173 kubelet[2332]: I0707 06:15:52.641951 2332 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:15:52.642589 kubelet[2332]: I0707 06:15:52.642571 2332 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:15:52.643014 kubelet[2332]: I0707 06:15:52.642986 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:15:52.645217 kubelet[2332]: I0707 06:15:52.645182 2332 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:15:52.645305 kubelet[2332]: I0707 06:15:52.645285 2332 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:15:52.646690 kubelet[2332]: E0707 06:15:52.645381 2332 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe387d739f63f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:15:52.641402431 +0000 UTC m=+0.798704147,LastTimestamp:2025-07-07 06:15:52.641402431 +0000 UTC m=+0.798704147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:15:52.646834 kubelet[2332]: W0707 06:15:52.646530 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:52.646928 kubelet[2332]: E0707 06:15:52.646914 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:52.647000 kubelet[2332]: E0707 06:15:52.646599 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:52.647075 kubelet[2332]: I0707 06:15:52.647054 2332 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:15:52.647128 
kubelet[2332]: I0707 06:15:52.647119 2332 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:15:52.647183 kubelet[2332]: I0707 06:15:52.647140 2332 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:15:52.647605 kubelet[2332]: I0707 06:15:52.647382 2332 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:15:52.647683 kubelet[2332]: E0707 06:15:52.647628 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Jul 7 06:15:52.648067 kubelet[2332]: E0707 06:15:52.648049 2332 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:15:52.648657 kubelet[2332]: I0707 06:15:52.648624 2332 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:15:52.662674 kubelet[2332]: I0707 06:15:52.662526 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:15:52.663828 kubelet[2332]: I0707 06:15:52.663793 2332 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:15:52.663828 kubelet[2332]: I0707 06:15:52.663822 2332 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:15:52.663904 kubelet[2332]: I0707 06:15:52.663839 2332 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:15:52.663904 kubelet[2332]: E0707 06:15:52.663879 2332 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:15:52.666255 kubelet[2332]: I0707 06:15:52.666241 2332 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:15:52.666342 kubelet[2332]: I0707 06:15:52.666330 2332 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:15:52.666405 kubelet[2332]: I0707 06:15:52.666386 2332 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:15:52.667303 kubelet[2332]: W0707 06:15:52.667258 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:52.668803 kubelet[2332]: E0707 06:15:52.668771 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:52.747145 kubelet[2332]: E0707 06:15:52.747100 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:52.764416 kubelet[2332]: E0707 06:15:52.764381 2332 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:15:52.847676 kubelet[2332]: E0707 06:15:52.847570 2332 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:52.848389 kubelet[2332]: E0707 06:15:52.848344 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Jul 7 06:15:52.947744 kubelet[2332]: E0707 06:15:52.947697 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:52.965024 kubelet[2332]: E0707 06:15:52.964984 2332 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:15:53.048454 kubelet[2332]: E0707 06:15:53.048409 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:53.149125 kubelet[2332]: E0707 06:15:53.148996 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:53.249005 kubelet[2332]: E0707 06:15:53.248919 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Jul 7 06:15:53.250007 kubelet[2332]: E0707 06:15:53.249955 2332 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:15:53.297295 kubelet[2332]: I0707 06:15:53.297246 2332 policy_none.go:49] "None policy: Start" Jul 7 06:15:53.298247 kubelet[2332]: I0707 06:15:53.298210 2332 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:15:53.298247 kubelet[2332]: I0707 06:15:53.298237 2332 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:15:53.310072 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 7 06:15:53.326408 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:15:53.330068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:15:53.349346 kubelet[2332]: I0707 06:15:53.349254 2332 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:15:53.349486 kubelet[2332]: I0707 06:15:53.349455 2332 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:15:53.349486 kubelet[2332]: I0707 06:15:53.349471 2332 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:15:53.349794 kubelet[2332]: I0707 06:15:53.349769 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:15:53.351115 kubelet[2332]: E0707 06:15:53.351074 2332 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:15:53.373831 systemd[1]: Created slice kubepods-burstable-pod39b01e1174f21096ff2efe04dee408a5.slice - libcontainer container kubepods-burstable-pod39b01e1174f21096ff2efe04dee408a5.slice. Jul 7 06:15:53.394093 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 7 06:15:53.407939 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 7 06:15:53.451455 kubelet[2332]: I0707 06:15:53.451387 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:15:53.451568 kubelet[2332]: I0707 06:15:53.451436 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:15:53.452056 kubelet[2332]: I0707 06:15:53.451652 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:15:53.452056 kubelet[2332]: I0707 06:15:53.451722 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:15:53.452056 kubelet[2332]: I0707 06:15:53.451767 2332 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:15:53.452056 kubelet[2332]: I0707 06:15:53.451789 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:15:53.452056 kubelet[2332]: I0707 06:15:53.451821 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39b01e1174f21096ff2efe04dee408a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39b01e1174f21096ff2efe04dee408a5\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:15:53.452056 kubelet[2332]: I0707 06:15:53.451843 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39b01e1174f21096ff2efe04dee408a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39b01e1174f21096ff2efe04dee408a5\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:15:53.452204 kubelet[2332]: I0707 06:15:53.451860 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39b01e1174f21096ff2efe04dee408a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39b01e1174f21096ff2efe04dee408a5\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:15:53.452204 kubelet[2332]: I0707 06:15:53.451922 2332 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:15:53.452204 kubelet[2332]: E0707 06:15:53.452129 2332 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: 
connect: connection refused" node="localhost" Jul 7 06:15:53.543867 kubelet[2332]: W0707 06:15:53.543776 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:53.543867 kubelet[2332]: E0707 06:15:53.543861 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:53.634496 kubelet[2332]: W0707 06:15:53.634414 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:53.634572 kubelet[2332]: E0707 06:15:53.634510 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:53.653872 kubelet[2332]: I0707 06:15:53.653828 2332 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:15:53.654255 kubelet[2332]: E0707 06:15:53.654210 2332 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jul 7 06:15:53.693519 containerd[1566]: time="2025-07-07T06:15:53.693374426Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39b01e1174f21096ff2efe04dee408a5,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:53.706204 containerd[1566]: time="2025-07-07T06:15:53.706137260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:53.711012 containerd[1566]: time="2025-07-07T06:15:53.710986487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:54.008382 kubelet[2332]: W0707 06:15:54.008236 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:54.008382 kubelet[2332]: E0707 06:15:54.008301 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:15:54.016948 kubelet[2332]: W0707 06:15:54.016907 2332 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jul 7 06:15:54.016993 kubelet[2332]: E0707 06:15:54.016950 2332 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" 
logger="UnhandledError"
Jul 7 06:15:54.049688 kubelet[2332]: E0707 06:15:54.049650 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s"
Jul 7 06:15:54.055595 kubelet[2332]: I0707 06:15:54.055572 2332 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:15:54.055809 kubelet[2332]: E0707 06:15:54.055788 2332 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost"
Jul 7 06:15:54.783962 kubelet[2332]: E0707 06:15:54.783924 2332 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:15:54.857450 kubelet[2332]: I0707 06:15:54.857432 2332 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:15:54.857670 kubelet[2332]: E0707 06:15:54.857639 2332 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost"
Jul 7 06:15:54.998547 containerd[1566]: time="2025-07-07T06:15:54.998453698Z" level=info msg="connecting to shim aed8103a1deb3aee8492895bcbd2c04f4de995f9409448498e8284944d0b3812" address="unix:///run/containerd/s/f64607f4186d071546ea0828c6260643193035e14237350c325eb28be15d56ca" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:15:55.004796 containerd[1566]: time="2025-07-07T06:15:55.004569231Z" level=info msg="connecting to shim 98e4763016ae6bb8b4ffdd3aa87ab86d01dc29ea0d8e3a2260f9cab2fe782265" address="unix:///run/containerd/s/f878249fea4957eb027396ed343d8de8787d9f052d21c322d878392f5d47415a" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:15:55.005672 containerd[1566]: time="2025-07-07T06:15:55.005627917Z" level=info msg="connecting to shim 8a65d921f22c833535cf0e35d2a861eb283a407513cd1b2b4ca43db6b0efe7e0" address="unix:///run/containerd/s/2387fe645a85c02040e04d9d6872c02266daaab9ad0f7a1b4df4551b6d645079" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:15:55.262543 systemd[1]: Started cri-containerd-98e4763016ae6bb8b4ffdd3aa87ab86d01dc29ea0d8e3a2260f9cab2fe782265.scope - libcontainer container 98e4763016ae6bb8b4ffdd3aa87ab86d01dc29ea0d8e3a2260f9cab2fe782265.
Jul 7 06:15:55.267739 systemd[1]: Started cri-containerd-8a65d921f22c833535cf0e35d2a861eb283a407513cd1b2b4ca43db6b0efe7e0.scope - libcontainer container 8a65d921f22c833535cf0e35d2a861eb283a407513cd1b2b4ca43db6b0efe7e0.
Jul 7 06:15:55.269581 systemd[1]: Started cri-containerd-aed8103a1deb3aee8492895bcbd2c04f4de995f9409448498e8284944d0b3812.scope - libcontainer container aed8103a1deb3aee8492895bcbd2c04f4de995f9409448498e8284944d0b3812.
Jul 7 06:15:55.323860 containerd[1566]: time="2025-07-07T06:15:55.323805731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"98e4763016ae6bb8b4ffdd3aa87ab86d01dc29ea0d8e3a2260f9cab2fe782265\""
Jul 7 06:15:55.327993 containerd[1566]: time="2025-07-07T06:15:55.327966557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39b01e1174f21096ff2efe04dee408a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a65d921f22c833535cf0e35d2a861eb283a407513cd1b2b4ca43db6b0efe7e0\""
Jul 7 06:15:55.328882 containerd[1566]: time="2025-07-07T06:15:55.328863139Z" level=info msg="CreateContainer within sandbox \"98e4763016ae6bb8b4ffdd3aa87ab86d01dc29ea0d8e3a2260f9cab2fe782265\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 06:15:55.329343 containerd[1566]: time="2025-07-07T06:15:55.329209780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed8103a1deb3aee8492895bcbd2c04f4de995f9409448498e8284944d0b3812\""
Jul 7 06:15:55.330302 containerd[1566]: time="2025-07-07T06:15:55.330270590Z" level=info msg="CreateContainer within sandbox \"8a65d921f22c833535cf0e35d2a861eb283a407513cd1b2b4ca43db6b0efe7e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 06:15:55.331821 containerd[1566]: time="2025-07-07T06:15:55.331785312Z" level=info msg="CreateContainer within sandbox \"aed8103a1deb3aee8492895bcbd2c04f4de995f9409448498e8284944d0b3812\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 06:15:55.341748 containerd[1566]: time="2025-07-07T06:15:55.341716633Z" level=info msg="Container 622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:15:55.345974 containerd[1566]: time="2025-07-07T06:15:55.345939366Z" level=info msg="Container fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:15:55.351672 containerd[1566]: time="2025-07-07T06:15:55.351636664Z" level=info msg="Container d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:15:55.354449 containerd[1566]: time="2025-07-07T06:15:55.354409276Z" level=info msg="CreateContainer within sandbox \"98e4763016ae6bb8b4ffdd3aa87ab86d01dc29ea0d8e3a2260f9cab2fe782265\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef\""
Jul 7 06:15:55.354936 containerd[1566]: time="2025-07-07T06:15:55.354911468Z" level=info msg="StartContainer for \"622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef\""
Jul 7 06:15:55.355884 containerd[1566]: time="2025-07-07T06:15:55.355848786Z" level=info msg="connecting to shim 622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef" address="unix:///run/containerd/s/f878249fea4957eb027396ed343d8de8787d9f052d21c322d878392f5d47415a" protocol=ttrpc version=3
Jul 7 06:15:55.357558 containerd[1566]: time="2025-07-07T06:15:55.357516475Z" level=info msg="CreateContainer within sandbox \"8a65d921f22c833535cf0e35d2a861eb283a407513cd1b2b4ca43db6b0efe7e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2\""
Jul 7 06:15:55.357927 containerd[1566]: time="2025-07-07T06:15:55.357905996Z" level=info msg="StartContainer for \"fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2\""
Jul 7 06:15:55.358954 containerd[1566]: time="2025-07-07T06:15:55.358923595Z" level=info msg="connecting to shim fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2" address="unix:///run/containerd/s/2387fe645a85c02040e04d9d6872c02266daaab9ad0f7a1b4df4551b6d645079" protocol=ttrpc version=3
Jul 7 06:15:55.363737 containerd[1566]: time="2025-07-07T06:15:55.363705075Z" level=info msg="CreateContainer within sandbox \"aed8103a1deb3aee8492895bcbd2c04f4de995f9409448498e8284944d0b3812\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3\""
Jul 7 06:15:55.364344 containerd[1566]: time="2025-07-07T06:15:55.364293419Z" level=info msg="StartContainer for \"d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3\""
Jul 7 06:15:55.365731 containerd[1566]: time="2025-07-07T06:15:55.365698004Z" level=info msg="connecting to shim d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3" address="unix:///run/containerd/s/f64607f4186d071546ea0828c6260643193035e14237350c325eb28be15d56ca" protocol=ttrpc version=3
Jul 7 06:15:55.377460 systemd[1]: Started cri-containerd-622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef.scope - libcontainer container 622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef.
Jul 7 06:15:55.380718 systemd[1]: Started cri-containerd-fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2.scope - libcontainer container fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2.
Jul 7 06:15:55.388894 systemd[1]: Started cri-containerd-d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3.scope - libcontainer container d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3.
Jul 7 06:15:55.436910 containerd[1566]: time="2025-07-07T06:15:55.436868405Z" level=info msg="StartContainer for \"622a3e0ed624b0ad99dfef28ee1fdd32fed60707fee1f5ff9436955c979dd9ef\" returns successfully"
Jul 7 06:15:55.440690 containerd[1566]: time="2025-07-07T06:15:55.440648106Z" level=info msg="StartContainer for \"fff9631cdd9f1e926b39beab8ef34cdec24679b413064e87be253ec646cf53a2\" returns successfully"
Jul 7 06:15:55.450649 containerd[1566]: time="2025-07-07T06:15:55.450485762Z" level=info msg="StartContainer for \"d699294f071ba4c409e98099d93206ab1d53f43a3e767ef267b06fd2fe7078e3\" returns successfully"
Jul 7 06:15:55.651043 kubelet[2332]: E0707 06:15:55.650982 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="3.2s"
Jul 7 06:15:56.460208 kubelet[2332]: I0707 06:15:56.460152 2332 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:15:57.546261 kubelet[2332]: I0707 06:15:57.546197 2332 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 7 06:15:57.546261 kubelet[2332]: E0707 06:15:57.546240 2332 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 7 06:15:57.559397 kubelet[2332]: E0707 06:15:57.559275 2332 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184fe387d739f63f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:15:52.641402431 +0000 UTC m=+0.798704147,LastTimestamp:2025-07-07 06:15:52.641402431 +0000 UTC m=+0.798704147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 7 06:15:57.635763 kubelet[2332]: I0707 06:15:57.635725 2332 apiserver.go:52] "Watching apiserver"
Jul 7 06:15:57.647972 kubelet[2332]: I0707 06:15:57.647939 2332 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 06:15:57.866723 kubelet[2332]: E0707 06:15:57.866630 2332 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184fe387d79f4133 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:15:52.648040755 +0000 UTC m=+0.805342481,LastTimestamp:2025-07-07 06:15:52.648040755 +0000 UTC m=+0.805342481,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 7 06:15:59.272278 systemd[1]: Reload requested from client PID 2606 ('systemctl') (unit session-7.scope)...
Jul 7 06:15:59.272291 systemd[1]: Reloading...
Jul 7 06:15:59.489448 zram_generator::config[2648]: No configuration found.
Jul 7 06:15:59.605743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:59.737691 systemd[1]: Reloading finished in 465 ms.
Jul 7 06:15:59.771285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:15:59.794451 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 06:15:59.794790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:15:59.794842 systemd[1]: kubelet.service: Consumed 1.279s CPU time, 132.2M memory peak.
Jul 7 06:15:59.796663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:00.006281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:00.012704 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:16:00.058363 kubelet[2694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:16:00.058363 kubelet[2694]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:16:00.058363 kubelet[2694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:16:00.058915 kubelet[2694]: I0707 06:16:00.058424 2694 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:16:00.064623 kubelet[2694]: I0707 06:16:00.064591 2694 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 06:16:00.064623 kubelet[2694]: I0707 06:16:00.064614 2694 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:16:00.064825 kubelet[2694]: I0707 06:16:00.064802 2694 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 06:16:00.066069 kubelet[2694]: I0707 06:16:00.066044 2694 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 06:16:00.067995 kubelet[2694]: I0707 06:16:00.067950 2694 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:16:00.073326 kubelet[2694]: I0707 06:16:00.073292 2694 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:16:00.078593 kubelet[2694]: I0707 06:16:00.078566 2694 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:16:00.078682 kubelet[2694]: I0707 06:16:00.078660 2694 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 06:16:00.078827 kubelet[2694]: I0707 06:16:00.078794 2694 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:16:00.079000 kubelet[2694]: I0707 06:16:00.078819 2694 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:16:00.079084 kubelet[2694]: I0707 06:16:00.079002 2694 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:16:00.079084 kubelet[2694]: I0707 06:16:00.079011 2694 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 06:16:00.079084 kubelet[2694]: I0707 06:16:00.079034 2694 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:16:00.079149 kubelet[2694]: I0707 06:16:00.079131 2694 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 06:16:00.079149 kubelet[2694]: I0707 06:16:00.079143 2694 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:16:00.079195 kubelet[2694]: I0707 06:16:00.079172 2694 kubelet.go:314] "Adding apiserver pod source"
Jul 7 06:16:00.079195 kubelet[2694]: I0707 06:16:00.079183 2694 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:16:00.080296 kubelet[2694]: I0707 06:16:00.079596 2694 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:16:00.080296 kubelet[2694]: I0707 06:16:00.080036 2694 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 06:16:00.080738 kubelet[2694]: I0707 06:16:00.080712 2694 server.go:1274] "Started kubelet"
Jul 7 06:16:00.080863 kubelet[2694]: I0707 06:16:00.080836 2694 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:16:00.081165 kubelet[2694]: I0707 06:16:00.081132 2694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:16:00.082417 kubelet[2694]: I0707 06:16:00.081547 2694 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:16:00.082417 kubelet[2694]: I0707 06:16:00.081888 2694 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 06:16:00.084101 kubelet[2694]: I0707 06:16:00.084088 2694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:16:00.093365 kubelet[2694]: I0707 06:16:00.093294 2694 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:16:00.093999 kubelet[2694]: I0707 06:16:00.093984 2694 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 06:16:00.096358 kubelet[2694]: I0707 06:16:00.096345 2694 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 06:16:00.096615 kubelet[2694]: I0707 06:16:00.096605 2694 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:16:00.097244 kubelet[2694]: I0707 06:16:00.097230 2694 factory.go:221] Registration of the systemd container factory successfully
Jul 7 06:16:00.097454 kubelet[2694]: I0707 06:16:00.097436 2694 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:16:00.098480 kubelet[2694]: E0707 06:16:00.098446 2694 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:16:00.099700 kubelet[2694]: I0707 06:16:00.099685 2694 factory.go:221] Registration of the containerd container factory successfully
Jul 7 06:16:00.101230 kubelet[2694]: I0707 06:16:00.101194 2694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:16:00.102498 kubelet[2694]: I0707 06:16:00.102460 2694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:16:00.102498 kubelet[2694]: I0707 06:16:00.102497 2694 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 06:16:00.102579 kubelet[2694]: I0707 06:16:00.102518 2694 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 06:16:00.102579 kubelet[2694]: E0707 06:16:00.102561 2694 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:16:00.134610 kubelet[2694]: I0707 06:16:00.134579 2694 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 06:16:00.134610 kubelet[2694]: I0707 06:16:00.134599 2694 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 06:16:00.134610 kubelet[2694]: I0707 06:16:00.134618 2694 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:16:00.134784 kubelet[2694]: I0707 06:16:00.134764 2694 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 06:16:00.134784 kubelet[2694]: I0707 06:16:00.134772 2694 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 06:16:00.134824 kubelet[2694]: I0707 06:16:00.134789 2694 policy_none.go:49] "None policy: Start"
Jul 7 06:16:00.135510 kubelet[2694]: I0707 06:16:00.135474 2694 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 06:16:00.135567 kubelet[2694]: I0707 06:16:00.135540 2694 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:16:00.135752 kubelet[2694]: I0707 06:16:00.135733 2694 state_mem.go:75] "Updated machine memory state"
Jul 7 06:16:00.141279 kubelet[2694]: I0707 06:16:00.141258 2694 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 06:16:00.141551 kubelet[2694]: I0707 06:16:00.141436 2694 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:16:00.141551 kubelet[2694]: I0707 06:16:00.141452 2694 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:16:00.141744 kubelet[2694]: I0707 06:16:00.141660 2694 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:16:00.244919 kubelet[2694]: I0707 06:16:00.244883 2694 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:16:00.251147 kubelet[2694]: I0707 06:16:00.251097 2694 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 7 06:16:00.251215 kubelet[2694]: I0707 06:16:00.251209 2694 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 7 06:16:00.297877 kubelet[2694]: I0707 06:16:00.297690 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:16:00.297877 kubelet[2694]: I0707 06:16:00.297739 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:16:00.297877 kubelet[2694]: I0707 06:16:00.297778 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:16:00.297877 kubelet[2694]: I0707 06:16:00.297812 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39b01e1174f21096ff2efe04dee408a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39b01e1174f21096ff2efe04dee408a5\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:16:00.297877 kubelet[2694]: I0707 06:16:00.297847 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:16:00.298157 kubelet[2694]: I0707 06:16:00.297879 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:16:00.298157 kubelet[2694]: I0707 06:16:00.297906 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39b01e1174f21096ff2efe04dee408a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39b01e1174f21096ff2efe04dee408a5\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:16:00.298157 kubelet[2694]: I0707 06:16:00.297922 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39b01e1174f21096ff2efe04dee408a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39b01e1174f21096ff2efe04dee408a5\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:16:00.298157 kubelet[2694]: I0707 06:16:00.297940 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:16:01.080076 kubelet[2694]: I0707 06:16:01.080039 2694 apiserver.go:52] "Watching apiserver"
Jul 7 06:16:01.097461 kubelet[2694]: I0707 06:16:01.097421 2694 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 7 06:16:01.147130 kubelet[2694]: I0707 06:16:01.147052 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.147036843 podStartE2EDuration="1.147036843s" podCreationTimestamp="2025-07-07 06:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:01.134434451 +0000 UTC m=+1.116404131" watchObservedRunningTime="2025-07-07 06:16:01.147036843 +0000 UTC m=+1.129006523"
Jul 7 06:16:01.154930 kubelet[2694]: I0707 06:16:01.154667 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.154646702 podStartE2EDuration="1.154646702s" podCreationTimestamp="2025-07-07 06:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:01.147170009 +0000 UTC m=+1.129139689" watchObservedRunningTime="2025-07-07 06:16:01.154646702 +0000 UTC m=+1.136616372"
Jul 7 06:16:01.154930 kubelet[2694]: I0707 06:16:01.154811 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.154806529 podStartE2EDuration="1.154806529s" podCreationTimestamp="2025-07-07 06:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:01.1545363 +0000 UTC m=+1.136505980" watchObservedRunningTime="2025-07-07 06:16:01.154806529 +0000 UTC m=+1.136776209"
Jul 7 06:16:04.139279 kubelet[2694]: I0707 06:16:04.139242 2694 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:16:04.139852 containerd[1566]: time="2025-07-07T06:16:04.139769331Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 06:16:04.140621 kubelet[2694]: I0707 06:16:04.140201 2694 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:16:04.961029 systemd[1]: Created slice kubepods-besteffort-pod192b2a71_4ce4_4b2e_b99c_8a764f510624.slice - libcontainer container kubepods-besteffort-pod192b2a71_4ce4_4b2e_b99c_8a764f510624.slice.
Jul 7 06:16:05.032638 kubelet[2694]: I0707 06:16:05.032595 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/192b2a71-4ce4-4b2e-b99c-8a764f510624-kube-proxy\") pod \"kube-proxy-jmh89\" (UID: \"192b2a71-4ce4-4b2e-b99c-8a764f510624\") " pod="kube-system/kube-proxy-jmh89"
Jul 7 06:16:05.032782 kubelet[2694]: I0707 06:16:05.032650 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/192b2a71-4ce4-4b2e-b99c-8a764f510624-xtables-lock\") pod \"kube-proxy-jmh89\" (UID: \"192b2a71-4ce4-4b2e-b99c-8a764f510624\") " pod="kube-system/kube-proxy-jmh89"
Jul 7 06:16:05.032782 kubelet[2694]: I0707 06:16:05.032674 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/192b2a71-4ce4-4b2e-b99c-8a764f510624-lib-modules\") pod \"kube-proxy-jmh89\" (UID: \"192b2a71-4ce4-4b2e-b99c-8a764f510624\") " pod="kube-system/kube-proxy-jmh89"
Jul 7 06:16:05.032782 kubelet[2694]: I0707 06:16:05.032698 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrplf\" (UniqueName: \"kubernetes.io/projected/192b2a71-4ce4-4b2e-b99c-8a764f510624-kube-api-access-xrplf\") pod \"kube-proxy-jmh89\" (UID: \"192b2a71-4ce4-4b2e-b99c-8a764f510624\") " pod="kube-system/kube-proxy-jmh89"
Jul 7 06:16:05.270404 containerd[1566]: time="2025-07-07T06:16:05.270243066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmh89,Uid:192b2a71-4ce4-4b2e-b99c-8a764f510624,Namespace:kube-system,Attempt:0,}"
Jul 7 06:16:05.273609 systemd[1]: Created slice kubepods-besteffort-pod0923ba34_c94d_47f9_b967_0a3ff0d29692.slice - libcontainer container kubepods-besteffort-pod0923ba34_c94d_47f9_b967_0a3ff0d29692.slice.
Jul 7 06:16:05.302368 containerd[1566]: time="2025-07-07T06:16:05.302284607Z" level=info msg="connecting to shim 997f826ba7ef7c0cbbc428a3b8f7f35dab3e1027e697c62a3c033f4692a9878b" address="unix:///run/containerd/s/90c461fe2a3f13f8acbd31194ec48588dfb9694f6367c00423d1511fad5c6f31" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:16:05.332461 systemd[1]: Started cri-containerd-997f826ba7ef7c0cbbc428a3b8f7f35dab3e1027e697c62a3c033f4692a9878b.scope - libcontainer container 997f826ba7ef7c0cbbc428a3b8f7f35dab3e1027e697c62a3c033f4692a9878b.
Jul 7 06:16:05.334589 kubelet[2694]: I0707 06:16:05.334453 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0923ba34-c94d-47f9-b967-0a3ff0d29692-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-89rd9\" (UID: \"0923ba34-c94d-47f9-b967-0a3ff0d29692\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-89rd9"
Jul 7 06:16:05.334589 kubelet[2694]: I0707 06:16:05.334522 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxtx\" (UniqueName: \"kubernetes.io/projected/0923ba34-c94d-47f9-b967-0a3ff0d29692-kube-api-access-8fxtx\") pod \"tigera-operator-5bf8dfcb4-89rd9\" (UID: \"0923ba34-c94d-47f9-b967-0a3ff0d29692\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-89rd9"
Jul 7 06:16:05.359215 containerd[1566]: time="2025-07-07T06:16:05.359149555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmh89,Uid:192b2a71-4ce4-4b2e-b99c-8a764f510624,Namespace:kube-system,Attempt:0,} returns sandbox id \"997f826ba7ef7c0cbbc428a3b8f7f35dab3e1027e697c62a3c033f4692a9878b\""
Jul 7 06:16:05.361963 containerd[1566]: time="2025-07-07T06:16:05.361932720Z" level=info msg="CreateContainer within sandbox \"997f826ba7ef7c0cbbc428a3b8f7f35dab3e1027e697c62a3c033f4692a9878b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:16:05.372830 containerd[1566]: time="2025-07-07T06:16:05.372768404Z" level=info msg="Container c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:05.377280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319294230.mount: Deactivated successfully.
Jul 7 06:16:05.381685 containerd[1566]: time="2025-07-07T06:16:05.381641569Z" level=info msg="CreateContainer within sandbox \"997f826ba7ef7c0cbbc428a3b8f7f35dab3e1027e697c62a3c033f4692a9878b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245\"" Jul 7 06:16:05.382393 containerd[1566]: time="2025-07-07T06:16:05.382071391Z" level=info msg="StartContainer for \"c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245\"" Jul 7 06:16:05.383805 containerd[1566]: time="2025-07-07T06:16:05.383779993Z" level=info msg="connecting to shim c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245" address="unix:///run/containerd/s/90c461fe2a3f13f8acbd31194ec48588dfb9694f6367c00423d1511fad5c6f31" protocol=ttrpc version=3 Jul 7 06:16:05.405473 systemd[1]: Started cri-containerd-c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245.scope - libcontainer container c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245. 
Jul 7 06:16:05.447746 containerd[1566]: time="2025-07-07T06:16:05.447691809Z" level=info msg="StartContainer for \"c1763f1bf695d2af64b2a769a9396d62342e3292d4683936e3b064a241438245\" returns successfully" Jul 7 06:16:05.578127 containerd[1566]: time="2025-07-07T06:16:05.578001299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-89rd9,Uid:0923ba34-c94d-47f9-b967-0a3ff0d29692,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:16:05.792671 containerd[1566]: time="2025-07-07T06:16:05.792614097Z" level=info msg="connecting to shim 3fd3149b0fa376ff1b29e021e9da927bce32185234bfccdfc147822407f32811" address="unix:///run/containerd/s/4ae34c41f5002688f6f819609ff39afd7ad9b48ea2698f0d4aefe1727842369e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:05.816468 systemd[1]: Started cri-containerd-3fd3149b0fa376ff1b29e021e9da927bce32185234bfccdfc147822407f32811.scope - libcontainer container 3fd3149b0fa376ff1b29e021e9da927bce32185234bfccdfc147822407f32811. Jul 7 06:16:05.882558 containerd[1566]: time="2025-07-07T06:16:05.882499717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-89rd9,Uid:0923ba34-c94d-47f9-b967-0a3ff0d29692,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3fd3149b0fa376ff1b29e021e9da927bce32185234bfccdfc147822407f32811\"" Jul 7 06:16:05.884137 containerd[1566]: time="2025-07-07T06:16:05.884108870Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:16:06.147150 kubelet[2694]: I0707 06:16:06.146283 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jmh89" podStartSLOduration=2.146260738 podStartE2EDuration="2.146260738s" podCreationTimestamp="2025-07-07 06:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:06.145981735 +0000 UTC m=+6.127951415" watchObservedRunningTime="2025-07-07 06:16:06.146260738 
+0000 UTC m=+6.128230418" Jul 7 06:16:07.196934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35001196.mount: Deactivated successfully. Jul 7 06:16:07.543335 containerd[1566]: time="2025-07-07T06:16:07.543209650Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:07.543895 containerd[1566]: time="2025-07-07T06:16:07.543849258Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:16:07.545083 containerd[1566]: time="2025-07-07T06:16:07.545047152Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:07.547026 containerd[1566]: time="2025-07-07T06:16:07.546969525Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:07.547513 containerd[1566]: time="2025-07-07T06:16:07.547479817Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.663345088s" Jul 7 06:16:07.547553 containerd[1566]: time="2025-07-07T06:16:07.547517970Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:16:07.553066 containerd[1566]: time="2025-07-07T06:16:07.553041236Z" level=info msg="CreateContainer within sandbox \"3fd3149b0fa376ff1b29e021e9da927bce32185234bfccdfc147822407f32811\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:16:07.562209 containerd[1566]: time="2025-07-07T06:16:07.562177907Z" level=info msg="Container a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:07.573255 containerd[1566]: time="2025-07-07T06:16:07.573207286Z" level=info msg="CreateContainer within sandbox \"3fd3149b0fa376ff1b29e021e9da927bce32185234bfccdfc147822407f32811\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef\"" Jul 7 06:16:07.573781 containerd[1566]: time="2025-07-07T06:16:07.573725974Z" level=info msg="StartContainer for \"a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef\"" Jul 7 06:16:07.574523 containerd[1566]: time="2025-07-07T06:16:07.574487135Z" level=info msg="connecting to shim a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef" address="unix:///run/containerd/s/4ae34c41f5002688f6f819609ff39afd7ad9b48ea2698f0d4aefe1727842369e" protocol=ttrpc version=3 Jul 7 06:16:07.629451 systemd[1]: Started cri-containerd-a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef.scope - libcontainer container a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef. 
Jul 7 06:16:07.658837 containerd[1566]: time="2025-07-07T06:16:07.658786062Z" level=info msg="StartContainer for \"a85cfca14d15e94ddbdab778ffea4f87ee0285ffd4d517fa7944945007d3c6ef\" returns successfully" Jul 7 06:16:09.810900 kubelet[2694]: I0707 06:16:09.810821 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-89rd9" podStartSLOduration=3.143148618 podStartE2EDuration="4.810801417s" podCreationTimestamp="2025-07-07 06:16:05 +0000 UTC" firstStartedPulling="2025-07-07 06:16:05.88377374 +0000 UTC m=+5.865743420" lastFinishedPulling="2025-07-07 06:16:07.551426549 +0000 UTC m=+7.533396219" observedRunningTime="2025-07-07 06:16:08.146706843 +0000 UTC m=+8.128676533" watchObservedRunningTime="2025-07-07 06:16:09.810801417 +0000 UTC m=+9.792771097" Jul 7 06:16:11.965467 update_engine[1541]: I20250707 06:16:11.965390 1541 update_attempter.cc:509] Updating boot flags... Jul 7 06:16:13.909470 sudo[1768]: pam_unix(sudo:session): session closed for user root Jul 7 06:16:13.912367 sshd[1767]: Connection closed by 10.0.0.1 port 60676 Jul 7 06:16:13.912790 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:13.916873 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:60676.service: Deactivated successfully. Jul 7 06:16:13.919493 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:16:13.919862 systemd[1]: session-7.scope: Consumed 4.849s CPU time, 227.1M memory peak. Jul 7 06:16:13.923201 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:16:13.924475 systemd-logind[1538]: Removed session 7. Jul 7 06:16:17.305381 systemd[1]: Created slice kubepods-besteffort-poda85bfe7e_8b17_42d9_b265_8a6c5a34b4a5.slice - libcontainer container kubepods-besteffort-poda85bfe7e_8b17_42d9_b265_8a6c5a34b4a5.slice. 
Jul 7 06:16:17.308957 kubelet[2694]: I0707 06:16:17.308929 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5-typha-certs\") pod \"calico-typha-5bb6c5c448-5pbwb\" (UID: \"a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5\") " pod="calico-system/calico-typha-5bb6c5c448-5pbwb" Jul 7 06:16:17.309565 kubelet[2694]: I0707 06:16:17.309295 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpzn\" (UniqueName: \"kubernetes.io/projected/a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5-kube-api-access-qkpzn\") pod \"calico-typha-5bb6c5c448-5pbwb\" (UID: \"a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5\") " pod="calico-system/calico-typha-5bb6c5c448-5pbwb" Jul 7 06:16:17.309565 kubelet[2694]: I0707 06:16:17.309354 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5-tigera-ca-bundle\") pod \"calico-typha-5bb6c5c448-5pbwb\" (UID: \"a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5\") " pod="calico-system/calico-typha-5bb6c5c448-5pbwb" Jul 7 06:16:17.374340 systemd[1]: Created slice kubepods-besteffort-pod4e5f85b5_cd7b_4f46_a6fd_cd0b846d2b63.slice - libcontainer container kubepods-besteffort-pod4e5f85b5_cd7b_4f46_a6fd_cd0b846d2b63.slice. 
Jul 7 06:16:17.410566 kubelet[2694]: I0707 06:16:17.410207 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-cni-bin-dir\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410566 kubelet[2694]: I0707 06:16:17.410261 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-cni-net-dir\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410566 kubelet[2694]: I0707 06:16:17.410278 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-flexvol-driver-host\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410566 kubelet[2694]: I0707 06:16:17.410304 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-var-lib-calico\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410566 kubelet[2694]: I0707 06:16:17.410366 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-xtables-lock\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410831 kubelet[2694]: I0707 06:16:17.410382 2694 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-var-run-calico\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410831 kubelet[2694]: I0707 06:16:17.410396 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzj9b\" (UniqueName: \"kubernetes.io/projected/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-kube-api-access-pzj9b\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410831 kubelet[2694]: I0707 06:16:17.410437 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-cni-log-dir\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410831 kubelet[2694]: I0707 06:16:17.410453 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-tigera-ca-bundle\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410831 kubelet[2694]: I0707 06:16:17.410478 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-policysync\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410938 kubelet[2694]: I0707 06:16:17.410524 2694 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-lib-modules\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.410938 kubelet[2694]: I0707 06:16:17.410563 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63-node-certs\") pod \"calico-node-8kzf8\" (UID: \"4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63\") " pod="calico-system/calico-node-8kzf8" Jul 7 06:16:17.476025 kubelet[2694]: E0707 06:16:17.475962 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a" Jul 7 06:16:17.511614 kubelet[2694]: I0707 06:16:17.511551 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2be6cb3c-5acb-4657-8b32-4bff02f0153a-socket-dir\") pod \"csi-node-driver-rvh2j\" (UID: \"2be6cb3c-5acb-4657-8b32-4bff02f0153a\") " pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:17.511614 kubelet[2694]: I0707 06:16:17.511598 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2be6cb3c-5acb-4657-8b32-4bff02f0153a-varrun\") pod \"csi-node-driver-rvh2j\" (UID: \"2be6cb3c-5acb-4657-8b32-4bff02f0153a\") " pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:17.511784 kubelet[2694]: I0707 06:16:17.511685 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2be6cb3c-5acb-4657-8b32-4bff02f0153a-kubelet-dir\") pod \"csi-node-driver-rvh2j\" (UID: \"2be6cb3c-5acb-4657-8b32-4bff02f0153a\") " pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:17.511784 kubelet[2694]: I0707 06:16:17.511701 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2be6cb3c-5acb-4657-8b32-4bff02f0153a-registration-dir\") pod \"csi-node-driver-rvh2j\" (UID: \"2be6cb3c-5acb-4657-8b32-4bff02f0153a\") " pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:17.511784 kubelet[2694]: I0707 06:16:17.511758 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkv6\" (UniqueName: \"kubernetes.io/projected/2be6cb3c-5acb-4657-8b32-4bff02f0153a-kube-api-access-9tkv6\") pod \"csi-node-driver-rvh2j\" (UID: \"2be6cb3c-5acb-4657-8b32-4bff02f0153a\") " pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:17.516912 kubelet[2694]: E0707 06:16:17.516809 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.516912 kubelet[2694]: W0707 06:16:17.516835 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.516912 kubelet[2694]: E0707 06:16:17.516866 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.519606 kubelet[2694]: E0707 06:16:17.519566 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.519606 kubelet[2694]: W0707 06:16:17.519577 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.519606 kubelet[2694]: E0707 06:16:17.519587 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.611589 containerd[1566]: time="2025-07-07T06:16:17.611538401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb6c5c448-5pbwb,Uid:a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:17.613095 kubelet[2694]: E0707 06:16:17.613065 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.613095 kubelet[2694]: W0707 06:16:17.613092 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.613188 kubelet[2694]: E0707 06:16:17.613115 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.613395 kubelet[2694]: E0707 06:16:17.613378 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.613395 kubelet[2694]: W0707 06:16:17.613390 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.613471 kubelet[2694]: E0707 06:16:17.613419 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.613704 kubelet[2694]: E0707 06:16:17.613686 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.613704 kubelet[2694]: W0707 06:16:17.613698 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.613828 kubelet[2694]: E0707 06:16:17.613737 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.614058 kubelet[2694]: E0707 06:16:17.613999 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.614058 kubelet[2694]: W0707 06:16:17.614022 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.614058 kubelet[2694]: E0707 06:16:17.614050 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.614298 kubelet[2694]: E0707 06:16:17.614282 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.614298 kubelet[2694]: W0707 06:16:17.614292 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.614376 kubelet[2694]: E0707 06:16:17.614339 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.614604 kubelet[2694]: E0707 06:16:17.614587 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.614604 kubelet[2694]: W0707 06:16:17.614598 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.614684 kubelet[2694]: E0707 06:16:17.614641 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.614828 kubelet[2694]: E0707 06:16:17.614806 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.614828 kubelet[2694]: W0707 06:16:17.614821 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.614939 kubelet[2694]: E0707 06:16:17.614862 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.615072 kubelet[2694]: E0707 06:16:17.615052 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.615118 kubelet[2694]: W0707 06:16:17.615070 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.615239 kubelet[2694]: E0707 06:16:17.615211 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.615414 kubelet[2694]: E0707 06:16:17.615293 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.615414 kubelet[2694]: W0707 06:16:17.615299 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.615414 kubelet[2694]: E0707 06:16:17.615355 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.615523 kubelet[2694]: E0707 06:16:17.615507 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.615523 kubelet[2694]: W0707 06:16:17.615518 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.615573 kubelet[2694]: E0707 06:16:17.615547 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.615688 kubelet[2694]: E0707 06:16:17.615672 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.615688 kubelet[2694]: W0707 06:16:17.615682 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.615741 kubelet[2694]: E0707 06:16:17.615696 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.615865 kubelet[2694]: E0707 06:16:17.615851 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.615865 kubelet[2694]: W0707 06:16:17.615860 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.615915 kubelet[2694]: E0707 06:16:17.615873 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.616136 kubelet[2694]: E0707 06:16:17.616109 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.616136 kubelet[2694]: W0707 06:16:17.616122 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.616292 kubelet[2694]: E0707 06:16:17.616163 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.616385 kubelet[2694]: E0707 06:16:17.616363 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.616410 kubelet[2694]: W0707 06:16:17.616382 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.616432 kubelet[2694]: E0707 06:16:17.616422 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.616668 kubelet[2694]: E0707 06:16:17.616652 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.616668 kubelet[2694]: W0707 06:16:17.616664 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.616728 kubelet[2694]: E0707 06:16:17.616700 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.616871 kubelet[2694]: E0707 06:16:17.616855 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.616871 kubelet[2694]: W0707 06:16:17.616869 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.616924 kubelet[2694]: E0707 06:16:17.616903 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.617058 kubelet[2694]: E0707 06:16:17.617043 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.617058 kubelet[2694]: W0707 06:16:17.617053 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.617171 kubelet[2694]: E0707 06:16:17.617082 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.617223 kubelet[2694]: E0707 06:16:17.617208 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.617223 kubelet[2694]: W0707 06:16:17.617219 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.617272 kubelet[2694]: E0707 06:16:17.617232 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.617453 kubelet[2694]: E0707 06:16:17.617436 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.617453 kubelet[2694]: W0707 06:16:17.617448 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.617530 kubelet[2694]: E0707 06:16:17.617465 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.617701 kubelet[2694]: E0707 06:16:17.617676 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.617701 kubelet[2694]: W0707 06:16:17.617688 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.617701 kubelet[2694]: E0707 06:16:17.617701 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.617891 kubelet[2694]: E0707 06:16:17.617875 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.617891 kubelet[2694]: W0707 06:16:17.617886 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.617956 kubelet[2694]: E0707 06:16:17.617899 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.618090 kubelet[2694]: E0707 06:16:17.618075 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.618090 kubelet[2694]: W0707 06:16:17.618085 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.618135 kubelet[2694]: E0707 06:16:17.618099 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.618391 kubelet[2694]: E0707 06:16:17.618291 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.618391 kubelet[2694]: W0707 06:16:17.618305 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.618391 kubelet[2694]: E0707 06:16:17.618345 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.618622 kubelet[2694]: E0707 06:16:17.618604 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.618622 kubelet[2694]: W0707 06:16:17.618616 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.618677 kubelet[2694]: E0707 06:16:17.618626 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.619039 kubelet[2694]: E0707 06:16:17.619021 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.619039 kubelet[2694]: W0707 06:16:17.619031 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.619039 kubelet[2694]: E0707 06:16:17.619040 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:17.626301 kubelet[2694]: E0707 06:16:17.626264 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:17.626301 kubelet[2694]: W0707 06:16:17.626296 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:17.626392 kubelet[2694]: E0707 06:16:17.626336 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:17.650425 containerd[1566]: time="2025-07-07T06:16:17.650370425Z" level=info msg="connecting to shim b4c835bd9d1daed26561f04a23eb5a17775dc19e8961bcdde36cdea28bb38704" address="unix:///run/containerd/s/4f83d6d53aaa190f72bd904efaa68f680fdf0e3c12a7e61a386ce318151ffc68" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:17.678363 containerd[1566]: time="2025-07-07T06:16:17.678296631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kzf8,Uid:4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:17.683500 systemd[1]: Started cri-containerd-b4c835bd9d1daed26561f04a23eb5a17775dc19e8961bcdde36cdea28bb38704.scope - libcontainer container b4c835bd9d1daed26561f04a23eb5a17775dc19e8961bcdde36cdea28bb38704. 
Jul 7 06:16:17.698406 containerd[1566]: time="2025-07-07T06:16:17.698277878Z" level=info msg="connecting to shim 8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb" address="unix:///run/containerd/s/31f5d75065f965846d99cd489cef4997641bedee32b354fa41a9c5adde736cb4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:17.723705 systemd[1]: Started cri-containerd-8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb.scope - libcontainer container 8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb. Jul 7 06:16:17.729445 containerd[1566]: time="2025-07-07T06:16:17.729330787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb6c5c448-5pbwb,Uid:a85bfe7e-8b17-42d9-b265-8a6c5a34b4a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4c835bd9d1daed26561f04a23eb5a17775dc19e8961bcdde36cdea28bb38704\"" Jul 7 06:16:17.731995 containerd[1566]: time="2025-07-07T06:16:17.731608185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:16:17.756436 containerd[1566]: time="2025-07-07T06:16:17.756388440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kzf8,Uid:4e5f85b5-cd7b-4f46-a6fd-cd0b846d2b63,Namespace:calico-system,Attempt:0,} returns sandbox id \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\"" Jul 7 06:16:19.103330 kubelet[2694]: E0707 06:16:19.103280 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a" Jul 7 06:16:19.108112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142128642.mount: Deactivated successfully. 
Jul 7 06:16:19.540600 containerd[1566]: time="2025-07-07T06:16:19.540498966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:19.541451 containerd[1566]: time="2025-07-07T06:16:19.541411591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:16:19.542489 containerd[1566]: time="2025-07-07T06:16:19.542450494Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:19.544255 containerd[1566]: time="2025-07-07T06:16:19.544204268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:19.544704 containerd[1566]: time="2025-07-07T06:16:19.544660130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.813023751s" Jul 7 06:16:19.544704 containerd[1566]: time="2025-07-07T06:16:19.544685929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:16:19.545394 containerd[1566]: time="2025-07-07T06:16:19.545371244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:16:19.552644 containerd[1566]: time="2025-07-07T06:16:19.552584144Z" level=info msg="CreateContainer within sandbox \"b4c835bd9d1daed26561f04a23eb5a17775dc19e8961bcdde36cdea28bb38704\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:16:19.560178 containerd[1566]: time="2025-07-07T06:16:19.560135053Z" level=info msg="Container 2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:19.568057 containerd[1566]: time="2025-07-07T06:16:19.568031085Z" level=info msg="CreateContainer within sandbox \"b4c835bd9d1daed26561f04a23eb5a17775dc19e8961bcdde36cdea28bb38704\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978\"" Jul 7 06:16:19.568494 containerd[1566]: time="2025-07-07T06:16:19.568446830Z" level=info msg="StartContainer for \"2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978\"" Jul 7 06:16:19.569487 containerd[1566]: time="2025-07-07T06:16:19.569455105Z" level=info msg="connecting to shim 2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978" address="unix:///run/containerd/s/4f83d6d53aaa190f72bd904efaa68f680fdf0e3c12a7e61a386ce318151ffc68" protocol=ttrpc version=3 Jul 7 06:16:19.592584 systemd[1]: Started cri-containerd-2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978.scope - libcontainer container 2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978. 
Jul 7 06:16:19.644863 containerd[1566]: time="2025-07-07T06:16:19.644823451Z" level=info msg="StartContainer for \"2521fc93da0b91166929e636cf229fe3fccb3ef7c0870296c5acd0d6a180c978\" returns successfully" Jul 7 06:16:20.171576 kubelet[2694]: I0707 06:16:20.171511 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bb6c5c448-5pbwb" podStartSLOduration=1.357508889 podStartE2EDuration="3.171496943s" podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="2025-07-07 06:16:17.731267821 +0000 UTC m=+17.713237501" lastFinishedPulling="2025-07-07 06:16:19.545255875 +0000 UTC m=+19.527225555" observedRunningTime="2025-07-07 06:16:20.171159476 +0000 UTC m=+20.153129156" watchObservedRunningTime="2025-07-07 06:16:20.171496943 +0000 UTC m=+20.153466623" Jul 7 06:16:20.216068 kubelet[2694]: E0707 06:16:20.216038 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.216068 kubelet[2694]: W0707 06:16:20.216057 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.216210 kubelet[2694]: E0707 06:16:20.216075 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.216352 kubelet[2694]: E0707 06:16:20.216334 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.216352 kubelet[2694]: W0707 06:16:20.216346 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.216409 kubelet[2694]: E0707 06:16:20.216355 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.216560 kubelet[2694]: E0707 06:16:20.216534 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.216560 kubelet[2694]: W0707 06:16:20.216545 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.216560 kubelet[2694]: E0707 06:16:20.216554 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.216724 kubelet[2694]: E0707 06:16:20.216707 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.216724 kubelet[2694]: W0707 06:16:20.216717 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.216724 kubelet[2694]: E0707 06:16:20.216724 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.216976 kubelet[2694]: E0707 06:16:20.216948 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.216976 kubelet[2694]: W0707 06:16:20.216970 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.217032 kubelet[2694]: E0707 06:16:20.216992 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.217174 kubelet[2694]: E0707 06:16:20.217155 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.217174 kubelet[2694]: W0707 06:16:20.217165 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.217174 kubelet[2694]: E0707 06:16:20.217173 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.217372 kubelet[2694]: E0707 06:16:20.217345 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.217372 kubelet[2694]: W0707 06:16:20.217357 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.217372 kubelet[2694]: E0707 06:16:20.217365 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.217591 kubelet[2694]: E0707 06:16:20.217556 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.217591 kubelet[2694]: W0707 06:16:20.217565 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.217591 kubelet[2694]: E0707 06:16:20.217575 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.217760 kubelet[2694]: E0707 06:16:20.217744 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.217760 kubelet[2694]: W0707 06:16:20.217754 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.217819 kubelet[2694]: E0707 06:16:20.217761 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.217935 kubelet[2694]: E0707 06:16:20.217921 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.217935 kubelet[2694]: W0707 06:16:20.217931 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.217987 kubelet[2694]: E0707 06:16:20.217938 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.218119 kubelet[2694]: E0707 06:16:20.218096 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.218119 kubelet[2694]: W0707 06:16:20.218109 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.218119 kubelet[2694]: E0707 06:16:20.218117 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.218287 kubelet[2694]: E0707 06:16:20.218273 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.218287 kubelet[2694]: W0707 06:16:20.218282 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.218350 kubelet[2694]: E0707 06:16:20.218291 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.218500 kubelet[2694]: E0707 06:16:20.218484 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.218500 kubelet[2694]: W0707 06:16:20.218494 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.218551 kubelet[2694]: E0707 06:16:20.218502 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.218670 kubelet[2694]: E0707 06:16:20.218656 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.218670 kubelet[2694]: W0707 06:16:20.218665 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.218719 kubelet[2694]: E0707 06:16:20.218673 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.218848 kubelet[2694]: E0707 06:16:20.218832 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.218848 kubelet[2694]: W0707 06:16:20.218842 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.218895 kubelet[2694]: E0707 06:16:20.218849 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.233252 kubelet[2694]: E0707 06:16:20.233210 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.233252 kubelet[2694]: W0707 06:16:20.233229 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.233387 kubelet[2694]: E0707 06:16:20.233269 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.233556 kubelet[2694]: E0707 06:16:20.233538 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.233556 kubelet[2694]: W0707 06:16:20.233550 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.233654 kubelet[2694]: E0707 06:16:20.233563 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.233786 kubelet[2694]: E0707 06:16:20.233761 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.233786 kubelet[2694]: W0707 06:16:20.233775 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.233845 kubelet[2694]: E0707 06:16:20.233791 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.234005 kubelet[2694]: E0707 06:16:20.233987 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.234005 kubelet[2694]: W0707 06:16:20.233998 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.234082 kubelet[2694]: E0707 06:16:20.234010 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.234259 kubelet[2694]: E0707 06:16:20.234230 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.234259 kubelet[2694]: W0707 06:16:20.234245 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.234259 kubelet[2694]: E0707 06:16:20.234263 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.234603 kubelet[2694]: E0707 06:16:20.234583 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.234635 kubelet[2694]: W0707 06:16:20.234618 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.234635 kubelet[2694]: E0707 06:16:20.234629 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.235117 kubelet[2694]: E0707 06:16:20.235091 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.235117 kubelet[2694]: W0707 06:16:20.235112 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.235209 kubelet[2694]: E0707 06:16:20.235135 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.235413 kubelet[2694]: E0707 06:16:20.235396 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.235413 kubelet[2694]: W0707 06:16:20.235408 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.235487 kubelet[2694]: E0707 06:16:20.235424 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.235616 kubelet[2694]: E0707 06:16:20.235602 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.235616 kubelet[2694]: W0707 06:16:20.235611 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.235657 kubelet[2694]: E0707 06:16:20.235624 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.235797 kubelet[2694]: E0707 06:16:20.235783 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.235797 kubelet[2694]: W0707 06:16:20.235792 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.235842 kubelet[2694]: E0707 06:16:20.235805 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.236003 kubelet[2694]: E0707 06:16:20.235989 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.236003 kubelet[2694]: W0707 06:16:20.235999 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.236050 kubelet[2694]: E0707 06:16:20.236025 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.236195 kubelet[2694]: E0707 06:16:20.236177 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.236195 kubelet[2694]: W0707 06:16:20.236190 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.236244 kubelet[2694]: E0707 06:16:20.236215 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:16:20.236387 kubelet[2694]: E0707 06:16:20.236372 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.236387 kubelet[2694]: W0707 06:16:20.236383 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.236436 kubelet[2694]: E0707 06:16:20.236399 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:16:20.236585 kubelet[2694]: E0707 06:16:20.236570 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:16:20.236585 kubelet[2694]: W0707 06:16:20.236581 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:16:20.236632 kubelet[2694]: E0707 06:16:20.236594 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 06:16:20.236765 kubelet[2694]: E0707 06:16:20.236750 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:20.236765 kubelet[2694]: W0707 06:16:20.236760 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:20.236821 kubelet[2694]: E0707 06:16:20.236767 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:20.236961 kubelet[2694]: E0707 06:16:20.236944 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:20.236961 kubelet[2694]: W0707 06:16:20.236957 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:20.237011 kubelet[2694]: E0707 06:16:20.236967 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:20.237165 kubelet[2694]: E0707 06:16:20.237149 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:20.237165 kubelet[2694]: W0707 06:16:20.237159 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:20.237216 kubelet[2694]: E0707 06:16:20.237167 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:20.237609 kubelet[2694]: E0707 06:16:20.237584 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:16:20.237609 kubelet[2694]: W0707 06:16:20.237596 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:16:20.237609 kubelet[2694]: E0707 06:16:20.237604 2694 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:16:20.844611 containerd[1566]: time="2025-07-07T06:16:20.844567123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:20.845332 containerd[1566]: time="2025-07-07T06:16:20.845278316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 7 06:16:20.846513 containerd[1566]: time="2025-07-07T06:16:20.846441754Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:20.848500 containerd[1566]: time="2025-07-07T06:16:20.848471337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:20.849024 containerd[1566]: time="2025-07-07T06:16:20.848988895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.303591413s"
Jul 7 06:16:20.849060 containerd[1566]: time="2025-07-07T06:16:20.849028039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 7 06:16:20.851051 containerd[1566]: time="2025-07-07T06:16:20.851017197Z" level=info msg="CreateContainer within sandbox \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 7 06:16:20.858280 containerd[1566]: time="2025-07-07T06:16:20.858244367Z" level=info msg="Container 2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:20.865904 containerd[1566]: time="2025-07-07T06:16:20.865860634Z" level=info msg="CreateContainer within sandbox \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\""
Jul 7 06:16:20.866479 containerd[1566]: time="2025-07-07T06:16:20.866442051Z" level=info msg="StartContainer for \"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\""
Jul 7 06:16:20.867762 containerd[1566]: time="2025-07-07T06:16:20.867733110Z" level=info msg="connecting to shim 2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92" address="unix:///run/containerd/s/31f5d75065f965846d99cd489cef4997641bedee32b354fa41a9c5adde736cb4" protocol=ttrpc version=3
Jul 7 06:16:20.891540 systemd[1]: Started cri-containerd-2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92.scope - libcontainer container 2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92.
Jul 7 06:16:20.943675 systemd[1]: cri-containerd-2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92.scope: Deactivated successfully.
Jul 7 06:16:20.945204 containerd[1566]: time="2025-07-07T06:16:20.945167474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\" id:\"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\" pid:3349 exited_at:{seconds:1751868980 nanos:944730258}"
Jul 7 06:16:21.103107 kubelet[2694]: E0707 06:16:21.102985 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a"
Jul 7 06:16:21.146930 containerd[1566]: time="2025-07-07T06:16:21.146861471Z" level=info msg="received exit event container_id:\"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\" id:\"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\" pid:3349 exited_at:{seconds:1751868980 nanos:944730258}"
Jul 7 06:16:21.149426 containerd[1566]: time="2025-07-07T06:16:21.149387650Z" level=info msg="StartContainer for \"2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92\" returns successfully"
Jul 7 06:16:21.166824 kubelet[2694]: I0707 06:16:21.166780 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:16:21.172342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f4ea3fa4ea3b8d400a76d4502ee42d8ab92fc3d04887118d8252d8b0b09ad92-rootfs.mount: Deactivated successfully.
Jul 7 06:16:22.171111 containerd[1566]: time="2025-07-07T06:16:22.171072411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 06:16:23.102842 kubelet[2694]: E0707 06:16:23.102778 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a"
Jul 7 06:16:23.379987 kubelet[2694]: I0707 06:16:23.379830 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:16:25.102964 kubelet[2694]: E0707 06:16:25.102909 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a"
Jul 7 06:16:25.702345 containerd[1566]: time="2025-07-07T06:16:25.702273644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:25.703295 containerd[1566]: time="2025-07-07T06:16:25.703243703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 06:16:25.704586 containerd[1566]: time="2025-07-07T06:16:25.704559544Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:25.706567 containerd[1566]: time="2025-07-07T06:16:25.706529188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:25.707035 containerd[1566]: time="2025-07-07T06:16:25.707009733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.535904611s"
Jul 7 06:16:25.707075 containerd[1566]: time="2025-07-07T06:16:25.707035432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 06:16:25.708917 containerd[1566]: time="2025-07-07T06:16:25.708889227Z" level=info msg="CreateContainer within sandbox \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 06:16:25.716281 containerd[1566]: time="2025-07-07T06:16:25.716229225Z" level=info msg="Container c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:25.725272 containerd[1566]: time="2025-07-07T06:16:25.725233943Z" level=info msg="CreateContainer within sandbox \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\""
Jul 7 06:16:25.725717 containerd[1566]: time="2025-07-07T06:16:25.725683931Z" level=info msg="StartContainer for \"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\""
Jul 7 06:16:25.726946 containerd[1566]: time="2025-07-07T06:16:25.726920782Z" level=info msg="connecting to shim c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5" address="unix:///run/containerd/s/31f5d75065f965846d99cd489cef4997641bedee32b354fa41a9c5adde736cb4" protocol=ttrpc version=3
Jul 7 06:16:25.750528 systemd[1]: Started cri-containerd-c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5.scope - libcontainer container c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5.
Jul 7 06:16:25.793292 containerd[1566]: time="2025-07-07T06:16:25.793250657Z" level=info msg="StartContainer for \"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\" returns successfully"
Jul 7 06:16:27.016780 containerd[1566]: time="2025-07-07T06:16:27.016728912Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:16:27.019528 systemd[1]: cri-containerd-c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5.scope: Deactivated successfully.
Jul 7 06:16:27.019852 systemd[1]: cri-containerd-c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5.scope: Consumed 602ms CPU time, 178.8M memory peak, 3.9M read from disk, 171.2M written to disk.
Jul 7 06:16:27.020304 containerd[1566]: time="2025-07-07T06:16:27.020274573Z" level=info msg="received exit event container_id:\"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\" id:\"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\" pid:3411 exited_at:{seconds:1751868987 nanos:20073283}"
Jul 7 06:16:27.020593 containerd[1566]: time="2025-07-07T06:16:27.020377005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\" id:\"c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5\" pid:3411 exited_at:{seconds:1751868987 nanos:20073283}"
Jul 7 06:16:27.043040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2eaaad8feed80196e57fcf97559c3e1e198dcb6f69c71a702529a6c1357c0f5-rootfs.mount: Deactivated successfully.
Jul 7 06:16:27.102795 kubelet[2694]: E0707 06:16:27.102746 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a"
Jul 7 06:16:27.106164 kubelet[2694]: I0707 06:16:27.106104 2694 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 7 06:16:27.137702 systemd[1]: Created slice kubepods-burstable-pod354724b8_ee59_4fbc_9a2c_ebe256163c13.slice - libcontainer container kubepods-burstable-pod354724b8_ee59_4fbc_9a2c_ebe256163c13.slice.
Jul 7 06:16:27.146879 systemd[1]: Created slice kubepods-burstable-podc10f9320_730f_40aa_b8e2_f257850dcd64.slice - libcontainer container kubepods-burstable-podc10f9320_730f_40aa_b8e2_f257850dcd64.slice.
Jul 7 06:16:27.153172 systemd[1]: Created slice kubepods-besteffort-pod28530f90_2b99_4c44_b06c_650b4e581a76.slice - libcontainer container kubepods-besteffort-pod28530f90_2b99_4c44_b06c_650b4e581a76.slice.
Jul 7 06:16:27.159541 systemd[1]: Created slice kubepods-besteffort-pod931790e6_7d8d_492a_87d6_fe9dd38cd815.slice - libcontainer container kubepods-besteffort-pod931790e6_7d8d_492a_87d6_fe9dd38cd815.slice.
Jul 7 06:16:27.166024 systemd[1]: Created slice kubepods-besteffort-pod94843d6e_8a43_40d5_85da_6bbccab96ca4.slice - libcontainer container kubepods-besteffort-pod94843d6e_8a43_40d5_85da_6bbccab96ca4.slice.
Jul 7 06:16:27.174957 systemd[1]: Created slice kubepods-besteffort-pode8085134_b468_4922_a478_2eea18059602.slice - libcontainer container kubepods-besteffort-pode8085134_b468_4922_a478_2eea18059602.slice.
Jul 7 06:16:27.180729 systemd[1]: Created slice kubepods-besteffort-pod7aaa5a8e_e6af_4596_a4f9_90e0e93c84e0.slice - libcontainer container kubepods-besteffort-pod7aaa5a8e_e6af_4596_a4f9_90e0e93c84e0.slice.
Jul 7 06:16:27.185897 kubelet[2694]: I0707 06:16:27.185859 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0-config\") pod \"goldmane-58fd7646b9-58qzk\" (UID: \"7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0\") " pod="calico-system/goldmane-58fd7646b9-58qzk"
Jul 7 06:16:27.185978 kubelet[2694]: I0707 06:16:27.185897 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6kxd\" (UniqueName: \"kubernetes.io/projected/94843d6e-8a43-40d5-85da-6bbccab96ca4-kube-api-access-h6kxd\") pod \"calico-kube-controllers-6f447d75cf-cfxdd\" (UID: \"94843d6e-8a43-40d5-85da-6bbccab96ca4\") " pod="calico-system/calico-kube-controllers-6f447d75cf-cfxdd"
Jul 7 06:16:27.185978 kubelet[2694]: I0707 06:16:27.185921 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94843d6e-8a43-40d5-85da-6bbccab96ca4-tigera-ca-bundle\") pod \"calico-kube-controllers-6f447d75cf-cfxdd\" (UID: \"94843d6e-8a43-40d5-85da-6bbccab96ca4\") " pod="calico-system/calico-kube-controllers-6f447d75cf-cfxdd"
Jul 7 06:16:27.185978 kubelet[2694]: I0707 06:16:27.185940 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-backend-key-pair\") pod \"whisker-558b775f5d-l7pg8\" (UID: \"931790e6-7d8d-492a-87d6-fe9dd38cd815\") " pod="calico-system/whisker-558b775f5d-l7pg8"
Jul 7 06:16:27.185978 kubelet[2694]: I0707 06:16:27.185955 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkkz\" (UniqueName: \"kubernetes.io/projected/931790e6-7d8d-492a-87d6-fe9dd38cd815-kube-api-access-8kkkz\") pod \"whisker-558b775f5d-l7pg8\" (UID: \"931790e6-7d8d-492a-87d6-fe9dd38cd815\") " pod="calico-system/whisker-558b775f5d-l7pg8"
Jul 7 06:16:27.185978 kubelet[2694]: I0707 06:16:27.185971 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28530f90-2b99-4c44-b06c-650b4e581a76-calico-apiserver-certs\") pod \"calico-apiserver-7c6c4fc68-ftmpb\" (UID: \"28530f90-2b99-4c44-b06c-650b4e581a76\") " pod="calico-apiserver/calico-apiserver-7c6c4fc68-ftmpb"
Jul 7 06:16:27.186124 kubelet[2694]: I0707 06:16:27.185985 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dw5\" (UniqueName: \"kubernetes.io/projected/28530f90-2b99-4c44-b06c-650b4e581a76-kube-api-access-w8dw5\") pod \"calico-apiserver-7c6c4fc68-ftmpb\" (UID: \"28530f90-2b99-4c44-b06c-650b4e581a76\") " pod="calico-apiserver/calico-apiserver-7c6c4fc68-ftmpb"
Jul 7 06:16:27.186124 kubelet[2694]: I0707 06:16:27.185999 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/354724b8-ee59-4fbc-9a2c-ebe256163c13-config-volume\") pod \"coredns-7c65d6cfc9-zlvw8\" (UID: \"354724b8-ee59-4fbc-9a2c-ebe256163c13\") " pod="kube-system/coredns-7c65d6cfc9-zlvw8"
Jul 7 06:16:27.186124 kubelet[2694]: I0707 06:16:27.186013 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8fz\" (UniqueName: \"kubernetes.io/projected/7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0-kube-api-access-2b8fz\") pod \"goldmane-58fd7646b9-58qzk\" (UID: \"7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0\") " pod="calico-system/goldmane-58fd7646b9-58qzk"
Jul 7 06:16:27.186124 kubelet[2694]: I0707 06:16:27.186027 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c10f9320-730f-40aa-b8e2-f257850dcd64-config-volume\") pod \"coredns-7c65d6cfc9-88nxr\" (UID: \"c10f9320-730f-40aa-b8e2-f257850dcd64\") " pod="kube-system/coredns-7c65d6cfc9-88nxr"
Jul 7 06:16:27.186124 kubelet[2694]: I0707 06:16:27.186042 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0-goldmane-key-pair\") pod \"goldmane-58fd7646b9-58qzk\" (UID: \"7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0\") " pod="calico-system/goldmane-58fd7646b9-58qzk"
Jul 7 06:16:27.186258 kubelet[2694]: I0707 06:16:27.186056 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-ca-bundle\") pod \"whisker-558b775f5d-l7pg8\" (UID: \"931790e6-7d8d-492a-87d6-fe9dd38cd815\") " pod="calico-system/whisker-558b775f5d-l7pg8"
Jul 7 06:16:27.186258 kubelet[2694]: I0707 06:16:27.186070 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8085134-b468-4922-a478-2eea18059602-calico-apiserver-certs\") pod \"calico-apiserver-7c6c4fc68-kmbdk\" (UID: \"e8085134-b468-4922-a478-2eea18059602\") " pod="calico-apiserver/calico-apiserver-7c6c4fc68-kmbdk"
Jul 7 06:16:27.186258 kubelet[2694]: I0707 06:16:27.186130 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8m9c\" (UniqueName: \"kubernetes.io/projected/e8085134-b468-4922-a478-2eea18059602-kube-api-access-x8m9c\") pod \"calico-apiserver-7c6c4fc68-kmbdk\" (UID: \"e8085134-b468-4922-a478-2eea18059602\") " pod="calico-apiserver/calico-apiserver-7c6c4fc68-kmbdk"
Jul 7 06:16:27.186258 kubelet[2694]: I0707 06:16:27.186148 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-58qzk\" (UID: \"7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0\") " pod="calico-system/goldmane-58fd7646b9-58qzk"
Jul 7 06:16:27.186258 kubelet[2694]: I0707 06:16:27.186188 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2jmc\" (UniqueName: \"kubernetes.io/projected/354724b8-ee59-4fbc-9a2c-ebe256163c13-kube-api-access-c2jmc\") pod \"coredns-7c65d6cfc9-zlvw8\" (UID: \"354724b8-ee59-4fbc-9a2c-ebe256163c13\") " pod="kube-system/coredns-7c65d6cfc9-zlvw8"
Jul 7 06:16:27.186418 containerd[1566]: time="2025-07-07T06:16:27.186211313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 7 06:16:27.186470 kubelet[2694]: I0707 06:16:27.186244 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ngr\" (UniqueName: \"kubernetes.io/projected/c10f9320-730f-40aa-b8e2-f257850dcd64-kube-api-access-m6ngr\") pod \"coredns-7c65d6cfc9-88nxr\" (UID: \"c10f9320-730f-40aa-b8e2-f257850dcd64\") " pod="kube-system/coredns-7c65d6cfc9-88nxr"
Jul 7 06:16:27.443695 containerd[1566]: time="2025-07-07T06:16:27.443651736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zlvw8,Uid:354724b8-ee59-4fbc-9a2c-ebe256163c13,Namespace:kube-system,Attempt:0,}"
Jul 7 06:16:27.450427 containerd[1566]: time="2025-07-07T06:16:27.450381487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88nxr,Uid:c10f9320-730f-40aa-b8e2-f257850dcd64,Namespace:kube-system,Attempt:0,}"
Jul 7 06:16:27.458170 containerd[1566]: time="2025-07-07T06:16:27.458122653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-ftmpb,Uid:28530f90-2b99-4c44-b06c-650b4e581a76,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:16:27.462537 containerd[1566]: time="2025-07-07T06:16:27.462502045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b775f5d-l7pg8,Uid:931790e6-7d8d-492a-87d6-fe9dd38cd815,Namespace:calico-system,Attempt:0,}"
Jul 7 06:16:27.472600 containerd[1566]: time="2025-07-07T06:16:27.472528165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f447d75cf-cfxdd,Uid:94843d6e-8a43-40d5-85da-6bbccab96ca4,Namespace:calico-system,Attempt:0,}"
Jul 7 06:16:27.481660 containerd[1566]: time="2025-07-07T06:16:27.481452280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-kmbdk,Uid:e8085134-b468-4922-a478-2eea18059602,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:16:27.486611 containerd[1566]: time="2025-07-07T06:16:27.486589801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-58qzk,Uid:7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0,Namespace:calico-system,Attempt:0,}"
Jul 7 06:16:27.556758 containerd[1566]: time="2025-07-07T06:16:27.556705453Z" level=error msg="Failed to destroy network for sandbox \"cfb180faf81da10786b5c9b1e53f82466957607d419f7febfb4c9f9e950d1735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.568177 containerd[1566]: time="2025-07-07T06:16:27.568003722Z" level=error msg="Failed to destroy network for sandbox \"5a4999825b9f143c47169d740aed213bcc015c307d71974cae4846d6451834a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.569407 containerd[1566]: time="2025-07-07T06:16:27.568111956Z" level=error msg="Failed to destroy network for sandbox \"d5d3b47f0e696f58161e2736bf2627c05285bf3117b7ca530f1533ea7625d8e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.569915 containerd[1566]: time="2025-07-07T06:16:27.569803663Z" level=error msg="Failed to destroy network for sandbox \"af1f929a24a7cabfd6b906498a3f87b09c9c5c1c60160aa7bcc0ad80224ae8e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.576098 containerd[1566]: time="2025-07-07T06:16:27.576050644Z" level=error msg="Failed to destroy network for sandbox \"21e6da951e1a7daee63c60dc0d0f613d24d883c052f724be7dee78973b7f204e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.586592 containerd[1566]: time="2025-07-07T06:16:27.586426004Z" level=error msg="Failed to destroy network for sandbox \"f1b35db01777eec326c706aab65cf6f1b067ced801b604c6a80a6a0bb7fe5680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.587922 containerd[1566]: time="2025-07-07T06:16:27.587861669Z" level=error msg="Failed to destroy network for sandbox \"449d670263bb776164243554fb156e3c8394c2b2d345ff8e672f2638757713e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.590592 containerd[1566]: time="2025-07-07T06:16:27.590536128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-ftmpb,Uid:28530f90-2b99-4c44-b06c-650b4e581a76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb180faf81da10786b5c9b1e53f82466957607d419f7febfb4c9f9e950d1735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.590805 containerd[1566]: time="2025-07-07T06:16:27.590753057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f447d75cf-cfxdd,Uid:94843d6e-8a43-40d5-85da-6bbccab96ca4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4999825b9f143c47169d740aed213bcc015c307d71974cae4846d6451834a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.600942 kubelet[2694]: E0707 06:16:27.600892 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4999825b9f143c47169d740aed213bcc015c307d71974cae4846d6451834a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.601015 kubelet[2694]: E0707 06:16:27.600977 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4999825b9f143c47169d740aed213bcc015c307d71974cae4846d6451834a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f447d75cf-cfxdd"
Jul 7 06:16:27.601015 kubelet[2694]: E0707 06:16:27.600999 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a4999825b9f143c47169d740aed213bcc015c307d71974cae4846d6451834a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f447d75cf-cfxdd"
Jul 7 06:16:27.601066 kubelet[2694]: E0707 06:16:27.600902 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb180faf81da10786b5c9b1e53f82466957607d419f7febfb4c9f9e950d1735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.601066 kubelet[2694]: E0707 06:16:27.601046 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f447d75cf-cfxdd_calico-system(94843d6e-8a43-40d5-85da-6bbccab96ca4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f447d75cf-cfxdd_calico-system(94843d6e-8a43-40d5-85da-6bbccab96ca4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a4999825b9f143c47169d740aed213bcc015c307d71974cae4846d6451834a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f447d75cf-cfxdd" podUID="94843d6e-8a43-40d5-85da-6bbccab96ca4"
Jul 7 06:16:27.601139 kubelet[2694]: E0707 06:16:27.601073 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb180faf81da10786b5c9b1e53f82466957607d419f7febfb4c9f9e950d1735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6c4fc68-ftmpb"
Jul 7 06:16:27.601139 kubelet[2694]: E0707 06:16:27.601097 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb180faf81da10786b5c9b1e53f82466957607d419f7febfb4c9f9e950d1735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6c4fc68-ftmpb"
Jul 7 06:16:27.601183 kubelet[2694]: E0707 06:16:27.601139 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c6c4fc68-ftmpb_calico-apiserver(28530f90-2b99-4c44-b06c-650b4e581a76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c6c4fc68-ftmpb_calico-apiserver(28530f90-2b99-4c44-b06c-650b4e581a76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfb180faf81da10786b5c9b1e53f82466957607d419f7febfb4c9f9e950d1735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c6c4fc68-ftmpb" podUID="28530f90-2b99-4c44-b06c-650b4e581a76"
Jul 7 06:16:27.644756 containerd[1566]: time="2025-07-07T06:16:27.644708109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88nxr,Uid:c10f9320-730f-40aa-b8e2-f257850dcd64,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d3b47f0e696f58161e2736bf2627c05285bf3117b7ca530f1533ea7625d8e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.644966 kubelet[2694]: E0707 06:16:27.644922 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d3b47f0e696f58161e2736bf2627c05285bf3117b7ca530f1533ea7625d8e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.645017 kubelet[2694]: E0707 06:16:27.644986 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d3b47f0e696f58161e2736bf2627c05285bf3117b7ca530f1533ea7625d8e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-88nxr"
Jul 7 06:16:27.645017 kubelet[2694]: E0707 06:16:27.645007 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d3b47f0e696f58161e2736bf2627c05285bf3117b7ca530f1533ea7625d8e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-88nxr"
Jul 7 06:16:27.645091 kubelet[2694]: E0707 06:16:27.645059 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-88nxr_kube-system(c10f9320-730f-40aa-b8e2-f257850dcd64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-88nxr_kube-system(c10f9320-730f-40aa-b8e2-f257850dcd64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5d3b47f0e696f58161e2736bf2627c05285bf3117b7ca530f1533ea7625d8e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-88nxr" podUID="c10f9320-730f-40aa-b8e2-f257850dcd64"
Jul 7 06:16:27.785523 containerd[1566]: time="2025-07-07T06:16:27.785394293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zlvw8,Uid:354724b8-ee59-4fbc-9a2c-ebe256163c13,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1f929a24a7cabfd6b906498a3f87b09c9c5c1c60160aa7bcc0ad80224ae8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.785672 kubelet[2694]: E0707 06:16:27.785632 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1f929a24a7cabfd6b906498a3f87b09c9c5c1c60160aa7bcc0ad80224ae8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.786047 kubelet[2694]: E0707 06:16:27.785846 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1f929a24a7cabfd6b906498a3f87b09c9c5c1c60160aa7bcc0ad80224ae8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zlvw8"
Jul 7 06:16:27.786047 kubelet[2694]: E0707 06:16:27.785928 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1f929a24a7cabfd6b906498a3f87b09c9c5c1c60160aa7bcc0ad80224ae8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zlvw8"
Jul 7 06:16:27.786047 kubelet[2694]: E0707 06:16:27.786010 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zlvw8_kube-system(354724b8-ee59-4fbc-9a2c-ebe256163c13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zlvw8_kube-system(354724b8-ee59-4fbc-9a2c-ebe256163c13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af1f929a24a7cabfd6b906498a3f87b09c9c5c1c60160aa7bcc0ad80224ae8e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zlvw8" podUID="354724b8-ee59-4fbc-9a2c-ebe256163c13"
Jul 7 06:16:27.788172 containerd[1566]: time="2025-07-07T06:16:27.788032503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b775f5d-l7pg8,Uid:931790e6-7d8d-492a-87d6-fe9dd38cd815,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e6da951e1a7daee63c60dc0d0f613d24d883c052f724be7dee78973b7f204e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:16:27.788760 kubelet[2694]: E0707 06:16:27.788298 2694 log.go:32] "RunPodSandbox
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e6da951e1a7daee63c60dc0d0f613d24d883c052f724be7dee78973b7f204e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:27.789031 kubelet[2694]: E0707 06:16:27.788774 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e6da951e1a7daee63c60dc0d0f613d24d883c052f724be7dee78973b7f204e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558b775f5d-l7pg8" Jul 7 06:16:27.789031 kubelet[2694]: E0707 06:16:27.788847 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e6da951e1a7daee63c60dc0d0f613d24d883c052f724be7dee78973b7f204e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558b775f5d-l7pg8" Jul 7 06:16:27.789031 kubelet[2694]: E0707 06:16:27.788934 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-558b775f5d-l7pg8_calico-system(931790e6-7d8d-492a-87d6-fe9dd38cd815)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-558b775f5d-l7pg8_calico-system(931790e6-7d8d-492a-87d6-fe9dd38cd815)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21e6da951e1a7daee63c60dc0d0f613d24d883c052f724be7dee78973b7f204e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-558b775f5d-l7pg8" podUID="931790e6-7d8d-492a-87d6-fe9dd38cd815" Jul 7 06:16:27.789384 containerd[1566]: time="2025-07-07T06:16:27.789350757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-58qzk,Uid:7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b35db01777eec326c706aab65cf6f1b067ced801b604c6a80a6a0bb7fe5680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:27.789550 kubelet[2694]: E0707 06:16:27.789513 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b35db01777eec326c706aab65cf6f1b067ced801b604c6a80a6a0bb7fe5680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:27.789586 kubelet[2694]: E0707 06:16:27.789556 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b35db01777eec326c706aab65cf6f1b067ced801b604c6a80a6a0bb7fe5680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-58qzk" Jul 7 06:16:27.789586 kubelet[2694]: E0707 06:16:27.789571 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b35db01777eec326c706aab65cf6f1b067ced801b604c6a80a6a0bb7fe5680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-58qzk" Jul 7 06:16:27.789653 kubelet[2694]: E0707 06:16:27.789596 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-58qzk_calico-system(7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-58qzk_calico-system(7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b35db01777eec326c706aab65cf6f1b067ced801b604c6a80a6a0bb7fe5680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-58qzk" podUID="7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0" Jul 7 06:16:27.790610 containerd[1566]: time="2025-07-07T06:16:27.790541761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-kmbdk,Uid:e8085134-b468-4922-a478-2eea18059602,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"449d670263bb776164243554fb156e3c8394c2b2d345ff8e672f2638757713e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:27.790893 kubelet[2694]: E0707 06:16:27.790769 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449d670263bb776164243554fb156e3c8394c2b2d345ff8e672f2638757713e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 
06:16:27.790893 kubelet[2694]: E0707 06:16:27.790813 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449d670263bb776164243554fb156e3c8394c2b2d345ff8e672f2638757713e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6c4fc68-kmbdk" Jul 7 06:16:27.790893 kubelet[2694]: E0707 06:16:27.790831 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"449d670263bb776164243554fb156e3c8394c2b2d345ff8e672f2638757713e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c6c4fc68-kmbdk" Jul 7 06:16:27.791283 kubelet[2694]: E0707 06:16:27.790868 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c6c4fc68-kmbdk_calico-apiserver(e8085134-b468-4922-a478-2eea18059602)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c6c4fc68-kmbdk_calico-apiserver(e8085134-b468-4922-a478-2eea18059602)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"449d670263bb776164243554fb156e3c8394c2b2d345ff8e672f2638757713e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c6c4fc68-kmbdk" podUID="e8085134-b468-4922-a478-2eea18059602" Jul 7 06:16:29.113366 systemd[1]: Created slice kubepods-besteffort-pod2be6cb3c_5acb_4657_8b32_4bff02f0153a.slice - libcontainer container 
kubepods-besteffort-pod2be6cb3c_5acb_4657_8b32_4bff02f0153a.slice. Jul 7 06:16:29.117659 containerd[1566]: time="2025-07-07T06:16:29.116894214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvh2j,Uid:2be6cb3c-5acb-4657-8b32-4bff02f0153a,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:29.312824 containerd[1566]: time="2025-07-07T06:16:29.312774952Z" level=error msg="Failed to destroy network for sandbox \"823179ed8098151aa74cb7dea4e1a01f3345415831c9ab2ef440c2e4051eb503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:29.315247 systemd[1]: run-netns-cni\x2d3de54077\x2d4e6f\x2da195\x2dce7d\x2dff638092bd43.mount: Deactivated successfully. Jul 7 06:16:29.317465 containerd[1566]: time="2025-07-07T06:16:29.317415221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvh2j,Uid:2be6cb3c-5acb-4657-8b32-4bff02f0153a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"823179ed8098151aa74cb7dea4e1a01f3345415831c9ab2ef440c2e4051eb503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:29.317666 kubelet[2694]: E0707 06:16:29.317628 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823179ed8098151aa74cb7dea4e1a01f3345415831c9ab2ef440c2e4051eb503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:16:29.317974 kubelet[2694]: E0707 06:16:29.317688 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"823179ed8098151aa74cb7dea4e1a01f3345415831c9ab2ef440c2e4051eb503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:29.317974 kubelet[2694]: E0707 06:16:29.317713 2694 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823179ed8098151aa74cb7dea4e1a01f3345415831c9ab2ef440c2e4051eb503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rvh2j" Jul 7 06:16:29.317974 kubelet[2694]: E0707 06:16:29.317763 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rvh2j_calico-system(2be6cb3c-5acb-4657-8b32-4bff02f0153a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rvh2j_calico-system(2be6cb3c-5acb-4657-8b32-4bff02f0153a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"823179ed8098151aa74cb7dea4e1a01f3345415831c9ab2ef440c2e4051eb503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rvh2j" podUID="2be6cb3c-5acb-4657-8b32-4bff02f0153a" Jul 7 06:16:32.333258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2879630308.mount: Deactivated successfully. 
Jul 7 06:16:33.654154 containerd[1566]: time="2025-07-07T06:16:33.654093185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:33.655114 containerd[1566]: time="2025-07-07T06:16:33.655084420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:16:33.656584 containerd[1566]: time="2025-07-07T06:16:33.656534238Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:33.665108 containerd[1566]: time="2025-07-07T06:16:33.665060711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:33.665858 containerd[1566]: time="2025-07-07T06:16:33.665821943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.47957885s" Jul 7 06:16:33.665909 containerd[1566]: time="2025-07-07T06:16:33.665854245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:16:33.674982 containerd[1566]: time="2025-07-07T06:16:33.674934359Z" level=info msg="CreateContainer within sandbox \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:16:33.697556 containerd[1566]: time="2025-07-07T06:16:33.697510489Z" level=info msg="Container 
8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:33.719736 containerd[1566]: time="2025-07-07T06:16:33.719689911Z" level=info msg="CreateContainer within sandbox \"8581d1a6cfa6fddd795a2cbaba939c85cd8eb29bcc97d29b379860626e691afb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\"" Jul 7 06:16:33.720154 containerd[1566]: time="2025-07-07T06:16:33.720131242Z" level=info msg="StartContainer for \"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\"" Jul 7 06:16:33.721641 containerd[1566]: time="2025-07-07T06:16:33.721615885Z" level=info msg="connecting to shim 8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1" address="unix:///run/containerd/s/31f5d75065f965846d99cd489cef4997641bedee32b354fa41a9c5adde736cb4" protocol=ttrpc version=3 Jul 7 06:16:33.747444 systemd[1]: Started cri-containerd-8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1.scope - libcontainer container 8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1. Jul 7 06:16:33.843020 containerd[1566]: time="2025-07-07T06:16:33.842977418Z" level=info msg="StartContainer for \"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\" returns successfully" Jul 7 06:16:33.865139 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:16:33.865837 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 06:16:34.086950 kubelet[2694]: I0707 06:16:34.086905 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-backend-key-pair\") pod \"931790e6-7d8d-492a-87d6-fe9dd38cd815\" (UID: \"931790e6-7d8d-492a-87d6-fe9dd38cd815\") " Jul 7 06:16:34.086950 kubelet[2694]: I0707 06:16:34.086950 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-ca-bundle\") pod \"931790e6-7d8d-492a-87d6-fe9dd38cd815\" (UID: \"931790e6-7d8d-492a-87d6-fe9dd38cd815\") " Jul 7 06:16:34.087475 kubelet[2694]: I0707 06:16:34.086989 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kkkz\" (UniqueName: \"kubernetes.io/projected/931790e6-7d8d-492a-87d6-fe9dd38cd815-kube-api-access-8kkkz\") pod \"931790e6-7d8d-492a-87d6-fe9dd38cd815\" (UID: \"931790e6-7d8d-492a-87d6-fe9dd38cd815\") " Jul 7 06:16:34.087548 kubelet[2694]: I0707 06:16:34.087495 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "931790e6-7d8d-492a-87d6-fe9dd38cd815" (UID: "931790e6-7d8d-492a-87d6-fe9dd38cd815"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:16:34.090545 kubelet[2694]: I0707 06:16:34.090520 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931790e6-7d8d-492a-87d6-fe9dd38cd815-kube-api-access-8kkkz" (OuterVolumeSpecName: "kube-api-access-8kkkz") pod "931790e6-7d8d-492a-87d6-fe9dd38cd815" (UID: "931790e6-7d8d-492a-87d6-fe9dd38cd815"). InnerVolumeSpecName "kube-api-access-8kkkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:16:34.090727 kubelet[2694]: I0707 06:16:34.090704 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "931790e6-7d8d-492a-87d6-fe9dd38cd815" (UID: "931790e6-7d8d-492a-87d6-fe9dd38cd815"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:16:34.116075 systemd[1]: Removed slice kubepods-besteffort-pod931790e6_7d8d_492a_87d6_fe9dd38cd815.slice - libcontainer container kubepods-besteffort-pod931790e6_7d8d_492a_87d6_fe9dd38cd815.slice. Jul 7 06:16:34.187804 kubelet[2694]: I0707 06:16:34.187749 2694 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:16:34.187804 kubelet[2694]: I0707 06:16:34.187788 2694 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/931790e6-7d8d-492a-87d6-fe9dd38cd815-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:16:34.187804 kubelet[2694]: I0707 06:16:34.187797 2694 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kkkz\" (UniqueName: \"kubernetes.io/projected/931790e6-7d8d-492a-87d6-fe9dd38cd815-kube-api-access-8kkkz\") on node \"localhost\" DevicePath \"\"" Jul 7 06:16:34.242241 kubelet[2694]: I0707 06:16:34.242156 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8kzf8" podStartSLOduration=1.3332121 podStartE2EDuration="17.242136533s" podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="2025-07-07 06:16:17.757623446 +0000 UTC m=+17.739593126" lastFinishedPulling="2025-07-07 06:16:33.666547869 +0000 UTC 
m=+33.648517559" observedRunningTime="2025-07-07 06:16:34.236561555 +0000 UTC m=+34.218531226" watchObservedRunningTime="2025-07-07 06:16:34.242136533 +0000 UTC m=+34.224106214" Jul 7 06:16:34.261528 systemd[1]: Created slice kubepods-besteffort-pod7b022bd9_6af2_4ce2_97bc_c4ae50fce9f1.slice - libcontainer container kubepods-besteffort-pod7b022bd9_6af2_4ce2_97bc_c4ae50fce9f1.slice. Jul 7 06:16:34.288476 kubelet[2694]: I0707 06:16:34.288055 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ckn\" (UniqueName: \"kubernetes.io/projected/7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1-kube-api-access-v6ckn\") pod \"whisker-79d94995db-9r8ht\" (UID: \"7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1\") " pod="calico-system/whisker-79d94995db-9r8ht" Jul 7 06:16:34.288476 kubelet[2694]: I0707 06:16:34.288109 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1-whisker-ca-bundle\") pod \"whisker-79d94995db-9r8ht\" (UID: \"7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1\") " pod="calico-system/whisker-79d94995db-9r8ht" Jul 7 06:16:34.288476 kubelet[2694]: I0707 06:16:34.288147 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1-whisker-backend-key-pair\") pod \"whisker-79d94995db-9r8ht\" (UID: \"7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1\") " pod="calico-system/whisker-79d94995db-9r8ht" Jul 7 06:16:34.566229 containerd[1566]: time="2025-07-07T06:16:34.566108898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d94995db-9r8ht,Uid:7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:34.674228 systemd[1]: 
var-lib-kubelet-pods-931790e6\x2d7d8d\x2d492a\x2d87d6\x2dfe9dd38cd815-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8kkkz.mount: Deactivated successfully. Jul 7 06:16:34.674349 systemd[1]: var-lib-kubelet-pods-931790e6\x2d7d8d\x2d492a\x2d87d6\x2dfe9dd38cd815-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:16:34.704287 systemd-networkd[1488]: cali50cae13a154: Link UP Jul 7 06:16:34.705156 systemd-networkd[1488]: cali50cae13a154: Gained carrier Jul 7 06:16:34.719605 containerd[1566]: 2025-07-07 06:16:34.589 [INFO][3785] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:16:34.719605 containerd[1566]: 2025-07-07 06:16:34.605 [INFO][3785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79d94995db--9r8ht-eth0 whisker-79d94995db- calico-system 7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1 897 0 2025-07-07 06:16:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79d94995db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79d94995db-9r8ht eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali50cae13a154 [] [] }} ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-" Jul 7 06:16:34.719605 containerd[1566]: 2025-07-07 06:16:34.606 [INFO][3785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.719605 containerd[1566]: 2025-07-07 06:16:34.663 [INFO][3801] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" HandleID="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Workload="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.665 [INFO][3801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" HandleID="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Workload="localhost-k8s-whisker--79d94995db--9r8ht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79d94995db-9r8ht", "timestamp":"2025-07-07 06:16:34.663734687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.665 [INFO][3801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.665 [INFO][3801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.665 [INFO][3801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.672 [INFO][3801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" host="localhost" Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.678 [INFO][3801] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.682 [INFO][3801] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.683 [INFO][3801] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.685 [INFO][3801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:34.720014 containerd[1566]: 2025-07-07 06:16:34.685 [INFO][3801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" host="localhost" Jul 7 06:16:34.720230 containerd[1566]: 2025-07-07 06:16:34.686 [INFO][3801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38 Jul 7 06:16:34.720230 containerd[1566]: 2025-07-07 06:16:34.689 [INFO][3801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" host="localhost" Jul 7 06:16:34.720230 containerd[1566]: 2025-07-07 06:16:34.693 [INFO][3801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" host="localhost" Jul 7 06:16:34.720230 containerd[1566]: 2025-07-07 06:16:34.693 [INFO][3801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" host="localhost" Jul 7 06:16:34.720230 containerd[1566]: 2025-07-07 06:16:34.693 [INFO][3801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:34.720230 containerd[1566]: 2025-07-07 06:16:34.693 [INFO][3801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" HandleID="k8s-pod-network.78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Workload="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.720374 containerd[1566]: 2025-07-07 06:16:34.697 [INFO][3785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79d94995db--9r8ht-eth0", GenerateName:"whisker-79d94995db-", Namespace:"calico-system", SelfLink:"", UID:"7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d94995db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79d94995db-9r8ht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali50cae13a154", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:34.720374 containerd[1566]: 2025-07-07 06:16:34.697 [INFO][3785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.720446 containerd[1566]: 2025-07-07 06:16:34.697 [INFO][3785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50cae13a154 ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.720446 containerd[1566]: 2025-07-07 06:16:34.707 [INFO][3785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.720483 containerd[1566]: 2025-07-07 06:16:34.707 [INFO][3785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" 
WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79d94995db--9r8ht-eth0", GenerateName:"whisker-79d94995db-", Namespace:"calico-system", SelfLink:"", UID:"7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d94995db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38", Pod:"whisker-79d94995db-9r8ht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali50cae13a154", MAC:"52:a2:39:5a:ad:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:34.720537 containerd[1566]: 2025-07-07 06:16:34.716 [INFO][3785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" Namespace="calico-system" Pod="whisker-79d94995db-9r8ht" WorkloadEndpoint="localhost-k8s-whisker--79d94995db--9r8ht-eth0" Jul 7 06:16:34.907738 containerd[1566]: time="2025-07-07T06:16:34.907687895Z" level=info msg="connecting to shim 
78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38" address="unix:///run/containerd/s/7e2e308c90fbb7c0c4a3680e2e4117188133d99827ba463e1022f574c251334b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:34.936476 systemd[1]: Started cri-containerd-78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38.scope - libcontainer container 78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38. Jul 7 06:16:34.949988 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:35.054386 containerd[1566]: time="2025-07-07T06:16:35.054344055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d94995db-9r8ht,Uid:7b022bd9-6af2-4ce2-97bc-c4ae50fce9f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38\"" Jul 7 06:16:35.055888 containerd[1566]: time="2025-07-07T06:16:35.055852433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:16:35.396934 containerd[1566]: time="2025-07-07T06:16:35.396704331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\" id:\"12a77af06aed7339d91626d59da91d2174f37d25e653d0ce35a7930bd1a4c855\" pid:3967 exit_status:1 exited_at:{seconds:1751868995 nanos:395583844}" Jul 7 06:16:35.661490 systemd-networkd[1488]: vxlan.calico: Link UP Jul 7 06:16:35.661505 systemd-networkd[1488]: vxlan.calico: Gained carrier Jul 7 06:16:36.107038 kubelet[2694]: I0707 06:16:36.106984 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931790e6-7d8d-492a-87d6-fe9dd38cd815" path="/var/lib/kubelet/pods/931790e6-7d8d-492a-87d6-fe9dd38cd815/volumes" Jul 7 06:16:36.295896 containerd[1566]: time="2025-07-07T06:16:36.295825857Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\" id:\"b2dc1d1cdb779d31717f6ba5ee1f5878cd3656955f7debe9be285470a8f9f404\" pid:4098 exit_status:1 exited_at:{seconds:1751868996 nanos:295486288}" Jul 7 06:16:36.426377 containerd[1566]: time="2025-07-07T06:16:36.426228696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:36.427036 containerd[1566]: time="2025-07-07T06:16:36.426992332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:16:36.428095 containerd[1566]: time="2025-07-07T06:16:36.428064368Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:36.430019 containerd[1566]: time="2025-07-07T06:16:36.429988056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:36.430524 containerd[1566]: time="2025-07-07T06:16:36.430480011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.374597782s" Jul 7 06:16:36.430561 containerd[1566]: time="2025-07-07T06:16:36.430523293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:16:36.432513 containerd[1566]: time="2025-07-07T06:16:36.432482707Z" level=info msg="CreateContainer within sandbox 
\"78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:16:36.442304 containerd[1566]: time="2025-07-07T06:16:36.441804919Z" level=info msg="Container 113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:36.448871 containerd[1566]: time="2025-07-07T06:16:36.448830361Z" level=info msg="CreateContainer within sandbox \"78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537\"" Jul 7 06:16:36.449251 containerd[1566]: time="2025-07-07T06:16:36.449217770Z" level=info msg="StartContainer for \"113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537\"" Jul 7 06:16:36.450167 containerd[1566]: time="2025-07-07T06:16:36.450120037Z" level=info msg="connecting to shim 113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537" address="unix:///run/containerd/s/7e2e308c90fbb7c0c4a3680e2e4117188133d99827ba463e1022f574c251334b" protocol=ttrpc version=3 Jul 7 06:16:36.471444 systemd[1]: Started cri-containerd-113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537.scope - libcontainer container 113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537. 
Jul 7 06:16:36.517970 containerd[1566]: time="2025-07-07T06:16:36.517914412Z" level=info msg="StartContainer for \"113265de79ecd8570b4f1e57086d9812c4db6fd9e1ec69f965dac3142e13e537\" returns successfully" Jul 7 06:16:36.519298 containerd[1566]: time="2025-07-07T06:16:36.519251497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:16:36.597459 systemd-networkd[1488]: cali50cae13a154: Gained IPv6LL Jul 7 06:16:36.917493 systemd-networkd[1488]: vxlan.calico: Gained IPv6LL Jul 7 06:16:38.340304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578930969.mount: Deactivated successfully. Jul 7 06:16:38.358887 containerd[1566]: time="2025-07-07T06:16:38.358837059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:38.359933 containerd[1566]: time="2025-07-07T06:16:38.359887805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:16:38.361168 containerd[1566]: time="2025-07-07T06:16:38.361134068Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:38.363209 containerd[1566]: time="2025-07-07T06:16:38.363175576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:38.363804 containerd[1566]: time="2025-07-07T06:16:38.363773560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.844475696s" Jul 7 06:16:38.363845 containerd[1566]: time="2025-07-07T06:16:38.363802544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:16:38.366216 containerd[1566]: time="2025-07-07T06:16:38.366176738Z" level=info msg="CreateContainer within sandbox \"78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:16:38.373745 containerd[1566]: time="2025-07-07T06:16:38.373711314Z" level=info msg="Container ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:38.382356 containerd[1566]: time="2025-07-07T06:16:38.382298649Z" level=info msg="CreateContainer within sandbox \"78282b52aee0fa1dd7e10236f99670088d44333126d3f473a8517d3183daaa38\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976\"" Jul 7 06:16:38.382730 containerd[1566]: time="2025-07-07T06:16:38.382698971Z" level=info msg="StartContainer for \"ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976\"" Jul 7 06:16:38.383735 containerd[1566]: time="2025-07-07T06:16:38.383701586Z" level=info msg="connecting to shim ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976" address="unix:///run/containerd/s/7e2e308c90fbb7c0c4a3680e2e4117188133d99827ba463e1022f574c251334b" protocol=ttrpc version=3 Jul 7 06:16:38.412450 systemd[1]: Started cri-containerd-ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976.scope - libcontainer container ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976. 
Jul 7 06:16:38.477793 containerd[1566]: time="2025-07-07T06:16:38.477750927Z" level=info msg="StartContainer for \"ce1394aa75d1630fe15fdc77c9c54dd8531754643a70cfb679f55d93a7d42976\" returns successfully" Jul 7 06:16:39.104001 containerd[1566]: time="2025-07-07T06:16:39.103940463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-kmbdk,Uid:e8085134-b468-4922-a478-2eea18059602,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:16:39.111993 containerd[1566]: time="2025-07-07T06:16:39.111951062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f447d75cf-cfxdd,Uid:94843d6e-8a43-40d5-85da-6bbccab96ca4,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:39.558545 kubelet[2694]: I0707 06:16:39.558167 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79d94995db-9r8ht" podStartSLOduration=2.249318797 podStartE2EDuration="5.558146631s" podCreationTimestamp="2025-07-07 06:16:34 +0000 UTC" firstStartedPulling="2025-07-07 06:16:35.05560176 +0000 UTC m=+35.037571430" lastFinishedPulling="2025-07-07 06:16:38.364429584 +0000 UTC m=+38.346399264" observedRunningTime="2025-07-07 06:16:39.557580477 +0000 UTC m=+39.539550157" watchObservedRunningTime="2025-07-07 06:16:39.558146631 +0000 UTC m=+39.540116301" Jul 7 06:16:39.767147 systemd-networkd[1488]: cali1d07b1b65d7: Link UP Jul 7 06:16:39.767364 systemd-networkd[1488]: cali1d07b1b65d7: Gained carrier Jul 7 06:16:39.779705 containerd[1566]: 2025-07-07 06:16:39.584 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0 calico-apiserver-7c6c4fc68- calico-apiserver e8085134-b468-4922-a478-2eea18059602 830 0 2025-07-07 06:16:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c6c4fc68 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c6c4fc68-kmbdk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d07b1b65d7 [] [] }} ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-" Jul 7 06:16:39.779705 containerd[1566]: 2025-07-07 06:16:39.584 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.779705 containerd[1566]: 2025-07-07 06:16:39.729 [INFO][4225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" HandleID="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Workload="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.729 [INFO][4225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" HandleID="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Workload="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b8480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c6c4fc68-kmbdk", "timestamp":"2025-07-07 06:16:39.729774202 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.730 [INFO][4225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.730 [INFO][4225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.730 [INFO][4225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.736 [INFO][4225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" host="localhost" Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.741 [INFO][4225] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.745 [INFO][4225] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.746 [INFO][4225] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.748 [INFO][4225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:39.780290 containerd[1566]: 2025-07-07 06:16:39.748 [INFO][4225] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" host="localhost" Jul 7 06:16:39.780809 containerd[1566]: 2025-07-07 06:16:39.752 [INFO][4225] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f Jul 7 06:16:39.780809 containerd[1566]: 2025-07-07 06:16:39.755 [INFO][4225] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" host="localhost" Jul 7 06:16:39.780809 containerd[1566]: 2025-07-07 06:16:39.762 [INFO][4225] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" host="localhost" Jul 7 06:16:39.780809 containerd[1566]: 2025-07-07 06:16:39.762 [INFO][4225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" host="localhost" Jul 7 06:16:39.780809 containerd[1566]: 2025-07-07 06:16:39.762 [INFO][4225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:39.780809 containerd[1566]: 2025-07-07 06:16:39.762 [INFO][4225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" HandleID="k8s-pod-network.a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Workload="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.780957 containerd[1566]: 2025-07-07 06:16:39.764 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0", GenerateName:"calico-apiserver-7c6c4fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8085134-b468-4922-a478-2eea18059602", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6c4fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c6c4fc68-kmbdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d07b1b65d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:39.781015 containerd[1566]: 2025-07-07 06:16:39.764 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.781015 containerd[1566]: 2025-07-07 06:16:39.764 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d07b1b65d7 ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.781015 containerd[1566]: 2025-07-07 06:16:39.768 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" 
Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.781090 containerd[1566]: 2025-07-07 06:16:39.769 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0", GenerateName:"calico-apiserver-7c6c4fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8085134-b468-4922-a478-2eea18059602", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6c4fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f", Pod:"calico-apiserver-7c6c4fc68-kmbdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d07b1b65d7", MAC:"92:82:c2:5d:22:5d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:39.781142 containerd[1566]: 2025-07-07 06:16:39.776 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-kmbdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--kmbdk-eth0" Jul 7 06:16:39.807570 containerd[1566]: time="2025-07-07T06:16:39.807525089Z" level=info msg="connecting to shim a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f" address="unix:///run/containerd/s/afc15496e1b0c95852ffe5165bafca4212c5a09e51353d7e97ebb38cd9cb5038" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:39.830182 systemd[1]: Started cri-containerd-a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f.scope - libcontainer container a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f. 
Jul 7 06:16:39.844617 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:39.880899 systemd-networkd[1488]: cali8b984b71c0c: Link UP Jul 7 06:16:39.881126 systemd-networkd[1488]: cali8b984b71c0c: Gained carrier Jul 7 06:16:39.905042 containerd[1566]: 2025-07-07 06:16:39.730 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0 calico-kube-controllers-6f447d75cf- calico-system 94843d6e-8a43-40d5-85da-6bbccab96ca4 833 0 2025-07-07 06:16:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f447d75cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f447d75cf-cfxdd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8b984b71c0c [] [] }} ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-" Jul 7 06:16:39.905042 containerd[1566]: 2025-07-07 06:16:39.730 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.905042 containerd[1566]: 2025-07-07 06:16:39.760 [INFO][4236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" 
HandleID="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Workload="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.761 [INFO][4236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" HandleID="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Workload="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f447d75cf-cfxdd", "timestamp":"2025-07-07 06:16:39.760930029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.761 [INFO][4236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.762 [INFO][4236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.762 [INFO][4236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.835 [INFO][4236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" host="localhost" Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.841 [INFO][4236] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.846 [INFO][4236] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.852 [INFO][4236] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.856 [INFO][4236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:39.905271 containerd[1566]: 2025-07-07 06:16:39.856 [INFO][4236] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" host="localhost" Jul 7 06:16:39.905500 containerd[1566]: 2025-07-07 06:16:39.859 [INFO][4236] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c Jul 7 06:16:39.905500 containerd[1566]: 2025-07-07 06:16:39.865 [INFO][4236] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" host="localhost" Jul 7 06:16:39.905500 containerd[1566]: 2025-07-07 06:16:39.873 [INFO][4236] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" host="localhost" Jul 7 06:16:39.905500 containerd[1566]: 2025-07-07 06:16:39.873 [INFO][4236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" host="localhost" Jul 7 06:16:39.905500 containerd[1566]: 2025-07-07 06:16:39.873 [INFO][4236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:39.905500 containerd[1566]: 2025-07-07 06:16:39.873 [INFO][4236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" HandleID="k8s-pod-network.d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Workload="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.905619 containerd[1566]: 2025-07-07 06:16:39.878 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0", GenerateName:"calico-kube-controllers-6f447d75cf-", Namespace:"calico-system", SelfLink:"", UID:"94843d6e-8a43-40d5-85da-6bbccab96ca4", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f447d75cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f447d75cf-cfxdd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b984b71c0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:39.905670 containerd[1566]: 2025-07-07 06:16:39.878 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.905670 containerd[1566]: 2025-07-07 06:16:39.878 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b984b71c0c ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.905670 containerd[1566]: 2025-07-07 06:16:39.880 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.906419 containerd[1566]: 2025-07-07 
06:16:39.884 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0", GenerateName:"calico-kube-controllers-6f447d75cf-", Namespace:"calico-system", SelfLink:"", UID:"94843d6e-8a43-40d5-85da-6bbccab96ca4", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f447d75cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c", Pod:"calico-kube-controllers-6f447d75cf-cfxdd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b984b71c0c", MAC:"56:f4:91:1e:23:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:39.906482 containerd[1566]: 2025-07-07 
06:16:39.901 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" Namespace="calico-system" Pod="calico-kube-controllers-6f447d75cf-cfxdd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f447d75cf--cfxdd-eth0" Jul 7 06:16:39.925170 containerd[1566]: time="2025-07-07T06:16:39.925111521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-kmbdk,Uid:e8085134-b468-4922-a478-2eea18059602,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f\"" Jul 7 06:16:39.927509 containerd[1566]: time="2025-07-07T06:16:39.927020821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:16:39.949356 containerd[1566]: time="2025-07-07T06:16:39.949230551Z" level=info msg="connecting to shim d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c" address="unix:///run/containerd/s/eca3daa3fd1d17efc0038619783c7d783f23c56e680ca55f979aaf0eed8e3883" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:39.971439 systemd[1]: Started cri-containerd-d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c.scope - libcontainer container d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c. 
Jul 7 06:16:39.984017 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:40.012822 containerd[1566]: time="2025-07-07T06:16:40.012784749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f447d75cf-cfxdd,Uid:94843d6e-8a43-40d5-85da-6bbccab96ca4,Namespace:calico-system,Attempt:0,} returns sandbox id \"d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c\"" Jul 7 06:16:40.104352 containerd[1566]: time="2025-07-07T06:16:40.104272949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-ftmpb,Uid:28530f90-2b99-4c44-b06c-650b4e581a76,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:16:40.104526 containerd[1566]: time="2025-07-07T06:16:40.104341959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zlvw8,Uid:354724b8-ee59-4fbc-9a2c-ebe256163c13,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:40.204213 systemd-networkd[1488]: cali30dc3f09ef3: Link UP Jul 7 06:16:40.204746 systemd-networkd[1488]: cali30dc3f09ef3: Gained carrier Jul 7 06:16:40.506153 containerd[1566]: 2025-07-07 06:16:40.139 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0 calico-apiserver-7c6c4fc68- calico-apiserver 28530f90-2b99-4c44-b06c-650b4e581a76 832 0 2025-07-07 06:16:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c6c4fc68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c6c4fc68-ftmpb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30dc3f09ef3 [] [] }} ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" 
Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-" Jul 7 06:16:40.506153 containerd[1566]: 2025-07-07 06:16:40.139 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.506153 containerd[1566]: 2025-07-07 06:16:40.169 [INFO][4393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" HandleID="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Workload="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.169 [INFO][4393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" HandleID="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Workload="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c6c4fc68-ftmpb", "timestamp":"2025-07-07 06:16:40.16939386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.169 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.169 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.169 [INFO][4393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.177 [INFO][4393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" host="localhost" Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.182 [INFO][4393] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.185 [INFO][4393] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.186 [INFO][4393] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.188 [INFO][4393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:40.507213 containerd[1566]: 2025-07-07 06:16:40.188 [INFO][4393] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" host="localhost" Jul 7 06:16:40.507488 containerd[1566]: 2025-07-07 06:16:40.189 [INFO][4393] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b Jul 7 06:16:40.507488 containerd[1566]: 2025-07-07 06:16:40.192 [INFO][4393] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" host="localhost" Jul 7 06:16:40.507488 containerd[1566]: 2025-07-07 06:16:40.197 [INFO][4393] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" host="localhost" Jul 7 06:16:40.507488 containerd[1566]: 2025-07-07 06:16:40.197 [INFO][4393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" host="localhost" Jul 7 06:16:40.507488 containerd[1566]: 2025-07-07 06:16:40.197 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:40.507488 containerd[1566]: 2025-07-07 06:16:40.197 [INFO][4393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" HandleID="k8s-pod-network.39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Workload="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.507629 containerd[1566]: 2025-07-07 06:16:40.201 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0", GenerateName:"calico-apiserver-7c6c4fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"28530f90-2b99-4c44-b06c-650b4e581a76", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6c4fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c6c4fc68-ftmpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30dc3f09ef3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:40.507693 containerd[1566]: 2025-07-07 06:16:40.201 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.507693 containerd[1566]: 2025-07-07 06:16:40.201 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30dc3f09ef3 ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.507693 containerd[1566]: 2025-07-07 06:16:40.205 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.507766 containerd[1566]: 2025-07-07 06:16:40.206 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0", GenerateName:"calico-apiserver-7c6c4fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"28530f90-2b99-4c44-b06c-650b4e581a76", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c6c4fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b", Pod:"calico-apiserver-7c6c4fc68-ftmpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30dc3f09ef3", MAC:"36:03:5a:39:94:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:40.507827 containerd[1566]: 2025-07-07 06:16:40.502 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" Namespace="calico-apiserver" Pod="calico-apiserver-7c6c4fc68-ftmpb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c6c4fc68--ftmpb-eth0" Jul 7 06:16:40.547271 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:54192.service - OpenSSH per-connection server daemon (10.0.0.1:54192). Jul 7 06:16:40.586932 systemd-networkd[1488]: cali3679ba5f3a6: Link UP Jul 7 06:16:40.589496 systemd-networkd[1488]: cali3679ba5f3a6: Gained carrier Jul 7 06:16:40.607785 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 54192 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:16:40.630746 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:40.664867 systemd-logind[1538]: New session 8 of user core. Jul 7 06:16:40.674428 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:16:40.813113 containerd[1566]: 2025-07-07 06:16:40.141 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0 coredns-7c65d6cfc9- kube-system 354724b8-ee59-4fbc-9a2c-ebe256163c13 821 0 2025-07-07 06:16:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zlvw8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3679ba5f3a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-" Jul 7 06:16:40.813113 containerd[1566]: 2025-07-07 06:16:40.141 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.813113 containerd[1566]: 2025-07-07 06:16:40.171 [INFO][4395] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" HandleID="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Workload="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.173 [INFO][4395] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" HandleID="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Workload="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7070), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zlvw8", "timestamp":"2025-07-07 06:16:40.17114408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.173 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.197 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.197 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.500 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" host="localhost" Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.525 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.529 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.531 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.532 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:40.813595 containerd[1566]: 2025-07-07 06:16:40.532 [INFO][4395] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" host="localhost" Jul 7 06:16:40.813886 containerd[1566]: 2025-07-07 06:16:40.534 [INFO][4395] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527 Jul 7 06:16:40.813886 containerd[1566]: 2025-07-07 06:16:40.574 [INFO][4395] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" host="localhost" Jul 7 06:16:40.813886 containerd[1566]: 2025-07-07 06:16:40.580 [INFO][4395] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" host="localhost" Jul 7 06:16:40.813886 containerd[1566]: 2025-07-07 06:16:40.580 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" host="localhost" Jul 7 06:16:40.813886 containerd[1566]: 2025-07-07 06:16:40.580 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:40.813886 containerd[1566]: 2025-07-07 06:16:40.580 [INFO][4395] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" HandleID="k8s-pod-network.1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Workload="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.814048 containerd[1566]: 2025-07-07 06:16:40.584 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"354724b8-ee59-4fbc-9a2c-ebe256163c13", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zlvw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3679ba5f3a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:40.814148 containerd[1566]: 2025-07-07 06:16:40.584 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.814148 containerd[1566]: 2025-07-07 06:16:40.584 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3679ba5f3a6 ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.814148 containerd[1566]: 2025-07-07 06:16:40.589 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.814238 containerd[1566]: 2025-07-07 06:16:40.589 [INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"354724b8-ee59-4fbc-9a2c-ebe256163c13", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527", Pod:"coredns-7c65d6cfc9-zlvw8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3679ba5f3a6", MAC:"8a:bf:be:55:59:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:40.814238 containerd[1566]: 2025-07-07 06:16:40.808 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zlvw8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zlvw8-eth0" Jul 7 06:16:40.823474 sshd[4424]: Connection closed by 10.0.0.1 port 54192 Jul 7 06:16:40.824199 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:40.827467 containerd[1566]: time="2025-07-07T06:16:40.827423507Z" level=info msg="connecting to shim 39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b" address="unix:///run/containerd/s/004c4c4c6f133e1147f8645330ab42a27e86cd57aad6883950124ab2b1d32922" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:40.828712 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:54192.service: Deactivated successfully. Jul 7 06:16:40.831952 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:16:40.842853 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:16:40.844123 systemd-logind[1538]: Removed session 8. Jul 7 06:16:40.856447 systemd[1]: Started cri-containerd-39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b.scope - libcontainer container 39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b. 
Jul 7 06:16:40.869451 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:40.908841 containerd[1566]: time="2025-07-07T06:16:40.908790393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c6c4fc68-ftmpb,Uid:28530f90-2b99-4c44-b06c-650b4e581a76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b\"" Jul 7 06:16:40.913704 containerd[1566]: time="2025-07-07T06:16:40.913658503Z" level=info msg="connecting to shim 1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527" address="unix:///run/containerd/s/44a997fb41630c8792a5accb412d9a8fe78f885bffe852a16ba454a477473f02" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:40.939444 systemd[1]: Started cri-containerd-1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527.scope - libcontainer container 1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527. 
Jul 7 06:16:40.949484 systemd-networkd[1488]: cali8b984b71c0c: Gained IPv6LL Jul 7 06:16:40.952402 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:40.984369 containerd[1566]: time="2025-07-07T06:16:40.984335998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zlvw8,Uid:354724b8-ee59-4fbc-9a2c-ebe256163c13,Namespace:kube-system,Attempt:0,} returns sandbox id \"1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527\"" Jul 7 06:16:40.986615 containerd[1566]: time="2025-07-07T06:16:40.986578112Z" level=info msg="CreateContainer within sandbox \"1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:16:40.995991 containerd[1566]: time="2025-07-07T06:16:40.995949857Z" level=info msg="Container 6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:41.001261 containerd[1566]: time="2025-07-07T06:16:41.001224671Z" level=info msg="CreateContainer within sandbox \"1737b4fd145fb41bc4b0a779cfac586f7d67d91521a4d00e3b7c8b515bfd3527\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27\"" Jul 7 06:16:41.001884 containerd[1566]: time="2025-07-07T06:16:41.001626397Z" level=info msg="StartContainer for \"6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27\"" Jul 7 06:16:41.002367 containerd[1566]: time="2025-07-07T06:16:41.002302397Z" level=info msg="connecting to shim 6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27" address="unix:///run/containerd/s/44a997fb41630c8792a5accb412d9a8fe78f885bffe852a16ba454a477473f02" protocol=ttrpc version=3 Jul 7 06:16:41.030466 systemd[1]: Started cri-containerd-6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27.scope - libcontainer container 
6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27. Jul 7 06:16:41.062296 containerd[1566]: time="2025-07-07T06:16:41.062258246Z" level=info msg="StartContainer for \"6a530de867795848dc6789f033c3b041ac528907945d3b5a08740d090e64bd27\" returns successfully" Jul 7 06:16:41.240254 kubelet[2694]: I0707 06:16:41.240169 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zlvw8" podStartSLOduration=36.240150662 podStartE2EDuration="36.240150662s" podCreationTimestamp="2025-07-07 06:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:41.239582283 +0000 UTC m=+41.221551963" watchObservedRunningTime="2025-07-07 06:16:41.240150662 +0000 UTC m=+41.222120342" Jul 7 06:16:41.590523 systemd-networkd[1488]: cali30dc3f09ef3: Gained IPv6LL Jul 7 06:16:41.717487 systemd-networkd[1488]: cali1d07b1b65d7: Gained IPv6LL Jul 7 06:16:41.909831 systemd-networkd[1488]: cali3679ba5f3a6: Gained IPv6LL Jul 7 06:16:42.104741 containerd[1566]: time="2025-07-07T06:16:42.104697604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-58qzk,Uid:7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:42.300500 containerd[1566]: time="2025-07-07T06:16:42.300378571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:42.308423 containerd[1566]: time="2025-07-07T06:16:42.301142356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:16:42.308479 containerd[1566]: time="2025-07-07T06:16:42.302801835Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:42.308711 
containerd[1566]: time="2025-07-07T06:16:42.305700812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.378654503s" Jul 7 06:16:42.308711 containerd[1566]: time="2025-07-07T06:16:42.308623645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:16:42.308945 containerd[1566]: time="2025-07-07T06:16:42.308914762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:42.310479 containerd[1566]: time="2025-07-07T06:16:42.310455648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:16:42.311380 containerd[1566]: time="2025-07-07T06:16:42.311354367Z" level=info msg="CreateContainer within sandbox \"a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:16:42.373507 containerd[1566]: time="2025-07-07T06:16:42.373411783Z" level=info msg="Container 370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:42.383742 systemd-networkd[1488]: calid5c7ece2a65: Link UP Jul 7 06:16:42.385238 systemd-networkd[1488]: calid5c7ece2a65: Gained carrier Jul 7 06:16:42.386548 containerd[1566]: time="2025-07-07T06:16:42.386513874Z" level=info msg="CreateContainer within sandbox \"a183686d8a2656d3fc462ee8988c27986db75d092a95c3eec4a3d29780c0344f\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096\"" Jul 7 06:16:42.391853 containerd[1566]: time="2025-07-07T06:16:42.391806439Z" level=info msg="StartContainer for \"370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096\"" Jul 7 06:16:42.394466 containerd[1566]: time="2025-07-07T06:16:42.393474283Z" level=info msg="connecting to shim 370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096" address="unix:///run/containerd/s/afc15496e1b0c95852ffe5165bafca4212c5a09e51353d7e97ebb38cd9cb5038" protocol=ttrpc version=3 Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.315 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--58qzk-eth0 goldmane-58fd7646b9- calico-system 7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0 828 0 2025-07-07 06:16:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-58qzk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid5c7ece2a65 [] [] }} ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.315 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.340 [INFO][4605] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" HandleID="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Workload="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.340 [INFO][4605] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" HandleID="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Workload="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-58qzk", "timestamp":"2025-07-07 06:16:42.340635099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.340 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.340 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.340 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.346 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.350 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.354 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.356 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.358 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.358 [INFO][4605] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.360 [INFO][4605] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73 Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.369 [INFO][4605] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.376 [INFO][4605] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.376 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" host="localhost" Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.377 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:42.398613 containerd[1566]: 2025-07-07 06:16:42.377 [INFO][4605] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" HandleID="k8s-pod-network.bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Workload="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.399281 containerd[1566]: 2025-07-07 06:16:42.381 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--58qzk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-58qzk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid5c7ece2a65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:42.399281 containerd[1566]: 2025-07-07 06:16:42.381 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.399281 containerd[1566]: 2025-07-07 06:16:42.381 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5c7ece2a65 ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.399281 containerd[1566]: 2025-07-07 06:16:42.385 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.399281 containerd[1566]: 2025-07-07 06:16:42.385 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--58qzk-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73", Pod:"goldmane-58fd7646b9-58qzk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid5c7ece2a65", MAC:"62:dd:47:6b:e0:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:42.399281 containerd[1566]: 2025-07-07 06:16:42.395 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" Namespace="calico-system" Pod="goldmane-58fd7646b9-58qzk" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--58qzk-eth0" Jul 7 06:16:42.424491 systemd[1]: Started 
cri-containerd-370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096.scope - libcontainer container 370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096. Jul 7 06:16:42.795782 containerd[1566]: time="2025-07-07T06:16:42.795735759Z" level=info msg="StartContainer for \"370fed8ae24ed1e167cba5882bed5b0e7c03d4b516af5cfe41378283297aa096\" returns successfully" Jul 7 06:16:42.825207 containerd[1566]: time="2025-07-07T06:16:42.825152227Z" level=info msg="connecting to shim bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73" address="unix:///run/containerd/s/9c9a1987816b13abe91551df81abdf53ad366955aba85360f233338f1466f380" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:42.874757 systemd[1]: Started cri-containerd-bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73.scope - libcontainer container bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73. Jul 7 06:16:42.901136 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:43.104106 containerd[1566]: time="2025-07-07T06:16:43.104064659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88nxr,Uid:c10f9320-730f-40aa-b8e2-f257850dcd64,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:43.175678 containerd[1566]: time="2025-07-07T06:16:43.175608098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-58qzk,Uid:7aaa5a8e-e6af-4596-a4f9-90e0e93c84e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73\"" Jul 7 06:16:43.254459 kubelet[2694]: I0707 06:16:43.254287 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c6c4fc68-kmbdk" podStartSLOduration=26.87107786 podStartE2EDuration="29.254266374s" podCreationTimestamp="2025-07-07 06:16:14 +0000 UTC" firstStartedPulling="2025-07-07 06:16:39.926646998 +0000 
UTC m=+39.908616668" lastFinishedPulling="2025-07-07 06:16:42.309835502 +0000 UTC m=+42.291805182" observedRunningTime="2025-07-07 06:16:43.253275423 +0000 UTC m=+43.235245103" watchObservedRunningTime="2025-07-07 06:16:43.254266374 +0000 UTC m=+43.236236064" Jul 7 06:16:43.289083 systemd-networkd[1488]: calic0c2257ab39: Link UP Jul 7 06:16:43.290168 systemd-networkd[1488]: calic0c2257ab39: Gained carrier Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.212 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0 coredns-7c65d6cfc9- kube-system c10f9320-730f-40aa-b8e2-f257850dcd64 826 0 2025-07-07 06:16:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-88nxr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0c2257ab39 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.213 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.240 [INFO][4719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" HandleID="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" 
Workload="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.240 [INFO][4719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" HandleID="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Workload="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000522e40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-88nxr", "timestamp":"2025-07-07 06:16:43.240640331 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.240 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.240 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.241 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.250 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.257 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.261 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.263 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.266 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.266 [INFO][4719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.268 [INFO][4719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838 Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.272 [INFO][4719] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.279 [INFO][4719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.279 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" host="localhost" Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.279 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:43.302186 containerd[1566]: 2025-07-07 06:16:43.279 [INFO][4719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" HandleID="k8s-pod-network.b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Workload="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.302802 containerd[1566]: 2025-07-07 06:16:43.284 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c10f9320-730f-40aa-b8e2-f257850dcd64", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-88nxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0c2257ab39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:43.302802 containerd[1566]: 2025-07-07 06:16:43.284 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.302802 containerd[1566]: 2025-07-07 06:16:43.284 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0c2257ab39 ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.302802 containerd[1566]: 2025-07-07 06:16:43.287 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.302802 containerd[1566]: 2025-07-07 06:16:43.288 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c10f9320-730f-40aa-b8e2-f257850dcd64", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838", Pod:"coredns-7c65d6cfc9-88nxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0c2257ab39", MAC:"d6:bc:b0:f0:3a:57", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:43.302802 containerd[1566]: 2025-07-07 06:16:43.296 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88nxr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--88nxr-eth0" Jul 7 06:16:43.325865 containerd[1566]: time="2025-07-07T06:16:43.325810253Z" level=info msg="connecting to shim b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838" address="unix:///run/containerd/s/998172841ffc9329bfb4553bb338dcfb800eacf4d4d4b44ad1fc6a2216747adf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:43.352493 systemd[1]: Started cri-containerd-b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838.scope - libcontainer container b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838. 
Jul 7 06:16:43.367176 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:43.401536 containerd[1566]: time="2025-07-07T06:16:43.401457482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88nxr,Uid:c10f9320-730f-40aa-b8e2-f257850dcd64,Namespace:kube-system,Attempt:0,} returns sandbox id \"b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838\"" Jul 7 06:16:43.404973 containerd[1566]: time="2025-07-07T06:16:43.404558869Z" level=info msg="CreateContainer within sandbox \"b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:16:43.418773 containerd[1566]: time="2025-07-07T06:16:43.418465370Z" level=info msg="Container 264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:43.426259 containerd[1566]: time="2025-07-07T06:16:43.426231862Z" level=info msg="CreateContainer within sandbox \"b845a4b6bacb5c7da64fc059013eb4a870ba75838b7c6a47519308068fe2e838\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a\"" Jul 7 06:16:43.426803 containerd[1566]: time="2025-07-07T06:16:43.426784541Z" level=info msg="StartContainer for \"264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a\"" Jul 7 06:16:43.427825 containerd[1566]: time="2025-07-07T06:16:43.427781124Z" level=info msg="connecting to shim 264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a" address="unix:///run/containerd/s/998172841ffc9329bfb4553bb338dcfb800eacf4d4d4b44ad1fc6a2216747adf" protocol=ttrpc version=3 Jul 7 06:16:43.450459 systemd[1]: Started cri-containerd-264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a.scope - libcontainer container 264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a. 
Jul 7 06:16:43.483603 containerd[1566]: time="2025-07-07T06:16:43.483535014Z" level=info msg="StartContainer for \"264238fc0915a8809b5f7b7ce1157007e8eb4246f50a08f2039b64656439798a\" returns successfully" Jul 7 06:16:44.085480 systemd-networkd[1488]: calid5c7ece2a65: Gained IPv6LL Jul 7 06:16:44.103902 containerd[1566]: time="2025-07-07T06:16:44.103846672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvh2j,Uid:2be6cb3c-5acb-4657-8b32-4bff02f0153a,Namespace:calico-system,Attempt:0,}" Jul 7 06:16:44.216976 systemd-networkd[1488]: cali567ccc98656: Link UP Jul 7 06:16:44.217525 systemd-networkd[1488]: cali567ccc98656: Gained carrier Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.141 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rvh2j-eth0 csi-node-driver- calico-system 2be6cb3c-5acb-4657-8b32-4bff02f0153a 715 0 2025-07-07 06:16:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rvh2j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali567ccc98656 [] [] }} ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.141 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.232936 containerd[1566]: 
2025-07-07 06:16:44.169 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" HandleID="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Workload="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.169 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" HandleID="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Workload="localhost-k8s-csi--node--driver--rvh2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rvh2j", "timestamp":"2025-07-07 06:16:44.169710967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.171 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.171 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.171 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.179 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.184 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.189 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.191 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.193 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.193 [INFO][4838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.195 [INFO][4838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.202 [INFO][4838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.208 [INFO][4838] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.208 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" host="localhost" Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.208 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:16:44.232936 containerd[1566]: 2025-07-07 06:16:44.208 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" HandleID="k8s-pod-network.3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Workload="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.235169 containerd[1566]: 2025-07-07 06:16:44.212 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rvh2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2be6cb3c-5acb-4657-8b32-4bff02f0153a", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rvh2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali567ccc98656", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:44.235169 containerd[1566]: 2025-07-07 06:16:44.212 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.235169 containerd[1566]: 2025-07-07 06:16:44.212 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali567ccc98656 ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.235169 containerd[1566]: 2025-07-07 06:16:44.219 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.235169 containerd[1566]: 2025-07-07 06:16:44.219 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" 
Namespace="calico-system" Pod="csi-node-driver-rvh2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rvh2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2be6cb3c-5acb-4657-8b32-4bff02f0153a", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e", Pod:"csi-node-driver-rvh2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali567ccc98656", MAC:"ee:1a:8e:f4:60:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:16:44.235169 containerd[1566]: 2025-07-07 06:16:44.228 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" Namespace="calico-system" Pod="csi-node-driver-rvh2j" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rvh2j-eth0" Jul 7 06:16:44.246771 kubelet[2694]: I0707 06:16:44.246734 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:16:44.390919 kubelet[2694]: I0707 06:16:44.389678 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-88nxr" podStartSLOduration=39.389661688 podStartE2EDuration="39.389661688s" podCreationTimestamp="2025-07-07 06:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:44.37461048 +0000 UTC m=+44.356580160" watchObservedRunningTime="2025-07-07 06:16:44.389661688 +0000 UTC m=+44.371631368" Jul 7 06:16:44.421561 containerd[1566]: time="2025-07-07T06:16:44.421498934Z" level=info msg="connecting to shim 3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e" address="unix:///run/containerd/s/27ec14cad2e9e37d543adcb6fc7d7df5ea589db99e5a947fbe8f3d06a9cb9fa8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:44.495562 systemd[1]: Started cri-containerd-3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e.scope - libcontainer container 3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e. 
Jul 7 06:16:44.509115 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:16:44.523941 containerd[1566]: time="2025-07-07T06:16:44.523879596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rvh2j,Uid:2be6cb3c-5acb-4657-8b32-4bff02f0153a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e\"" Jul 7 06:16:44.533507 systemd-networkd[1488]: calic0c2257ab39: Gained IPv6LL Jul 7 06:16:44.624685 containerd[1566]: time="2025-07-07T06:16:44.624610939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:44.625480 containerd[1566]: time="2025-07-07T06:16:44.625423164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 06:16:44.626744 containerd[1566]: time="2025-07-07T06:16:44.626688172Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:44.628893 containerd[1566]: time="2025-07-07T06:16:44.628835605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:44.629346 containerd[1566]: time="2025-07-07T06:16:44.629280582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.318789999s" 
Jul 7 06:16:44.629346 containerd[1566]: time="2025-07-07T06:16:44.629323563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 06:16:44.630433 containerd[1566]: time="2025-07-07T06:16:44.630377213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:16:44.640552 containerd[1566]: time="2025-07-07T06:16:44.640165542Z" level=info msg="CreateContainer within sandbox \"d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:16:44.649030 containerd[1566]: time="2025-07-07T06:16:44.648915281Z" level=info msg="Container 3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:44.657545 containerd[1566]: time="2025-07-07T06:16:44.657508016Z" level=info msg="CreateContainer within sandbox \"d70a301b26db60649bd12d8e3a9271ec77eefcea8e12af33b7657b326bad137c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\"" Jul 7 06:16:44.658490 containerd[1566]: time="2025-07-07T06:16:44.658452920Z" level=info msg="StartContainer for \"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\"" Jul 7 06:16:44.659631 containerd[1566]: time="2025-07-07T06:16:44.659597571Z" level=info msg="connecting to shim 3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5" address="unix:///run/containerd/s/eca3daa3fd1d17efc0038619783c7d783f23c56e680ca55f979aaf0eed8e3883" protocol=ttrpc version=3 Jul 7 06:16:44.680650 systemd[1]: Started cri-containerd-3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5.scope - libcontainer container 3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5. 
Jul 7 06:16:44.735447 containerd[1566]: time="2025-07-07T06:16:44.735275233Z" level=info msg="StartContainer for \"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\" returns successfully" Jul 7 06:16:45.195760 containerd[1566]: time="2025-07-07T06:16:45.195702647Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:45.196553 containerd[1566]: time="2025-07-07T06:16:45.196516525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:16:45.198461 containerd[1566]: time="2025-07-07T06:16:45.198416986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 568.001151ms" Jul 7 06:16:45.198503 containerd[1566]: time="2025-07-07T06:16:45.198459756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:16:45.199364 containerd[1566]: time="2025-07-07T06:16:45.199305565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:16:45.216369 containerd[1566]: time="2025-07-07T06:16:45.216335989Z" level=info msg="CreateContainer within sandbox \"39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:16:45.224331 containerd[1566]: time="2025-07-07T06:16:45.224277388Z" level=info msg="Container b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:45.233531 containerd[1566]: 
time="2025-07-07T06:16:45.233482732Z" level=info msg="CreateContainer within sandbox \"39335970d94f946b74413ede0430f5a82371dcebecb47821f94881573691667b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98\"" Jul 7 06:16:45.233989 containerd[1566]: time="2025-07-07T06:16:45.233964677Z" level=info msg="StartContainer for \"b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98\"" Jul 7 06:16:45.235023 containerd[1566]: time="2025-07-07T06:16:45.234988269Z" level=info msg="connecting to shim b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98" address="unix:///run/containerd/s/004c4c4c6f133e1147f8645330ab42a27e86cd57aad6883950124ab2b1d32922" protocol=ttrpc version=3 Jul 7 06:16:45.258953 systemd[1]: Started cri-containerd-b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98.scope - libcontainer container b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98. 
Jul 7 06:16:45.309402 containerd[1566]: time="2025-07-07T06:16:45.309260148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\" id:\"0b0f644cf3680ac8999ca00fee2a67e9260d257dd1842826c0dd3c553c0d5132\" pid:4986 exited_at:{seconds:1751869005 nanos:309000911}" Jul 7 06:16:45.368442 kubelet[2694]: I0707 06:16:45.368350 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f447d75cf-cfxdd" podStartSLOduration=23.752039562 podStartE2EDuration="28.368332994s" podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="2025-07-07 06:16:40.013888413 +0000 UTC m=+39.995858093" lastFinishedPulling="2025-07-07 06:16:44.630181835 +0000 UTC m=+44.612151525" observedRunningTime="2025-07-07 06:16:45.366930979 +0000 UTC m=+45.348900659" watchObservedRunningTime="2025-07-07 06:16:45.368332994 +0000 UTC m=+45.350302674" Jul 7 06:16:45.669686 containerd[1566]: time="2025-07-07T06:16:45.669621871Z" level=info msg="StartContainer for \"b335c1b218918dca890473ee89a93ad984d27d091f62c362b34c7d518bc1bb98\" returns successfully" Jul 7 06:16:45.839410 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:54196.service - OpenSSH per-connection server daemon (10.0.0.1:54196). Jul 7 06:16:45.897427 sshd[5013]: Accepted publickey for core from 10.0.0.1 port 54196 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:16:45.898955 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:45.903515 systemd-logind[1538]: New session 9 of user core. Jul 7 06:16:45.910460 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 7 06:16:46.046633 sshd[5015]: Connection closed by 10.0.0.1 port 54196 Jul 7 06:16:46.047301 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:46.050627 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:54196.service: Deactivated successfully. Jul 7 06:16:46.052808 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:16:46.055338 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:16:46.056484 systemd-logind[1538]: Removed session 9. Jul 7 06:16:46.069494 systemd-networkd[1488]: cali567ccc98656: Gained IPv6LL Jul 7 06:16:46.302270 kubelet[2694]: I0707 06:16:46.301818 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c6c4fc68-ftmpb" podStartSLOduration=28.012110516 podStartE2EDuration="32.300994362s" podCreationTimestamp="2025-07-07 06:16:14 +0000 UTC" firstStartedPulling="2025-07-07 06:16:40.910256208 +0000 UTC m=+40.892225888" lastFinishedPulling="2025-07-07 06:16:45.199140054 +0000 UTC m=+45.181109734" observedRunningTime="2025-07-07 06:16:46.300308283 +0000 UTC m=+46.282277983" watchObservedRunningTime="2025-07-07 06:16:46.300994362 +0000 UTC m=+46.282964042" Jul 7 06:16:47.269201 kubelet[2694]: I0707 06:16:47.269160 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:16:48.266746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797855634.mount: Deactivated successfully. 
Jul 7 06:16:49.272601 containerd[1566]: time="2025-07-07T06:16:49.272551061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:49.273238 containerd[1566]: time="2025-07-07T06:16:49.273208346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:16:49.274376 containerd[1566]: time="2025-07-07T06:16:49.274348036Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:49.276787 containerd[1566]: time="2025-07-07T06:16:49.276755968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:49.277430 containerd[1566]: time="2025-07-07T06:16:49.277386733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.078033419s" Jul 7 06:16:49.277430 containerd[1566]: time="2025-07-07T06:16:49.277414495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:16:49.278226 containerd[1566]: time="2025-07-07T06:16:49.278190963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:16:49.284188 containerd[1566]: time="2025-07-07T06:16:49.284146708Z" level=info msg="CreateContainer within sandbox \"bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:16:49.300417 containerd[1566]: time="2025-07-07T06:16:49.300381177Z" level=info msg="Container 1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:49.308739 containerd[1566]: time="2025-07-07T06:16:49.308703778Z" level=info msg="CreateContainer within sandbox \"bc215d297ad3e8ac606d74d0cac7ff125e1fc5d74b31d0927a0ad11bc91aff73\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\"" Jul 7 06:16:49.309264 containerd[1566]: time="2025-07-07T06:16:49.309202965Z" level=info msg="StartContainer for \"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\"" Jul 7 06:16:49.310476 containerd[1566]: time="2025-07-07T06:16:49.310448063Z" level=info msg="connecting to shim 1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94" address="unix:///run/containerd/s/9c9a1987816b13abe91551df81abdf53ad366955aba85360f233338f1466f380" protocol=ttrpc version=3 Jul 7 06:16:49.365499 systemd[1]: Started cri-containerd-1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94.scope - libcontainer container 1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94. 
Jul 7 06:16:49.636607 containerd[1566]: time="2025-07-07T06:16:49.636401427Z" level=info msg="StartContainer for \"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\" returns successfully" Jul 7 06:16:50.291449 kubelet[2694]: I0707 06:16:50.290819 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-58qzk" podStartSLOduration=27.190474644 podStartE2EDuration="33.290799637s" podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="2025-07-07 06:16:43.177724414 +0000 UTC m=+43.159694094" lastFinishedPulling="2025-07-07 06:16:49.278049407 +0000 UTC m=+49.260019087" observedRunningTime="2025-07-07 06:16:50.290302053 +0000 UTC m=+50.272271743" watchObservedRunningTime="2025-07-07 06:16:50.290799637 +0000 UTC m=+50.272769327" Jul 7 06:16:50.377547 containerd[1566]: time="2025-07-07T06:16:50.377494763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\" id:\"a9f7cca79026d9e42ae93cf8ee9760d9ef813c27ff3bf38fe1fbfe6c5f64ad8b\" pid:5101 exit_status:1 exited_at:{seconds:1751869010 nanos:377096796}" Jul 7 06:16:50.606377 containerd[1566]: time="2025-07-07T06:16:50.606294510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:50.607034 containerd[1566]: time="2025-07-07T06:16:50.606994574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:16:50.608168 containerd[1566]: time="2025-07-07T06:16:50.608132301Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:50.609996 containerd[1566]: time="2025-07-07T06:16:50.609957579Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:50.610505 containerd[1566]: time="2025-07-07T06:16:50.610482475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.332262207s" Jul 7 06:16:50.610582 containerd[1566]: time="2025-07-07T06:16:50.610507422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:16:50.612333 containerd[1566]: time="2025-07-07T06:16:50.612283126Z" level=info msg="CreateContainer within sandbox \"3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:16:50.625837 containerd[1566]: time="2025-07-07T06:16:50.624614704Z" level=info msg="Container 7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:50.651103 containerd[1566]: time="2025-07-07T06:16:50.651052772Z" level=info msg="CreateContainer within sandbox \"3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5\"" Jul 7 06:16:50.651807 containerd[1566]: time="2025-07-07T06:16:50.651563441Z" level=info msg="StartContainer for \"7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5\"" Jul 7 06:16:50.653182 containerd[1566]: time="2025-07-07T06:16:50.653150111Z" level=info msg="connecting to shim 
7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5" address="unix:///run/containerd/s/27ec14cad2e9e37d543adcb6fc7d7df5ea589db99e5a947fbe8f3d06a9cb9fa8" protocol=ttrpc version=3 Jul 7 06:16:50.675458 systemd[1]: Started cri-containerd-7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5.scope - libcontainer container 7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5. Jul 7 06:16:50.714491 containerd[1566]: time="2025-07-07T06:16:50.714446841Z" level=info msg="StartContainer for \"7878474fc759642c035dc0ac3427dace1e711d8cfd65cf213ec3646aca7b55d5\" returns successfully" Jul 7 06:16:50.716062 containerd[1566]: time="2025-07-07T06:16:50.715493507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:16:51.058902 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034). Jul 7 06:16:51.195225 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M Jul 7 06:16:51.196919 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:51.201433 systemd-logind[1538]: New session 10 of user core. Jul 7 06:16:51.212569 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:16:51.351951 sshd[5154]: Connection closed by 10.0.0.1 port 45034 Jul 7 06:16:51.352516 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:51.356974 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:16:51.357661 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:45034.service: Deactivated successfully. Jul 7 06:16:51.359803 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:16:51.362143 systemd-logind[1538]: Removed session 10. 
Jul 7 06:16:51.380399 containerd[1566]: time="2025-07-07T06:16:51.380346772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\" id:\"2fd664e73111a3b1a9df14873b50f77ab302c8bad8217dd3a433fd2d71c27b61\" pid:5177 exit_status:1 exited_at:{seconds:1751869011 nanos:380061086}"
Jul 7 06:16:53.281389 containerd[1566]: time="2025-07-07T06:16:53.281301642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:53.282866 containerd[1566]: time="2025-07-07T06:16:53.282838357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 7 06:16:53.284090 containerd[1566]: time="2025-07-07T06:16:53.284065290Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:53.286649 containerd[1566]: time="2025-07-07T06:16:53.286614096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:53.287055 containerd[1566]: time="2025-07-07T06:16:53.287026080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.571502527s"
Jul 7 06:16:53.287097 containerd[1566]: time="2025-07-07T06:16:53.287058170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 7 06:16:53.288961 containerd[1566]: time="2025-07-07T06:16:53.288934974Z" level=info msg="CreateContainer within sandbox \"3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 06:16:53.296724 containerd[1566]: time="2025-07-07T06:16:53.296675587Z" level=info msg="Container 9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:53.307241 containerd[1566]: time="2025-07-07T06:16:53.307197754Z" level=info msg="CreateContainer within sandbox \"3356c8d9aa652820c28a4a8386ab112cbc4da6df7a23a5bb107bf5d2546def7e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734\""
Jul 7 06:16:53.307692 containerd[1566]: time="2025-07-07T06:16:53.307652788Z" level=info msg="StartContainer for \"9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734\""
Jul 7 06:16:53.309233 containerd[1566]: time="2025-07-07T06:16:53.309205513Z" level=info msg="connecting to shim 9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734" address="unix:///run/containerd/s/27ec14cad2e9e37d543adcb6fc7d7df5ea589db99e5a947fbe8f3d06a9cb9fa8" protocol=ttrpc version=3
Jul 7 06:16:53.332495 systemd[1]: Started cri-containerd-9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734.scope - libcontainer container 9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734.
Jul 7 06:16:53.387265 containerd[1566]: time="2025-07-07T06:16:53.387211407Z" level=info msg="StartContainer for \"9b928bf500b4b8bd85a2319435477eb8467bbdac4268197fce0d876043068734\" returns successfully"
Jul 7 06:16:54.178759 kubelet[2694]: I0707 06:16:54.178717 2694 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 06:16:54.178759 kubelet[2694]: I0707 06:16:54.178751 2694 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 06:16:54.303378 kubelet[2694]: I0707 06:16:54.303127 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rvh2j" podStartSLOduration=28.54033748 podStartE2EDuration="37.303112586s" podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="2025-07-07 06:16:44.525013025 +0000 UTC m=+44.506982695" lastFinishedPulling="2025-07-07 06:16:53.287788121 +0000 UTC m=+53.269757801" observedRunningTime="2025-07-07 06:16:54.302516518 +0000 UTC m=+54.284486198" watchObservedRunningTime="2025-07-07 06:16:54.303112586 +0000 UTC m=+54.285082256"
Jul 7 06:16:56.364285 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:36632.service - OpenSSH per-connection server daemon (10.0.0.1:36632).
Jul 7 06:16:56.394424 kubelet[2694]: I0707 06:16:56.394390 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:16:56.428705 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 36632 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:16:56.430377 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:56.435128 systemd-logind[1538]: New session 11 of user core.
Jul 7 06:16:56.442435 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:16:56.634435 sshd[5240]: Connection closed by 10.0.0.1 port 36632
Jul 7 06:16:56.633860 sshd-session[5238]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:56.646403 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:36632.service: Deactivated successfully.
Jul 7 06:16:56.648737 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:16:56.650897 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:16:56.654530 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:36638.service - OpenSSH per-connection server daemon (10.0.0.1:36638).
Jul 7 06:16:56.656494 systemd-logind[1538]: Removed session 11.
Jul 7 06:16:56.702833 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 36638 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:16:56.704163 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:56.708691 systemd-logind[1538]: New session 12 of user core.
Jul 7 06:16:56.715437 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:16:56.861122 sshd[5261]: Connection closed by 10.0.0.1 port 36638
Jul 7 06:16:56.862551 sshd-session[5259]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:56.877383 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:36638.service: Deactivated successfully.
Jul 7 06:16:56.880357 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:16:56.881297 systemd-logind[1538]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:16:56.884775 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:36646.service - OpenSSH per-connection server daemon (10.0.0.1:36646).
Jul 7 06:16:56.885447 systemd-logind[1538]: Removed session 12.
Jul 7 06:16:56.946540 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 36646 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:16:56.948493 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:56.952721 systemd-logind[1538]: New session 13 of user core.
Jul 7 06:16:56.960441 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:16:57.071110 sshd[5274]: Connection closed by 10.0.0.1 port 36646
Jul 7 06:16:57.071480 sshd-session[5272]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:57.075739 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:36646.service: Deactivated successfully.
Jul 7 06:16:57.077670 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:16:57.078704 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:16:57.079796 systemd-logind[1538]: Removed session 13.
Jul 7 06:17:02.090613 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648).
Jul 7 06:17:02.151011 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:02.152267 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:02.156624 systemd-logind[1538]: New session 14 of user core.
Jul 7 06:17:02.162448 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:17:02.293635 sshd[5295]: Connection closed by 10.0.0.1 port 36648
Jul 7 06:17:02.294024 sshd-session[5293]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:02.298307 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:36648.service: Deactivated successfully.
Jul 7 06:17:02.300456 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:17:02.301283 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:17:02.302628 systemd-logind[1538]: Removed session 14.
Jul 7 06:17:03.584527 containerd[1566]: time="2025-07-07T06:17:03.584477802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\" id:\"f182778b513efc03ddbd6796a6c8becfc8318ebd7cb96763c991bbda64e4ff85\" pid:5319 exited_at:{seconds:1751869023 nanos:584180943}"
Jul 7 06:17:03.726634 containerd[1566]: time="2025-07-07T06:17:03.726582927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\" id:\"0602105cfe2c47d2e98a16bb93a04d7a720283059e66a4fedbd4419a3cddbe63\" pid:5341 exit_status:1 exited_at:{seconds:1751869023 nanos:726231665}"
Jul 7 06:17:06.583689 containerd[1566]: time="2025-07-07T06:17:06.583628729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\" id:\"befb335bc04f34d16213351e1aa88d78db741081b3bcac3e1c90b3c592918a9f\" pid:5370 exited_at:{seconds:1751869026 nanos:583291385}"
Jul 7 06:17:07.204331 containerd[1566]: time="2025-07-07T06:17:07.204276153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\" id:\"a9e5472ea335f1e21f3eb2b30c32be5ad8128c1b513b2ee15f9af7c503fad625\" pid:5396 exited_at:{seconds:1751869027 nanos:203929802}"
Jul 7 06:17:07.310277 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:60294.service - OpenSSH per-connection server daemon (10.0.0.1:60294).
Jul 7 06:17:07.371462 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 60294 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:07.372869 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:07.378698 systemd-logind[1538]: New session 15 of user core.
Jul 7 06:17:07.387452 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:17:07.510680 sshd[5411]: Connection closed by 10.0.0.1 port 60294
Jul 7 06:17:07.510894 sshd-session[5409]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:07.515899 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:60294.service: Deactivated successfully.
Jul 7 06:17:07.518221 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:17:07.519006 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:17:07.520419 systemd-logind[1538]: Removed session 15.
Jul 7 06:17:09.850717 kubelet[2694]: I0707 06:17:09.850651 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:17:12.528182 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:60296.service - OpenSSH per-connection server daemon (10.0.0.1:60296).
Jul 7 06:17:12.578628 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 60296 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:12.580340 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:12.584593 systemd-logind[1538]: New session 16 of user core.
Jul 7 06:17:12.594461 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:17:12.710182 sshd[5431]: Connection closed by 10.0.0.1 port 60296
Jul 7 06:17:12.710529 sshd-session[5429]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:12.716399 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:60296.service: Deactivated successfully.
Jul 7 06:17:12.719006 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:17:12.719922 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:17:12.722081 systemd-logind[1538]: Removed session 16.
Jul 7 06:17:15.103191 kubelet[2694]: E0707 06:17:15.103148 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:17:17.727591 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:35952.service - OpenSSH per-connection server daemon (10.0.0.1:35952).
Jul 7 06:17:17.793514 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 35952 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:17.795570 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:17.801181 systemd-logind[1538]: New session 17 of user core.
Jul 7 06:17:17.806448 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:17:17.988549 sshd[5454]: Connection closed by 10.0.0.1 port 35952
Jul 7 06:17:17.989209 sshd-session[5452]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:17.998529 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:35952.service: Deactivated successfully.
Jul 7 06:17:18.000453 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:17:18.002245 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:17:18.006411 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:35966.service - OpenSSH per-connection server daemon (10.0.0.1:35966).
Jul 7 06:17:18.008127 systemd-logind[1538]: Removed session 17.
Jul 7 06:17:18.056745 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 35966 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:18.058518 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:18.065717 systemd-logind[1538]: New session 18 of user core.
Jul 7 06:17:18.074500 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:17:18.274610 sshd[5469]: Connection closed by 10.0.0.1 port 35966
Jul 7 06:17:18.274955 sshd-session[5467]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:18.287937 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:35966.service: Deactivated successfully.
Jul 7 06:17:18.293050 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:17:18.295510 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:17:18.305967 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972).
Jul 7 06:17:18.315556 systemd-logind[1538]: Removed session 18.
Jul 7 06:17:18.379533 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:18.381215 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:18.386616 systemd-logind[1538]: New session 19 of user core.
Jul 7 06:17:18.393576 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:17:20.104179 kubelet[2694]: E0707 06:17:20.104123 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:17:20.183883 sshd[5484]: Connection closed by 10.0.0.1 port 35972
Jul 7 06:17:20.185340 sshd-session[5481]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:20.196275 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:35972.service: Deactivated successfully.
Jul 7 06:17:20.198078 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:17:20.198353 systemd[1]: session-19.scope: Consumed 638ms CPU time, 76M memory peak.
Jul 7 06:17:20.199862 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:17:20.202231 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:35976.service - OpenSSH per-connection server daemon (10.0.0.1:35976).
Jul 7 06:17:20.203509 systemd-logind[1538]: Removed session 19.
Jul 7 06:17:20.256267 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 35976 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:20.257845 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:20.262398 systemd-logind[1538]: New session 20 of user core.
Jul 7 06:17:20.270454 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:17:20.585800 sshd[5506]: Connection closed by 10.0.0.1 port 35976
Jul 7 06:17:20.586595 sshd-session[5504]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:20.596122 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:35976.service: Deactivated successfully.
Jul 7 06:17:20.598203 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:17:20.599475 systemd-logind[1538]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:17:20.603989 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:35988.service - OpenSSH per-connection server daemon (10.0.0.1:35988).
Jul 7 06:17:20.604894 systemd-logind[1538]: Removed session 20.
Jul 7 06:17:20.673567 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 35988 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:20.676118 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:20.681792 systemd-logind[1538]: New session 21 of user core.
Jul 7 06:17:20.696434 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:17:20.808048 sshd[5521]: Connection closed by 10.0.0.1 port 35988
Jul 7 06:17:20.808381 sshd-session[5519]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:20.812868 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:35988.service: Deactivated successfully.
Jul 7 06:17:20.815134 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:17:20.815957 systemd-logind[1538]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:17:20.817068 systemd-logind[1538]: Removed session 21.
Jul 7 06:17:25.819262 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998).
Jul 7 06:17:25.865304 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:25.866769 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:25.870992 systemd-logind[1538]: New session 22 of user core.
Jul 7 06:17:25.882433 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:17:25.989563 sshd[5541]: Connection closed by 10.0.0.1 port 35998
Jul 7 06:17:25.989851 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:25.994735 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:35998.service: Deactivated successfully.
Jul 7 06:17:25.997057 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:17:25.997943 systemd-logind[1538]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:17:25.999416 systemd-logind[1538]: Removed session 22.
Jul 7 06:17:31.002555 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764).
Jul 7 06:17:31.042283 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:31.043523 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:31.047725 systemd-logind[1538]: New session 23 of user core.
Jul 7 06:17:31.055455 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:17:31.162388 sshd[5557]: Connection closed by 10.0.0.1 port 53764
Jul 7 06:17:31.162721 sshd-session[5555]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:31.166965 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:53764.service: Deactivated successfully.
Jul 7 06:17:31.168984 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:17:31.169733 systemd-logind[1538]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:17:31.170792 systemd-logind[1538]: Removed session 23.
Jul 7 06:17:31.281296 containerd[1566]: time="2025-07-07T06:17:31.281194605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\" id:\"f8ad15b6111fc1b378c510cbe3a65cd456db423d1ad6827c79d80f1641b469a8\" pid:5580 exited_at:{seconds:1751869051 nanos:281027086}"
Jul 7 06:17:33.586334 containerd[1566]: time="2025-07-07T06:17:33.586272213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bfcaf497b5052e2815c831f17b632f6584c54224cd2aac1dc9bd532c1487ae5\" id:\"916ea722daf637c38a91aa2594f7d022fe271d15bd3c3b5ebcd196c01a905194\" pid:5602 exited_at:{seconds:1751869053 nanos:585994904}"
Jul 7 06:17:33.724744 containerd[1566]: time="2025-07-07T06:17:33.724680626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c1bc9a1e174c09804ca2465aaf15de05d6d83322d055fbf2ad9b5b3ead984f1\" id:\"b6738eb4ca803a5332503c1be371980649df1d05b853fada94bea693be10ae4e\" pid:5623 exited_at:{seconds:1751869053 nanos:724335478}"
Jul 7 06:17:36.175355 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:35402.service - OpenSSH per-connection server daemon (10.0.0.1:35402).
Jul 7 06:17:36.226501 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 35402 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:36.227814 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:36.231749 systemd-logind[1538]: New session 24 of user core.
Jul 7 06:17:36.239428 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:17:36.346270 sshd[5641]: Connection closed by 10.0.0.1 port 35402
Jul 7 06:17:36.346624 sshd-session[5639]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:36.351673 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:35402.service: Deactivated successfully.
Jul 7 06:17:36.354296 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:17:36.355142 systemd-logind[1538]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:17:36.356594 systemd-logind[1538]: Removed session 24.
Jul 7 06:17:36.590944 containerd[1566]: time="2025-07-07T06:17:36.590878530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1471e42a2edc3afc6ae282972185e8de463b72f5f56921741b91bc5a9d857a94\" id:\"d837d1c81a691df508144862f5b0847588ffe8e3832782654833e4c8baaa01ed\" pid:5665 exited_at:{seconds:1751869056 nanos:590443442}"
Jul 7 06:17:41.370861 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:35412.service - OpenSSH per-connection server daemon (10.0.0.1:35412).
Jul 7 06:17:41.431630 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 35412 ssh2: RSA SHA256:lM1+QfY1+TxW9jK1A/TPIM6/Ft6LQX1Zpr4Dn3u4l9M
Jul 7 06:17:41.433164 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:41.437691 systemd-logind[1538]: New session 25 of user core.
Jul 7 06:17:41.449466 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:17:41.568227 sshd[5679]: Connection closed by 10.0.0.1 port 35412
Jul 7 06:17:41.568558 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:41.572874 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:35412.service: Deactivated successfully.
Jul 7 06:17:41.574930 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:17:41.575969 systemd-logind[1538]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:17:41.577613 systemd-logind[1538]: Removed session 25.
Jul 7 06:17:42.103979 kubelet[2694]: E0707 06:17:42.103939 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"