Jul 15 23:59:58.865980 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025
Jul 15 23:59:58.866002 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:59:58.866011 kernel: BIOS-provided physical RAM map:
Jul 15 23:59:58.866018 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 23:59:58.866024 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 23:59:58.866031 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 23:59:58.866038 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 23:59:58.866047 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 23:59:58.866054 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 23:59:58.866069 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 23:59:58.866075 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 23:59:58.866082 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 23:59:58.866088 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 23:59:58.866095 kernel: NX (Execute Disable) protection: active
Jul 15 23:59:58.866105 kernel: APIC: Static calls initialized
Jul 15 23:59:58.866112 kernel: SMBIOS 2.8 present.
Jul 15 23:59:58.866120 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 23:59:58.866127 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:59:58.866134 kernel: Hypervisor detected: KVM
Jul 15 23:59:58.866141 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 23:59:58.866148 kernel: kvm-clock: using sched offset of 3403881804 cycles
Jul 15 23:59:58.866155 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:59:58.866163 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 23:59:58.866170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 23:59:58.866180 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 23:59:58.866187 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 23:59:58.866195 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 23:59:58.866202 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 23:59:58.866209 kernel: Using GB pages for direct mapping
Jul 15 23:59:58.866216 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:59:58.866224 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 23:59:58.866231 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866241 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866248 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866255 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 23:59:58.866262 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866269 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866277 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866284 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:59:58.866291 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 23:59:58.866304 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 23:59:58.866311 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 23:59:58.866318 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 23:59:58.866326 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 23:59:58.866333 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 23:59:58.866341 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 23:59:58.866350 kernel: No NUMA configuration found
Jul 15 23:59:58.866357 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 23:59:58.866365 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 15 23:59:58.866387 kernel: Zone ranges:
Jul 15 23:59:58.866395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 23:59:58.866402 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 23:59:58.866410 kernel: Normal empty
Jul 15 23:59:58.866417 kernel: Device empty
Jul 15 23:59:58.866424 kernel: Movable zone start for each node
Jul 15 23:59:58.866432 kernel: Early memory node ranges
Jul 15 23:59:58.866442 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 23:59:58.866449 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 23:59:58.866456 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 23:59:58.866464 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 23:59:58.866471 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 23:59:58.866478 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 23:59:58.866486 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 23:59:58.866493 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 23:59:58.866501 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 23:59:58.866510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 23:59:58.866518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 23:59:58.866525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 23:59:58.866533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 23:59:58.866540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 23:59:58.866548 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 23:59:58.866555 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 23:59:58.866562 kernel: TSC deadline timer available
Jul 15 23:59:58.866570 kernel: CPU topo: Max. logical packages: 1
Jul 15 23:59:58.866579 kernel: CPU topo: Max. logical dies: 1
Jul 15 23:59:58.866586 kernel: CPU topo: Max. dies per package: 1
Jul 15 23:59:58.866593 kernel: CPU topo: Max. threads per core: 1
Jul 15 23:59:58.866601 kernel: CPU topo: Num. cores per package: 4
Jul 15 23:59:58.866608 kernel: CPU topo: Num. threads per package: 4
Jul 15 23:59:58.866615 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 15 23:59:58.866623 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 23:59:58.866630 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 23:59:58.866637 kernel: kvm-guest: setup PV sched yield
Jul 15 23:59:58.866645 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 23:59:58.866654 kernel: Booting paravirtualized kernel on KVM
Jul 15 23:59:58.866662 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 23:59:58.866669 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 15 23:59:58.866677 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 15 23:59:58.866684 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 15 23:59:58.866692 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 23:59:58.866699 kernel: kvm-guest: PV spinlocks enabled
Jul 15 23:59:58.866706 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 23:59:58.866715 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:59:58.866725 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:59:58.866732 kernel: random: crng init done
Jul 15 23:59:58.866740 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:59:58.866747 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:59:58.866755 kernel: Fallback order for Node 0: 0
Jul 15 23:59:58.866762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 15 23:59:58.866769 kernel: Policy zone: DMA32
Jul 15 23:59:58.866777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:59:58.866786 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:59:58.866794 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 15 23:59:58.866801 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 23:59:58.866809 kernel: Dynamic Preempt: voluntary
Jul 15 23:59:58.866816 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:59:58.866824 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:59:58.866831 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:59:58.866839 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:59:58.866847 kernel: Rude variant of Tasks RCU enabled.
Jul 15 23:59:58.866856 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:59:58.866864 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:59:58.866871 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:59:58.866879 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:59:58.866886 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:59:58.866894 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:59:58.866901 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 23:59:58.866909 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:59:58.866925 kernel: Console: colour VGA+ 80x25
Jul 15 23:59:58.866933 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:59:58.866941 kernel: ACPI: Core revision 20240827
Jul 15 23:59:58.866948 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 23:59:58.866958 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 23:59:58.866966 kernel: x2apic enabled
Jul 15 23:59:58.866974 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 23:59:58.866981 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 23:59:58.866990 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 23:59:58.866999 kernel: kvm-guest: setup PV IPIs
Jul 15 23:59:58.867007 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 23:59:58.867015 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 23:59:58.867023 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 23:59:58.867031 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 23:59:58.867038 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 23:59:58.867046 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 23:59:58.867054 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 23:59:58.867069 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 23:59:58.867079 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 23:59:58.867087 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 23:59:58.867095 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 23:59:58.867102 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 23:59:58.867110 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 23:59:58.867118 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 23:59:58.867127 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 23:59:58.867134 kernel: x86/bugs: return thunk changed
Jul 15 23:59:58.867145 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 23:59:58.867152 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 23:59:58.867160 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 23:59:58.867168 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 23:59:58.867176 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 23:59:58.867184 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 15 23:59:58.867191 kernel: Freeing SMP alternatives memory: 32K
Jul 15 23:59:58.867199 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:59:58.867207 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:59:58.867217 kernel: landlock: Up and running.
Jul 15 23:59:58.867224 kernel: SELinux: Initializing.
Jul 15 23:59:58.867232 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:59:58.867240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:59:58.867248 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 23:59:58.867256 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 23:59:58.867263 kernel: ... version: 0
Jul 15 23:59:58.867271 kernel: ... bit width: 48
Jul 15 23:59:58.867279 kernel: ... generic registers: 6
Jul 15 23:59:58.867288 kernel: ... value mask: 0000ffffffffffff
Jul 15 23:59:58.867296 kernel: ... max period: 00007fffffffffff
Jul 15 23:59:58.867304 kernel: ... fixed-purpose events: 0
Jul 15 23:59:58.867311 kernel: ... event mask: 000000000000003f
Jul 15 23:59:58.867319 kernel: signal: max sigframe size: 1776
Jul 15 23:59:58.867327 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:59:58.867334 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:59:58.867342 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:59:58.867350 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:59:58.867360 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 23:59:58.867367 kernel: .... node #0, CPUs: #1 #2 #3
Jul 15 23:59:58.867389 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:59:58.867404 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 23:59:58.867418 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 136904K reserved, 0K cma-reserved)
Jul 15 23:59:58.867427 kernel: devtmpfs: initialized
Jul 15 23:59:58.867438 kernel: x86/mm: Memory block size: 128MB
Jul 15 23:59:58.867453 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:59:58.867463 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:59:58.867478 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:59:58.867487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:59:58.867497 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:59:58.867508 kernel: audit: type=2000 audit(1752623995.671:1): state=initialized audit_enabled=0 res=1
Jul 15 23:59:58.867518 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:59:58.867528 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 23:59:58.867538 kernel: cpuidle: using governor menu
Jul 15 23:59:58.867545 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:59:58.867553 kernel: dca service started, version 1.12.1
Jul 15 23:59:58.867565 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 23:59:58.867576 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 23:59:58.867587 kernel: PCI: Using configuration type 1 for base access
Jul 15 23:59:58.867597 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 23:59:58.867607 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:59:58.867618 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:59:58.867628 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:59:58.867638 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:59:58.867646 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:59:58.867659 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:59:58.867669 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:59:58.867680 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:59:58.867690 kernel: ACPI: Interpreter enabled
Jul 15 23:59:58.867700 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 23:59:58.867710 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 23:59:58.867721 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 23:59:58.867731 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 23:59:58.867742 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 23:59:58.867755 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:59:58.867964 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:59:58.868122 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 23:59:58.868263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 23:59:58.868278 kernel: PCI host bridge to bus 0000:00
Jul 15 23:59:58.868442 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 23:59:58.868576 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 23:59:58.868740 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 23:59:58.868898 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 23:59:58.869038 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 23:59:58.869183 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 23:59:58.869335 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:59:58.869588 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:59:58.869759 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 23:59:58.869898 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 23:59:58.870045 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 23:59:58.870205 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 23:59:58.870348 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 23:59:58.870515 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:59:58.870633 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 15 23:59:58.870752 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 23:59:58.870892 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 23:59:58.871029 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 15 23:59:58.871233 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 15 23:59:58.871421 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 23:59:58.871548 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 23:59:58.871683 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 23:59:58.871807 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 15 23:59:58.871930 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 15 23:59:58.872046 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 23:59:58.872193 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 23:59:58.872348 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 23:59:58.872507 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 23:59:58.872638 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 23:59:58.872754 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 15 23:59:58.872894 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 15 23:59:58.873121 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 23:59:58.873251 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 23:59:58.873262 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 23:59:58.873270 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 23:59:58.873282 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 23:59:58.873290 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 23:59:58.873298 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 23:59:58.873306 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 23:59:58.873313 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 23:59:58.873321 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 23:59:58.873329 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 23:59:58.873337 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 23:59:58.873345 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 23:59:58.873354 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 23:59:58.873362 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 23:59:58.873386 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 23:59:58.873394 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 23:59:58.873402 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 23:59:58.873410 kernel: iommu: Default domain type: Translated
Jul 15 23:59:58.873418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 23:59:58.873426 kernel: PCI: Using ACPI for IRQ routing
Jul 15 23:59:58.873434 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 23:59:58.873442 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 23:59:58.873453 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 23:59:58.873590 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 23:59:58.873754 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 23:59:58.873903 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 23:59:58.873918 kernel: vgaarb: loaded
Jul 15 23:59:58.873929 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 23:59:58.873939 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 23:59:58.873949 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 23:59:58.873963 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:59:58.873974 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:59:58.873984 kernel: pnp: PnP ACPI init
Jul 15 23:59:58.875424 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 23:59:58.875446 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 23:59:58.875458 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 23:59:58.875469 kernel: NET: Registered PF_INET protocol family
Jul 15 23:59:58.875479 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:59:58.875495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:59:58.875505 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:59:58.875516 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:59:58.875526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:59:58.875537 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:59:58.875546 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:59:58.875556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:59:58.875566 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:59:58.875577 kernel: NET: Registered PF_XDP protocol family
Jul 15 23:59:58.875726 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 23:59:58.875858 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 23:59:58.875992 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 23:59:58.876137 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 23:59:58.876267 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 23:59:58.876430 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 23:59:58.876445 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:59:58.876457 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 23:59:58.876473 kernel: Initialise system trusted keyrings
Jul 15 23:59:58.876483 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:59:58.876494 kernel: Key type asymmetric registered
Jul 15 23:59:58.876505 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:59:58.876515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 23:59:58.876526 kernel: io scheduler mq-deadline registered
Jul 15 23:59:58.876536 kernel: io scheduler kyber registered
Jul 15 23:59:58.876547 kernel: io scheduler bfq registered
Jul 15 23:59:58.876558 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 23:59:58.876572 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 23:59:58.876583 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 23:59:58.876594 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 23:59:58.876604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:59:58.876615 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 23:59:58.876626 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 23:59:58.876637 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 23:59:58.876647 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 23:59:58.876658 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 23:59:58.876815 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 23:59:58.876951 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 23:59:58.877100 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T23:59:58 UTC (1752623998)
Jul 15 23:59:58.877236 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 23:59:58.877251 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 23:59:58.877262 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:59:58.877272 kernel: Segment Routing with IPv6
Jul 15 23:59:58.877283 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:59:58.877297 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:59:58.877308 kernel: Key type dns_resolver registered
Jul 15 23:59:58.877319 kernel: IPI shorthand broadcast: enabled
Jul 15 23:59:58.877329 kernel: sched_clock: Marking stable (2964002692, 132177714)->(3130412625, -34232219)
Jul 15 23:59:58.877339 kernel: registered taskstats version 1
Jul 15 23:59:58.877350 kernel: Loading compiled-in X.509 certificates
Jul 15 23:59:58.877361 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d'
Jul 15 23:59:58.877391 kernel: Demotion targets for Node 0: null
Jul 15 23:59:58.877403 kernel: Key type .fscrypt registered
Jul 15 23:59:58.877416 kernel: Key type fscrypt-provisioning registered
Jul 15 23:59:58.877427 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:59:58.877437 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:59:58.877448 kernel: ima: No architecture policies found
Jul 15 23:59:58.877458 kernel: clk: Disabling unused clocks
Jul 15 23:59:58.877469 kernel: Warning: unable to open an initial console.
Jul 15 23:59:58.877480 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jul 15 23:59:58.877491 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 23:59:58.877501 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 23:59:58.877514 kernel: Run /init as init process
Jul 15 23:59:58.877524 kernel: with arguments:
Jul 15 23:59:58.877535 kernel: /init
Jul 15 23:59:58.877545 kernel: with environment:
Jul 15 23:59:58.877555 kernel: HOME=/
Jul 15 23:59:58.877566 kernel: TERM=linux
Jul 15 23:59:58.877577 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:59:58.877588 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:59:58.877606 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:59:58.877632 systemd[1]: Detected virtualization kvm.
Jul 15 23:59:58.877644 systemd[1]: Detected architecture x86-64.
Jul 15 23:59:58.877655 systemd[1]: Running in initrd.
Jul 15 23:59:58.877667 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:59:58.877681 systemd[1]: Hostname set to .
Jul 15 23:59:58.877692 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:59:58.877703 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:59:58.877715 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:59:58.877727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:59:58.877739 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:59:58.877751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:59:58.877763 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:59:58.877778 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:59:58.877791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:59:58.877803 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:59:58.877815 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:59:58.877829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:59:58.877840 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:59:58.877852 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:59:58.877863 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:59:58.877876 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:59:58.877888 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:59:58.877899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:59:58.877911 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:59:58.877923 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:59:58.877934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:59:58.877946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:59:58.877957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:59:58.877971 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:59:58.877983 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:59:58.877994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:59:58.878006 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:59:58.878018 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:59:58.878035 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:59:58.878047 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:59:58.878068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:59:58.878080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:59:58.878092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:59:58.878104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:59:58.878119 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:59:58.878131 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:59:58.878169 systemd-journald[219]: Collecting audit messages is disabled.
Jul 15 23:59:58.878201 systemd-journald[219]: Journal started
Jul 15 23:59:58.878227 systemd-journald[219]: Runtime Journal (/run/log/journal/07bd37d82bc24d10bf76528501cc10b0) is 6M, max 48.6M, 42.5M free.
Jul 15 23:59:58.871391 systemd-modules-load[221]: Inserted module 'overlay'
Jul 15 23:59:58.911239 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:59:58.911268 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:59:58.911299 kernel: Bridge firewalling registered
Jul 15 23:59:58.901164 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 15 23:59:58.912642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:59:58.915545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:59:58.918336 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:59:58.925536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:59:58.941335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:59:58.947128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:59:58.952763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:59:58.963590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:59:58.964040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:59:58.964632 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:59:58.969534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:59:58.971844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:59:58.989451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:59:58.991122 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:59:59.027514 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:59:59.030516 systemd-resolved[255]: Positive Trust Anchors:
Jul 15 23:59:59.030525 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:59:59.030555 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:59:59.033241 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jul 15 23:59:59.034413 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:59:59.035829 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:59:59.141415 kernel: SCSI subsystem initialized
Jul 15 23:59:59.151414 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:59:59.161409 kernel: iscsi: registered transport (tcp)
Jul 15 23:59:59.183413 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:59:59.183484 kernel: QLogic iSCSI HBA Driver
Jul 15 23:59:59.203490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:59:59.220936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:59:59.224833 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:59:59.271751 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:59:59.275086 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:59:59.325418 kernel: raid6: avx2x4 gen() 22508 MB/s
Jul 15 23:59:59.342411 kernel: raid6: avx2x2 gen() 25296 MB/s
Jul 15 23:59:59.359508 kernel: raid6: avx2x1 gen() 23808 MB/s
Jul 15 23:59:59.359553 kernel: raid6: using algorithm avx2x2 gen() 25296 MB/s
Jul 15 23:59:59.377501 kernel: raid6: .... xor() 19166 MB/s, rmw enabled
Jul 15 23:59:59.377530 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 23:59:59.446415 kernel: xor: automatically using best checksumming function avx
Jul 15 23:59:59.616432 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:59:59.626685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:59:59.631012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:59:59.673799 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 15 23:59:59.681270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:59:59.685536 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:59:59.728277 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Jul 15 23:59:59.760115 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:59:59.761806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:59:59.847869 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:59:59.850337 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:59:59.894408 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 15 23:59:59.901903 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 23:59:59.909606 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:59:59.909658 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 23:59:59.909677 kernel: GPT:9289727 != 19775487
Jul 15 23:59:59.909687 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:59:59.909697 kernel: GPT:9289727 != 19775487
Jul 15 23:59:59.909707 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:59:59.909716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:59:59.913411 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 15 23:59:59.919406 kernel: AES CTR mode by8 optimization enabled
Jul 15 23:59:59.927443 kernel: libata version 3.00 loaded.
Jul 15 23:59:59.933950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:59:59.934087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:59:59.936857 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:59:59.940151 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:59:59.944277 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:59:59.957401 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 23:59:59.960417 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 23:59:59.964269 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 15 23:59:59.964566 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 15 23:59:59.964745 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 23:59:59.971066 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 23:59:59.980400 kernel: scsi host0: ahci
Jul 15 23:59:59.981396 kernel: scsi host1: ahci
Jul 15 23:59:59.981642 kernel: scsi host2: ahci
Jul 15 23:59:59.983410 kernel: scsi host3: ahci
Jul 15 23:59:59.983652 kernel: scsi host4: ahci
Jul 15 23:59:59.984449 kernel: scsi host5: ahci
Jul 15 23:59:59.984647 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 15 23:59:59.984663 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 15 23:59:59.984683 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 15 23:59:59.984697 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 15 23:59:59.984710 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 15 23:59:59.984724 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 15 23:59:59.990827 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 16 00:00:00.014896 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 16 00:00:00.024287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 16 00:00:00.036436 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 16 00:00:00.046441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 16 00:00:00.048056 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 16 00:00:00.084083 disk-uuid[631]: Primary Header is updated.
Jul 16 00:00:00.084083 disk-uuid[631]: Secondary Entries is updated.
Jul 16 00:00:00.084083 disk-uuid[631]: Secondary Header is updated.
Jul 16 00:00:00.089478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 16 00:00:00.095415 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 16 00:00:00.292179 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 16 00:00:00.292265 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 16 00:00:00.292282 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 16 00:00:00.293412 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 16 00:00:00.294426 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 16 00:00:00.295419 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 16 00:00:00.296475 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 16 00:00:00.296513 kernel: ata3.00: applying bridge limits
Jul 16 00:00:00.297781 kernel: ata3.00: configured for UDMA/100
Jul 16 00:00:00.298412 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 16 00:00:00.360410 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 16 00:00:00.360738 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 16 00:00:00.386406 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 16 00:00:00.667943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 16 00:00:00.669224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 16 00:00:00.671454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 16 00:00:00.673916 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 16 00:00:00.675965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 16 00:00:00.697216 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 16 00:00:01.095427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 16 00:00:01.095972 disk-uuid[632]: The operation has completed successfully.
Jul 16 00:00:01.126722 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 16 00:00:01.126838 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 16 00:00:01.166701 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 16 00:00:01.194592 sh[661]: Success
Jul 16 00:00:01.212735 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 16 00:00:01.212825 kernel: device-mapper: uevent: version 1.0.3
Jul 16 00:00:01.212843 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 16 00:00:01.225470 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 16 00:00:01.265877 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 16 00:00:01.268757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 16 00:00:01.286110 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 16 00:00:01.295648 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 16 00:00:01.295680 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (673)
Jul 16 00:00:01.297099 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e
Jul 16 00:00:01.297128 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:00:01.298754 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 16 00:00:01.304522 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 16 00:00:01.306208 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 16 00:00:01.307866 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 16 00:00:01.308920 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 16 00:00:01.310811 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 16 00:00:01.340416 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707)
Jul 16 00:00:01.343300 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:00:01.343329 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:00:01.343340 kernel: BTRFS info (device vda6): using free-space-tree
Jul 16 00:00:01.351407 kernel: BTRFS info (device vda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:00:01.352911 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 16 00:00:01.354475 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 16 00:00:01.467457 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 16 00:00:01.475477 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 16 00:00:01.531120 systemd-networkd[847]: lo: Link UP
Jul 16 00:00:01.531130 systemd-networkd[847]: lo: Gained carrier
Jul 16 00:00:01.533591 systemd-networkd[847]: Enumeration completed
Jul 16 00:00:01.533713 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 16 00:00:01.534098 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 16 00:00:01.534102 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 16 00:00:01.535279 systemd-networkd[847]: eth0: Link UP
Jul 16 00:00:01.535283 systemd-networkd[847]: eth0: Gained carrier
Jul 16 00:00:01.535292 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 16 00:00:01.556398 systemd[1]: Reached target network.target - Network.
Jul 16 00:00:01.572450 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 16 00:00:01.574133 ignition[751]: Ignition 2.21.0
Jul 16 00:00:01.574145 ignition[751]: Stage: fetch-offline
Jul 16 00:00:01.574194 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Jul 16 00:00:01.574203 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 16 00:00:01.574313 ignition[751]: parsed url from cmdline: ""
Jul 16 00:00:01.574316 ignition[751]: no config URL provided
Jul 16 00:00:01.574322 ignition[751]: reading system config file "/usr/lib/ignition/user.ign"
Jul 16 00:00:01.574330 ignition[751]: no config at "/usr/lib/ignition/user.ign"
Jul 16 00:00:01.574359 ignition[751]: op(1): [started] loading QEMU firmware config module
Jul 16 00:00:01.574364 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 16 00:00:01.586579 ignition[751]: op(1): [finished] loading QEMU firmware config module
Jul 16 00:00:01.627250 ignition[751]: parsing config with SHA512: 9035ba81c69a99a4984328db623e39b5b4d7695718a69ccba9f5c47cff3bb12999e242aec76f532bc24f4885660cf12b8f4ba0311583c5bf685c235ac54262e4
Jul 16 00:00:01.638362 unknown[751]: fetched base config from "system"
Jul 16 00:00:01.638395 unknown[751]: fetched user config from "qemu"
Jul 16 00:00:01.638868 ignition[751]: fetch-offline: fetch-offline passed
Jul 16 00:00:01.638970 ignition[751]: Ignition finished successfully
Jul 16 00:00:01.642331 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 16 00:00:01.644207 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 16 00:00:01.645111 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 16 00:00:01.742439 ignition[855]: Ignition 2.21.0
Jul 16 00:00:01.742455 ignition[855]: Stage: kargs
Jul 16 00:00:01.743655 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Jul 16 00:00:01.743675 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 16 00:00:01.744684 ignition[855]: kargs: kargs passed
Jul 16 00:00:01.744740 ignition[855]: Ignition finished successfully
Jul 16 00:00:01.749526 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 16 00:00:01.751897 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 16 00:00:01.838530 ignition[863]: Ignition 2.21.0
Jul 16 00:00:01.838543 ignition[863]: Stage: disks
Jul 16 00:00:01.838672 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Jul 16 00:00:01.838681 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 16 00:00:01.841622 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 16 00:00:01.839411 ignition[863]: disks: disks passed
Jul 16 00:00:01.844482 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 16 00:00:01.839459 ignition[863]: Ignition finished successfully
Jul 16 00:00:01.846096 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 16 00:00:01.848365 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 16 00:00:01.850715 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 16 00:00:01.851883 systemd[1]: Reached target basic.target - Basic System.
Jul 16 00:00:01.855030 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 16 00:00:01.890869 systemd-fsck[873]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 16 00:00:01.909063 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 16 00:00:01.912151 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 16 00:00:02.039424 kernel: EXT4-fs (vda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none.
Jul 16 00:00:02.040478 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 16 00:00:02.042090 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 16 00:00:02.044984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 16 00:00:02.046936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 16 00:00:02.048180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 16 00:00:02.048231 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 16 00:00:02.048258 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 16 00:00:02.062659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 16 00:00:02.064770 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 16 00:00:02.070317 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881)
Jul 16 00:00:02.070354 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:00:02.070368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:00:02.071229 kernel: BTRFS info (device vda6): using free-space-tree
Jul 16 00:00:02.076114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 16 00:00:02.120058 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory
Jul 16 00:00:02.126108 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory
Jul 16 00:00:02.131057 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory
Jul 16 00:00:02.135787 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 16 00:00:02.239675 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 16 00:00:02.242778 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 16 00:00:02.244778 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 16 00:00:02.264409 kernel: BTRFS info (device vda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:00:02.278977 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 16 00:00:02.294660 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 16 00:00:02.308071 ignition[997]: INFO : Ignition 2.21.0
Jul 16 00:00:02.308071 ignition[997]: INFO : Stage: mount
Jul 16 00:00:02.309720 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 16 00:00:02.309720 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 16 00:00:02.312301 ignition[997]: INFO : mount: mount passed
Jul 16 00:00:02.313048 ignition[997]: INFO : Ignition finished successfully
Jul 16 00:00:02.316202 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 16 00:00:02.318228 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 16 00:00:02.347763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 16 00:00:02.387139 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009)
Jul 16 00:00:02.387179 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:00:02.387191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:00:02.388596 kernel: BTRFS info (device vda6): using free-space-tree
Jul 16 00:00:02.392306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 16 00:00:02.435488 ignition[1026]: INFO : Ignition 2.21.0
Jul 16 00:00:02.435488 ignition[1026]: INFO : Stage: files
Jul 16 00:00:02.437359 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 16 00:00:02.437359 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 16 00:00:02.442395 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Jul 16 00:00:02.444255 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 16 00:00:02.444255 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 16 00:00:02.447321 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 16 00:00:02.447321 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 16 00:00:02.450669 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 16 00:00:02.450669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 16 00:00:02.450669 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 16 00:00:02.447529 unknown[1026]: wrote ssh authorized keys file for user: core
Jul 16 00:00:02.538005 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 16 00:00:02.983805 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 16 00:00:02.983805 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 16 00:00:02.987757 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 16 00:00:02.987757 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 16 00:00:02.991302 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 16 00:00:02.993038 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 16 00:00:02.994846 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 16 00:00:02.996539 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 16 00:00:02.998286 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 16 00:00:03.003810 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 16 00:00:03.005678 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 16 00:00:03.007369 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 16 00:00:03.012532 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 16 00:00:03.012532 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 16 00:00:03.017199 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 16 00:00:03.156626 systemd-networkd[847]: eth0: Gained IPv6LL
Jul 16 00:00:03.383761 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 16 00:00:03.848046 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 16 00:00:03.848046 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 16 00:00:03.852851 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 16 00:00:03.854933 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 16 00:00:03.878141 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 16 00:00:03.883848 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 16 00:00:03.885403 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 16 00:00:03.885403 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 16 00:00:03.885403 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 16 00:00:03.890974 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 16 00:00:03.890974 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 16 00:00:03.890974 ignition[1026]: INFO : files: files passed
Jul 16 00:00:03.890974 ignition[1026]: INFO : Ignition finished successfully
Jul 16 00:00:03.891820 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 16 00:00:03.895811 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 16 00:00:03.897852 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 16 00:00:03.928267 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 16 00:00:03.928707 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 16 00:00:03.931753 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory Jul 16 00:00:03.936047 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:00:03.937975 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:00:03.939617 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:00:03.942826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:00:03.945556 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 16 00:00:03.948165 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 16 00:00:04.042714 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 16 00:00:04.042886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 16 00:00:04.044137 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 16 00:00:04.044863 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 16 00:00:04.045289 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 16 00:00:04.046437 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 16 00:00:04.065516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:00:04.069972 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 16 00:00:04.112123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:00:04.112884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:00:04.113357 systemd[1]: Stopped target timers.target - Timer Units. 
Jul 16 00:00:04.118735 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 16 00:00:04.118897 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:00:04.122082 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 16 00:00:04.122453 systemd[1]: Stopped target basic.target - Basic System. Jul 16 00:00:04.123014 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 16 00:00:04.123416 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:00:04.130766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 16 00:00:04.131331 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:00:04.131912 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 16 00:00:04.132308 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 16 00:00:04.132850 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 16 00:00:04.133158 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 16 00:00:04.133746 systemd[1]: Stopped target swap.target - Swaps. Jul 16 00:00:04.134122 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 16 00:00:04.134239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:00:04.149094 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:00:04.149897 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:00:04.150193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 16 00:00:04.154800 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:00:04.155512 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 16 00:00:04.155660 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 16 00:00:04.159439 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 16 00:00:04.159575 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:00:04.160089 systemd[1]: Stopped target paths.target - Path Units. Jul 16 00:00:04.160358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 16 00:00:04.169523 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:00:04.173028 systemd[1]: Stopped target slices.target - Slice Units. Jul 16 00:00:04.173390 systemd[1]: Stopped target sockets.target - Socket Units. Jul 16 00:00:04.175270 systemd[1]: iscsid.socket: Deactivated successfully. Jul 16 00:00:04.175389 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:00:04.177361 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 16 00:00:04.177459 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:00:04.177959 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 16 00:00:04.178080 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:00:04.181369 systemd[1]: ignition-files.service: Deactivated successfully. Jul 16 00:00:04.181950 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 16 00:00:04.186279 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 16 00:00:04.187735 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 16 00:00:04.190114 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 16 00:00:04.190277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:00:04.193829 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 16 00:00:04.194018 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 16 00:00:04.200891 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 16 00:00:04.201037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 16 00:00:04.223702 ignition[1082]: INFO : Ignition 2.21.0 Jul 16 00:00:04.223702 ignition[1082]: INFO : Stage: umount Jul 16 00:00:04.234587 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:00:04.234587 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 16 00:00:04.234587 ignition[1082]: INFO : umount: umount passed Jul 16 00:00:04.234587 ignition[1082]: INFO : Ignition finished successfully Jul 16 00:00:04.224199 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 16 00:00:04.229697 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 16 00:00:04.229816 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 16 00:00:04.233435 systemd[1]: Stopped target network.target - Network. Jul 16 00:00:04.235274 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 16 00:00:04.235340 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 16 00:00:04.235843 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 16 00:00:04.236048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 16 00:00:04.239354 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 16 00:00:04.239432 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 16 00:00:04.239836 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 16 00:00:04.239876 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 16 00:00:04.240265 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 16 00:00:04.241061 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 16 00:00:04.251244 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 16 00:00:04.251444 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 16 00:00:04.256097 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 16 00:00:04.256838 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 16 00:00:04.256921 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:00:04.262265 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:00:04.269633 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 16 00:00:04.269771 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 16 00:00:04.273428 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 16 00:00:04.273635 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 16 00:00:04.275878 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 16 00:00:04.275935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:00:04.277577 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 16 00:00:04.315939 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 16 00:00:04.316049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:00:04.316390 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 16 00:00:04.316437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:00:04.321402 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 16 00:00:04.321448 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 16 00:00:04.322024 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:00:04.323345 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 16 00:00:04.339498 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 16 00:00:04.339680 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 16 00:00:04.352340 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 16 00:00:04.352590 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:00:04.353399 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 16 00:00:04.353461 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 16 00:00:04.356711 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 16 00:00:04.356756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:00:04.358741 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 16 00:00:04.358800 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 16 00:00:04.362828 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 16 00:00:04.362887 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 16 00:00:04.366010 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 16 00:00:04.366070 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:00:04.370189 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 16 00:00:04.370806 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 16 00:00:04.370867 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:00:04.375463 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 16 00:00:04.375522 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:00:04.379085 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jul 16 00:00:04.379146 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:00:04.383255 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 16 00:00:04.383306 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:00:04.383909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:00:04.383961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:00:04.392314 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 16 00:00:04.392459 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 16 00:00:04.433393 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 16 00:00:04.433574 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 16 00:00:04.434522 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 16 00:00:04.435051 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 16 00:00:04.435120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 16 00:00:04.436654 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 16 00:00:04.457938 systemd[1]: Switching root. Jul 16 00:00:04.508267 systemd-journald[219]: Journal stopped Jul 16 00:00:06.281278 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). 
Jul 16 00:00:06.281341 kernel: SELinux: policy capability network_peer_controls=1 Jul 16 00:00:06.281358 kernel: SELinux: policy capability open_perms=1 Jul 16 00:00:06.281370 kernel: SELinux: policy capability extended_socket_class=1 Jul 16 00:00:06.281408 kernel: SELinux: policy capability always_check_network=0 Jul 16 00:00:06.281422 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 16 00:00:06.281440 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 16 00:00:06.281452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 16 00:00:06.281463 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 16 00:00:06.281474 kernel: SELinux: policy capability userspace_initial_context=0 Jul 16 00:00:06.281490 kernel: audit: type=1403 audit(1752624005.264:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 16 00:00:06.281510 systemd[1]: Successfully loaded SELinux policy in 53.527ms. Jul 16 00:00:06.281534 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.856ms. Jul 16 00:00:06.281547 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:00:06.281560 systemd[1]: Detected virtualization kvm. Jul 16 00:00:06.281572 systemd[1]: Detected architecture x86-64. Jul 16 00:00:06.281585 systemd[1]: Detected first boot. Jul 16 00:00:06.281598 systemd[1]: Initializing machine ID from VM UUID. Jul 16 00:00:06.281610 zram_generator::config[1127]: No configuration found. 
Jul 16 00:00:06.281625 kernel: Guest personality initialized and is inactive Jul 16 00:00:06.281638 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 16 00:00:06.281652 kernel: Initialized host personality Jul 16 00:00:06.281665 kernel: NET: Registered PF_VSOCK protocol family Jul 16 00:00:06.281678 systemd[1]: Populated /etc with preset unit settings. Jul 16 00:00:06.281691 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 16 00:00:06.281703 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 16 00:00:06.281715 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 16 00:00:06.281727 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 16 00:00:06.281742 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 16 00:00:06.281754 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 16 00:00:06.281766 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 16 00:00:06.281778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 16 00:00:06.281791 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 16 00:00:06.281803 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 16 00:00:06.281819 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 16 00:00:06.281831 systemd[1]: Created slice user.slice - User and Session Slice. Jul 16 00:00:06.281851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:00:06.281867 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:00:06.281879 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 16 00:00:06.281891 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 16 00:00:06.281904 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 16 00:00:06.281916 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:00:06.281928 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 16 00:00:06.281942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:00:06.281958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:00:06.281971 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 16 00:00:06.281985 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 16 00:00:06.281998 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 16 00:00:06.282010 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 16 00:00:06.282023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:00:06.282035 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:00:06.282047 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:00:06.282059 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:00:06.282074 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 16 00:00:06.282088 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 16 00:00:06.282100 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 16 00:00:06.282112 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:00:06.282125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:00:06.282137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 16 00:00:06.282149 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 16 00:00:06.282161 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 16 00:00:06.282173 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 16 00:00:06.282185 systemd[1]: Mounting media.mount - External Media Directory... Jul 16 00:00:06.282199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:00:06.282217 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 16 00:00:06.282229 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 16 00:00:06.282241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 16 00:00:06.282253 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 16 00:00:06.282266 systemd[1]: Reached target machines.target - Containers. Jul 16 00:00:06.282278 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 16 00:00:06.282290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:00:06.282304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:00:06.282316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 16 00:00:06.282328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:00:06.282340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 16 00:00:06.282352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:00:06.282364 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jul 16 00:00:06.282470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 16 00:00:06.282483 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 16 00:00:06.282498 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 16 00:00:06.282510 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 16 00:00:06.282522 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 16 00:00:06.282534 systemd[1]: Stopped systemd-fsck-usr.service. Jul 16 00:00:06.282547 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 16 00:00:06.282559 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:00:06.282572 kernel: fuse: init (API version 7.41) Jul 16 00:00:06.282583 kernel: loop: module loaded Jul 16 00:00:06.282595 kernel: ACPI: bus type drm_connector registered Jul 16 00:00:06.282609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:00:06.282628 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 16 00:00:06.282676 systemd-journald[1198]: Collecting audit messages is disabled. Jul 16 00:00:06.282705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 16 00:00:06.282718 systemd-journald[1198]: Journal started Jul 16 00:00:06.282744 systemd-journald[1198]: Runtime Journal (/run/log/journal/07bd37d82bc24d10bf76528501cc10b0) is 6M, max 48.6M, 42.5M free. Jul 16 00:00:05.845237 systemd[1]: Queued start job for default target multi-user.target. Jul 16 00:00:05.868174 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jul 16 00:00:05.869041 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 16 00:00:06.289408 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 16 00:00:06.293444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 16 00:00:06.296441 systemd[1]: verity-setup.service: Deactivated successfully. Jul 16 00:00:06.296520 systemd[1]: Stopped verity-setup.service. Jul 16 00:00:06.299519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:00:06.304730 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:00:06.306012 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 16 00:00:06.307210 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 16 00:00:06.308568 systemd[1]: Mounted media.mount - External Media Directory. Jul 16 00:00:06.309675 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 16 00:00:06.311017 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 16 00:00:06.312325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 16 00:00:06.313721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:00:06.315452 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 16 00:00:06.315716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 16 00:00:06.317289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 00:00:06.317520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 16 00:00:06.319049 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 00:00:06.319269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 16 00:00:06.320715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 00:00:06.320943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 16 00:00:06.322564 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 16 00:00:06.322780 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 16 00:00:06.324246 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 00:00:06.324481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 16 00:00:06.325991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:00:06.327570 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:00:06.329288 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 16 00:00:06.331037 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 16 00:00:06.349823 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 16 00:00:06.353352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 16 00:00:06.356020 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 16 00:00:06.357250 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 16 00:00:06.357364 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 16 00:00:06.359956 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 16 00:00:06.365499 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 16 00:00:06.366755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 16 00:00:06.510786 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 16 00:00:06.513513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 16 00:00:06.514724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 00:00:06.515899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 16 00:00:06.517098 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 16 00:00:06.524041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:00:06.528504 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 16 00:00:06.533525 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 16 00:00:06.537275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:00:06.538578 systemd-journald[1198]: Time spent on flushing to /var/log/journal/07bd37d82bc24d10bf76528501cc10b0 is 25.429ms for 976 entries. Jul 16 00:00:06.538578 systemd-journald[1198]: System Journal (/var/log/journal/07bd37d82bc24d10bf76528501cc10b0) is 8M, max 195.6M, 187.6M free. Jul 16 00:00:06.629691 systemd-journald[1198]: Received client request to flush runtime journal. Jul 16 00:00:06.629733 kernel: loop0: detected capacity change from 0 to 113872 Jul 16 00:00:06.629758 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 16 00:00:06.540067 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 16 00:00:06.541614 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 16 00:00:06.596725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 16 00:00:06.603891 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 16 00:00:06.631440 kernel: loop1: detected capacity change from 0 to 221472 Jul 16 00:00:06.605517 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 16 00:00:06.608483 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 16 00:00:06.631888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 16 00:00:06.633741 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 16 00:00:06.642630 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jul 16 00:00:06.642647 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jul 16 00:00:06.651610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:00:06.654292 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 16 00:00:06.659731 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 16 00:00:06.669602 kernel: loop2: detected capacity change from 0 to 146240 Jul 16 00:00:06.701961 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 16 00:00:06.708793 kernel: loop3: detected capacity change from 0 to 113872 Jul 16 00:00:06.709535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 16 00:00:06.722429 kernel: loop4: detected capacity change from 0 to 221472 Jul 16 00:00:06.736405 kernel: loop5: detected capacity change from 0 to 146240 Jul 16 00:00:06.740846 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 16 00:00:06.741244 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 16 00:00:06.750605 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 16 00:00:06.756175 (sd-merge)[1268]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 16 00:00:06.756954 (sd-merge)[1268]: Merged extensions into '/usr'. Jul 16 00:00:06.762637 systemd[1]: Reload requested from client PID 1238 ('systemd-sysext') (unit systemd-sysext.service)... Jul 16 00:00:06.762660 systemd[1]: Reloading... Jul 16 00:00:06.862409 zram_generator::config[1300]: No configuration found. Jul 16 00:00:07.020788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:00:07.122963 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 16 00:00:07.123679 systemd[1]: Reloading finished in 359 ms. Jul 16 00:00:07.148480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 16 00:00:07.162732 ldconfig[1233]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 16 00:00:07.163065 systemd[1]: Starting ensure-sysext.service... Jul 16 00:00:07.165942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:00:07.190704 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 16 00:00:07.198879 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Jul 16 00:00:07.198904 systemd[1]: Reloading... Jul 16 00:00:07.223549 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 16 00:00:07.224361 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 16 00:00:07.225162 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 16 00:00:07.226898 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 16 00:00:07.228398 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 16 00:00:07.228982 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Jul 16 00:00:07.229095 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Jul 16 00:00:07.235895 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
Jul 16 00:00:07.235917 systemd-tmpfiles[1334]: Skipping /boot
Jul 16 00:00:07.262433 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot.
Jul 16 00:00:07.262458 systemd-tmpfiles[1334]: Skipping /boot
Jul 16 00:00:07.263398 zram_generator::config[1359]: No configuration found.
Jul 16 00:00:07.428785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 16 00:00:07.516290 systemd[1]: Reloading finished in 316 ms.
Jul 16 00:00:07.542279 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 16 00:00:07.557330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 16 00:00:07.566837 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 16 00:00:07.569364 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 16 00:00:07.571752 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 16 00:00:07.580467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 16 00:00:07.585228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 16 00:00:07.588203 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 16 00:00:07.592570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:00:07.592742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 16 00:00:07.595576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 16 00:00:07.598668 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 16 00:00:07.604758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 16 00:00:07.606531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 16 00:00:07.606639 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 16 00:00:07.609729 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 16 00:00:07.610849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:00:07.612102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 00:00:07.613196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 16 00:00:07.617719 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 16 00:00:07.618179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 16 00:00:07.620288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 16 00:00:07.620920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 16 00:00:07.627944 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 16 00:00:07.635738 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 16 00:00:07.638482 systemd-udevd[1405]: Using default interface naming scheme 'v255'.
Jul 16 00:00:07.640719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:00:07.640971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 16 00:00:07.642586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 16 00:00:07.645360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 16 00:00:07.651708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 16 00:00:07.657850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 16 00:00:07.659067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 16 00:00:07.659176 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 16 00:00:07.661632 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 16 00:00:07.663129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:00:07.665709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 00:00:07.665935 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 16 00:00:07.668150 augenrules[1440]: No rules
Jul 16 00:00:07.667597 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 16 00:00:07.667813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 16 00:00:07.669406 systemd[1]: Finished ensure-sysext.service.
Jul 16 00:00:07.671324 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 16 00:00:07.671608 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 16 00:00:07.673187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 16 00:00:07.673423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 16 00:00:07.680753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 16 00:00:07.685229 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 16 00:00:07.687461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 16 00:00:07.698540 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 16 00:00:07.700035 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 16 00:00:07.701898 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 16 00:00:07.706013 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 16 00:00:07.706233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 16 00:00:07.709002 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 16 00:00:07.718903 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 16 00:00:07.718938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 16 00:00:07.750000 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 16 00:00:07.826463 systemd-resolved[1404]: Positive Trust Anchors:
Jul 16 00:00:07.826484 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 16 00:00:07.826516 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 16 00:00:07.830781 systemd-resolved[1404]: Defaulting to hostname 'linux'.
Jul 16 00:00:07.834485 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 16 00:00:07.835883 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 16 00:00:07.862629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 16 00:00:07.866228 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 16 00:00:07.872397 kernel: mousedev: PS/2 mouse device common for all mice
Jul 16 00:00:07.875409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 16 00:00:07.880484 kernel: ACPI: button: Power Button [PWRF]
Jul 16 00:00:07.910161 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 16 00:00:07.912736 systemd-networkd[1457]: lo: Link UP
Jul 16 00:00:07.912750 systemd-networkd[1457]: lo: Gained carrier
Jul 16 00:00:07.915416 systemd-networkd[1457]: Enumeration completed
Jul 16 00:00:07.915841 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 16 00:00:07.917170 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 16 00:00:07.917189 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 16 00:00:07.917298 systemd[1]: Reached target network.target - Network.
Jul 16 00:00:07.919177 systemd-networkd[1457]: eth0: Link UP
Jul 16 00:00:07.919656 systemd-networkd[1457]: eth0: Gained carrier
Jul 16 00:00:07.919696 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 16 00:00:07.920534 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 16 00:00:07.923146 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 16 00:00:07.934426 systemd-networkd[1457]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 16 00:00:07.963071 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 16 00:00:07.964535 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 16 00:00:07.965864 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 16 00:00:07.967011 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 16 00:00:07.968460 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 16 00:00:07.969214 systemd-timesyncd[1467]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 16 00:00:07.969276 systemd-timesyncd[1467]: Initial clock synchronization to Wed 2025-07-16 00:00:08.150824 UTC.
Jul 16 00:00:07.970762 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 16 00:00:07.972497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 16 00:00:07.974553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 16 00:00:07.974586 systemd[1]: Reached target paths.target - Path Units.
Jul 16 00:00:07.975515 systemd[1]: Reached target time-set.target - System Time Set.
Jul 16 00:00:07.976708 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 16 00:00:07.979028 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 16 00:00:08.044445 systemd[1]: Reached target timers.target - Timer Units.
Jul 16 00:00:08.049197 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 16 00:00:08.052070 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 16 00:00:08.055725 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 16 00:00:08.057191 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 16 00:00:08.058594 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 16 00:00:08.061204 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 16 00:00:08.061548 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 16 00:00:08.081226 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 16 00:00:08.106368 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 16 00:00:08.109032 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 16 00:00:08.119059 systemd[1]: Reached target sockets.target - Socket Units.
Jul 16 00:00:08.120112 systemd[1]: Reached target basic.target - Basic System.
Jul 16 00:00:08.121116 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 16 00:00:08.121145 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 16 00:00:08.124505 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 16 00:00:08.131638 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 16 00:00:08.138573 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 16 00:00:08.144649 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 16 00:00:08.150598 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 16 00:00:08.151793 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 16 00:00:08.161328 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 16 00:00:08.164997 jq[1527]: false
Jul 16 00:00:08.168670 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 16 00:00:08.173272 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 16 00:00:08.176133 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 16 00:00:08.186936 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 16 00:00:08.194638 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 16 00:00:08.195750 extend-filesystems[1528]: Found /dev/vda6
Jul 16 00:00:08.195602 oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Jul 16 00:00:08.199276 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Jul 16 00:00:08.197221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 16 00:00:08.198601 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 16 00:00:08.199084 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 16 00:00:08.200262 systemd[1]: Starting update-engine.service - Update Engine...
Jul 16 00:00:08.204591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 16 00:00:08.207068 kernel: kvm_amd: TSC scaling supported
Jul 16 00:00:08.207115 kernel: kvm_amd: Nested Virtualization enabled
Jul 16 00:00:08.207138 kernel: kvm_amd: Nested Paging enabled
Jul 16 00:00:08.207158 kernel: kvm_amd: LBR virtualization supported
Jul 16 00:00:08.207179 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 16 00:00:08.207211 kernel: kvm_amd: Virtual GIF supported
Jul 16 00:00:08.211667 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting
Jul 16 00:00:08.211656 oslogin_cache_refresh[1529]: Failure getting users, quitting
Jul 16 00:00:08.211970 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 16 00:00:08.211970 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache
Jul 16 00:00:08.211680 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 16 00:00:08.211737 oslogin_cache_refresh[1529]: Refreshing group entry cache
Jul 16 00:00:08.221940 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting
Jul 16 00:00:08.221940 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 16 00:00:08.221920 oslogin_cache_refresh[1529]: Failure getting groups, quitting
Jul 16 00:00:08.221937 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 16 00:00:08.222888 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 16 00:00:08.224967 jq[1543]: true
Jul 16 00:00:08.225089 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 16 00:00:08.225449 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 16 00:00:08.226326 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 16 00:00:08.227664 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 16 00:00:08.229296 systemd[1]: motdgen.service: Deactivated successfully.
Jul 16 00:00:08.229810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 16 00:00:08.230851 extend-filesystems[1528]: Found /dev/vda9
Jul 16 00:00:08.234289 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 16 00:00:08.235747 extend-filesystems[1528]: Checking size of /dev/vda9
Jul 16 00:00:08.250910 update_engine[1540]: I20250716 00:00:08.250167 1540 main.cc:92] Flatcar Update Engine starting
Jul 16 00:00:08.252300 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 16 00:00:08.295783 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 16 00:00:08.305465 tar[1552]: linux-amd64/helm
Jul 16 00:00:08.307139 jq[1556]: true
Jul 16 00:00:08.383736 kernel: EDAC MC: Ver: 3.0.0
Jul 16 00:00:08.386603 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 16 00:00:08.386632 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 16 00:00:08.387896 dbus-daemon[1525]: [system] SELinux support is enabled
Jul 16 00:00:08.388133 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 16 00:00:08.395617 update_engine[1540]: I20250716 00:00:08.395534 1540 update_check_scheduler.cc:74] Next update check in 3m57s
Jul 16 00:00:08.396824 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 16 00:00:08.396861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 16 00:00:08.397331 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 16 00:00:08.397349 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 16 00:00:08.397825 systemd-logind[1537]: New seat seat0.
Jul 16 00:00:08.400261 systemd[1]: Started update-engine.service - Update Engine.
Jul 16 00:00:08.405643 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 16 00:00:08.408273 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 16 00:00:08.430168 extend-filesystems[1528]: Resized partition /dev/vda9
Jul 16 00:00:08.466588 extend-filesystems[1592]: resize2fs 1.47.2 (1-Jan-2025)
Jul 16 00:00:08.593516 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 16 00:00:08.630082 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 16 00:00:08.704123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 16 00:00:08.916397 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 16 00:00:08.941562 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 16 00:00:08.945182 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 16 00:00:08.995430 systemd[1]: issuegen.service: Deactivated successfully.
Jul 16 00:00:08.995760 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 16 00:00:08.999950 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 16 00:00:10.114548 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 16 00:00:09.092905 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 16 00:00:10.115023 containerd[1557]: time="2025-07-16T00:00:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 16 00:00:09.097033 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 16 00:00:10.115522 extend-filesystems[1592]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 16 00:00:10.115522 extend-filesystems[1592]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 16 00:00:10.115522 extend-filesystems[1592]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 16 00:00:09.099561 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 16 00:00:10.122738 containerd[1557]: time="2025-07-16T00:00:10.120185026Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 16 00:00:10.122780 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Jul 16 00:00:09.100871 systemd[1]: Reached target getty.target - Login Prompts.
Jul 16 00:00:10.141770 tar[1552]: linux-amd64/LICENSE
Jul 16 00:00:10.141770 tar[1552]: linux-amd64/README.md
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.137753987Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.523µs"
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.137806664Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.137831474Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138082049Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138101589Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138139873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138227380Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138243424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138636007Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138657198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138676076Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142052 containerd[1557]: time="2025-07-16T00:00:10.138687502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 16 00:00:09.236583 systemd-networkd[1457]: eth0: Gained IPv6LL
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.138820623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.139102510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.139139521Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.139153169Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.139192423Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.139515286Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 16 00:00:10.142589 containerd[1557]: time="2025-07-16T00:00:10.139596198Z" level=info msg="metadata content store policy set" policy=shared
Jul 16 00:00:09.240568 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 16 00:00:09.242770 systemd[1]: Reached target network-online.target - Network is Online.
Jul 16 00:00:09.245830 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 16 00:00:09.248749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 16 00:00:09.264045 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 16 00:00:09.285761 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 16 00:00:09.286106 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 16 00:00:09.287681 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 16 00:00:10.117960 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 16 00:00:10.118277 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 16 00:00:10.122780 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Jul 16 00:00:10.143094 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 16 00:00:10.154120 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 16 00:00:10.339636 bash[1585]: Updated "/home/core/.ssh/authorized_keys"
Jul 16 00:00:10.341886 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 16 00:00:10.344021 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 16 00:00:10.576638 containerd[1557]: time="2025-07-16T00:00:10.576519004Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 16 00:00:10.576638 containerd[1557]: time="2025-07-16T00:00:10.576642319Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576662980Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576678627Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576696158Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576708482Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576728073Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576742322Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576754839Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576767417Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576778620Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 16 00:00:10.576799 containerd[1557]: time="2025-07-16T00:00:10.576796610Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 16 00:00:10.577062 containerd[1557]: time="2025-07-16T00:00:10.577004405Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 16 00:00:10.577062 containerd[1557]: time="2025-07-16T00:00:10.577032558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 16 00:00:10.577062 containerd[1557]: time="2025-07-16T00:00:10.577050640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 16 00:00:10.577135 containerd[1557]: time="2025-07-16T00:00:10.577063881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 16 00:00:10.577135 containerd[1557]: time="2025-07-16T00:00:10.577080700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 16 00:00:10.577135 containerd[1557]: time="2025-07-16T00:00:10.577092931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 16 00:00:10.577135 containerd[1557]: time="2025-07-16T00:00:10.577107293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 16 00:00:10.577135 containerd[1557]: time="2025-07-16T00:00:10.577118780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 16 00:00:10.577135 containerd[1557]: time="2025-07-16T00:00:10.577134222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 16 00:00:10.577300 containerd[1557]: time="2025-07-16T00:00:10.577149655Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 16 00:00:10.577300 containerd[1557]: time="2025-07-16T00:00:10.577162131Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 16 00:00:10.577300 containerd[1557]: time="2025-07-16T00:00:10.577271645Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 16 00:00:10.577477 containerd[1557]: time="2025-07-16T00:00:10.577302519Z" level=info msg="Start snapshots syncer"
Jul 16 00:00:10.577477 containerd[1557]: time="2025-07-16T00:00:10.577339122Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 16 00:00:10.577896 containerd[1557]: time="2025-07-16T00:00:10.577836489Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578035365Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578145154Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578269774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578298874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578330860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578346415Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578365648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578382580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578418734Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578461738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578477058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578500604Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578538124Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:00:10.578894 containerd[1557]: time="2025-07-16T00:00:10.578561007Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578573280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578590160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578605164Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578621493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578639697Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 16 
00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578672274Z" level=info msg="runtime interface created" Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578679654Z" level=info msg="created NRI interface" Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578694699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578711293Z" level=info msg="Connect containerd service" Jul 16 00:00:10.579390 containerd[1557]: time="2025-07-16T00:00:10.578752208Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 16 00:00:10.581887 containerd[1557]: time="2025-07-16T00:00:10.581857015Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:00:10.681562 containerd[1557]: time="2025-07-16T00:00:10.681503446Z" level=info msg="Start subscribing containerd event" Jul 16 00:00:10.681702 containerd[1557]: time="2025-07-16T00:00:10.681573421Z" level=info msg="Start recovering state" Jul 16 00:00:10.681702 containerd[1557]: time="2025-07-16T00:00:10.681699224Z" level=info msg="Start event monitor" Jul 16 00:00:10.681753 containerd[1557]: time="2025-07-16T00:00:10.681715879Z" level=info msg="Start cni network conf syncer for default" Jul 16 00:00:10.681753 containerd[1557]: time="2025-07-16T00:00:10.681729426Z" level=info msg="Start streaming server" Jul 16 00:00:10.681753 containerd[1557]: time="2025-07-16T00:00:10.681743981Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 16 00:00:10.681753 containerd[1557]: time="2025-07-16T00:00:10.681751483Z" level=info msg="runtime interface starting up..." 
Jul 16 00:00:10.681866 containerd[1557]: time="2025-07-16T00:00:10.681757955Z" level=info msg="starting plugins..."
Jul 16 00:00:10.681866 containerd[1557]: time="2025-07-16T00:00:10.681771237Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 16 00:00:10.681866 containerd[1557]: time="2025-07-16T00:00:10.681837961Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 16 00:00:10.681947 containerd[1557]: time="2025-07-16T00:00:10.681773133Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 16 00:00:10.682061 containerd[1557]: time="2025-07-16T00:00:10.682027928Z" level=info msg="containerd successfully booted in 1.390731s"
Jul 16 00:00:10.682165 systemd[1]: Started containerd.service - containerd container runtime.
Jul 16 00:00:11.053082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 16 00:00:11.054896 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 16 00:00:11.056200 systemd[1]: Startup finished in 3.066s (kernel) + 6.599s (initrd) + 5.844s (userspace) = 15.510s.
Jul 16 00:00:11.101833 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 16 00:00:11.543278 kubelet[1668]: E0716 00:00:11.543194 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 16 00:00:11.547614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 16 00:00:11.547855 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 16 00:00:11.548291 systemd[1]: kubelet.service: Consumed 1.029s CPU time, 265.8M memory peak.
Jul 16 00:00:12.262367 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 16 00:00:12.263747 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:60114.service - OpenSSH per-connection server daemon (10.0.0.1:60114).
Jul 16 00:00:12.327633 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 60114 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:12.329598 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:12.336867 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 16 00:00:12.338155 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 16 00:00:12.345939 systemd-logind[1537]: New session 1 of user core.
Jul 16 00:00:12.364251 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 16 00:00:12.367739 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 16 00:00:12.392815 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 16 00:00:12.395234 systemd-logind[1537]: New session c1 of user core.
Jul 16 00:00:12.549363 systemd[1685]: Queued start job for default target default.target.
Jul 16 00:00:12.561725 systemd[1685]: Created slice app.slice - User Application Slice.
Jul 16 00:00:12.561753 systemd[1685]: Reached target paths.target - Paths.
Jul 16 00:00:12.561795 systemd[1685]: Reached target timers.target - Timers.
Jul 16 00:00:12.563478 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 16 00:00:12.575589 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 16 00:00:12.575721 systemd[1685]: Reached target sockets.target - Sockets.
Jul 16 00:00:12.575767 systemd[1685]: Reached target basic.target - Basic System.
Jul 16 00:00:12.575806 systemd[1685]: Reached target default.target - Main User Target.
Jul 16 00:00:12.575839 systemd[1685]: Startup finished in 173ms.
Jul 16 00:00:12.576231 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 16 00:00:12.585513 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 16 00:00:12.656222 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:60118.service - OpenSSH per-connection server daemon (10.0.0.1:60118).
Jul 16 00:00:12.726115 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 60118 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:12.728091 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:12.733520 systemd-logind[1537]: New session 2 of user core.
Jul 16 00:00:12.748646 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 16 00:00:12.803597 sshd[1698]: Connection closed by 10.0.0.1 port 60118
Jul 16 00:00:12.803906 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:12.812839 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:60118.service: Deactivated successfully.
Jul 16 00:00:12.814884 systemd[1]: session-2.scope: Deactivated successfully.
Jul 16 00:00:12.815687 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit.
Jul 16 00:00:12.818960 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:60126.service - OpenSSH per-connection server daemon (10.0.0.1:60126).
Jul 16 00:00:12.819767 systemd-logind[1537]: Removed session 2.
Jul 16 00:00:12.866626 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 60126 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:12.868146 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:12.873002 systemd-logind[1537]: New session 3 of user core.
Jul 16 00:00:12.887674 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 16 00:00:12.937298 sshd[1706]: Connection closed by 10.0.0.1 port 60126
Jul 16 00:00:12.937666 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:12.949972 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:60126.service: Deactivated successfully.
Jul 16 00:00:12.951522 systemd[1]: session-3.scope: Deactivated successfully.
Jul 16 00:00:12.952212 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit.
Jul 16 00:00:12.954679 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:60140.service - OpenSSH per-connection server daemon (10.0.0.1:60140).
Jul 16 00:00:12.955403 systemd-logind[1537]: Removed session 3.
Jul 16 00:00:13.006630 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 60140 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:13.008425 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:13.013294 systemd-logind[1537]: New session 4 of user core.
Jul 16 00:00:13.022628 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 16 00:00:13.078630 sshd[1714]: Connection closed by 10.0.0.1 port 60140
Jul 16 00:00:13.078867 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:13.091404 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:60140.service: Deactivated successfully.
Jul 16 00:00:13.093491 systemd[1]: session-4.scope: Deactivated successfully.
Jul 16 00:00:13.094301 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit.
Jul 16 00:00:13.097009 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:60150.service - OpenSSH per-connection server daemon (10.0.0.1:60150).
Jul 16 00:00:13.097797 systemd-logind[1537]: Removed session 4.
Jul 16 00:00:13.151660 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 60150 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:13.153498 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:13.158818 systemd-logind[1537]: New session 5 of user core.
Jul 16 00:00:13.172573 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 16 00:00:13.232662 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 16 00:00:13.232992 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 16 00:00:13.262010 sudo[1723]: pam_unix(sudo:session): session closed for user root
Jul 16 00:00:13.263932 sshd[1722]: Connection closed by 10.0.0.1 port 60150
Jul 16 00:00:13.264297 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:13.283707 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:60150.service: Deactivated successfully.
Jul 16 00:00:13.286217 systemd[1]: session-5.scope: Deactivated successfully.
Jul 16 00:00:13.287476 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit.
Jul 16 00:00:13.291170 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:60152.service - OpenSSH per-connection server daemon (10.0.0.1:60152).
Jul 16 00:00:13.291897 systemd-logind[1537]: Removed session 5.
Jul 16 00:00:13.342210 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 60152 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:13.343899 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:13.348789 systemd-logind[1537]: New session 6 of user core.
Jul 16 00:00:13.359543 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 16 00:00:13.416236 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 16 00:00:13.416664 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 16 00:00:13.815508 sudo[1734]: pam_unix(sudo:session): session closed for user root
Jul 16 00:00:13.822034 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 16 00:00:13.822338 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 16 00:00:13.833502 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 16 00:00:13.880943 augenrules[1756]: No rules
Jul 16 00:00:13.882690 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 16 00:00:13.882977 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 16 00:00:13.884337 sudo[1733]: pam_unix(sudo:session): session closed for user root
Jul 16 00:00:13.885981 sshd[1732]: Connection closed by 10.0.0.1 port 60152
Jul 16 00:00:13.886350 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Jul 16 00:00:13.905358 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:60152.service: Deactivated successfully.
Jul 16 00:00:13.907206 systemd[1]: session-6.scope: Deactivated successfully.
Jul 16 00:00:13.908009 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit.
Jul 16 00:00:13.910866 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:60154.service - OpenSSH per-connection server daemon (10.0.0.1:60154).
Jul 16 00:00:13.911520 systemd-logind[1537]: Removed session 6.
Jul 16 00:00:13.978128 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 60154 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:00:13.979891 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:00:13.985000 systemd-logind[1537]: New session 7 of user core.
Jul 16 00:00:13.998589 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 16 00:00:14.054374 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 16 00:00:14.054800 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 16 00:00:14.728750 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 16 00:00:14.747927 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 16 00:00:15.296459 dockerd[1788]: time="2025-07-16T00:00:15.296347761Z" level=info msg="Starting up"
Jul 16 00:00:15.298686 dockerd[1788]: time="2025-07-16T00:00:15.298658744Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 16 00:00:18.178463 dockerd[1788]: time="2025-07-16T00:00:18.178367605Z" level=info msg="Loading containers: start."
Jul 16 00:00:18.435410 kernel: Initializing XFRM netlink socket
Jul 16 00:00:19.050896 systemd-networkd[1457]: docker0: Link UP
Jul 16 00:00:19.580538 dockerd[1788]: time="2025-07-16T00:00:19.580448779Z" level=info msg="Loading containers: done."
Jul 16 00:00:19.596694 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2123608100-merged.mount: Deactivated successfully.
Jul 16 00:00:19.815317 dockerd[1788]: time="2025-07-16T00:00:19.815236568Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 16 00:00:19.815533 dockerd[1788]: time="2025-07-16T00:00:19.815369699Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 16 00:00:19.815577 dockerd[1788]: time="2025-07-16T00:00:19.815544605Z" level=info msg="Initializing buildkit"
Jul 16 00:00:20.197748 dockerd[1788]: time="2025-07-16T00:00:20.197672380Z" level=info msg="Completed buildkit initialization"
Jul 16 00:00:20.204869 dockerd[1788]: time="2025-07-16T00:00:20.204783744Z" level=info msg="Daemon has completed initialization"
Jul 16 00:00:20.205031 dockerd[1788]: time="2025-07-16T00:00:20.204922736Z" level=info msg="API listen on /run/docker.sock"
Jul 16 00:00:20.205254 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 16 00:00:21.766363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 16 00:00:21.768242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 16 00:00:21.991022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 16 00:00:21.996114 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 16 00:00:22.041424 kubelet[2007]: E0716 00:00:22.041262 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 16 00:00:22.048030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 16 00:00:22.048273 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 16 00:00:22.048767 systemd[1]: kubelet.service: Consumed 228ms CPU time, 111.4M memory peak.
Jul 16 00:00:22.062057 containerd[1557]: time="2025-07-16T00:00:22.062008948Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Jul 16 00:00:26.141508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52365578.mount: Deactivated successfully.
Jul 16 00:00:29.326627 containerd[1557]: time="2025-07-16T00:00:29.326564956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:29.327370 containerd[1557]: time="2025-07-16T00:00:29.327332257Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Jul 16 00:00:29.328738 containerd[1557]: time="2025-07-16T00:00:29.328708765Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:29.331242 containerd[1557]: time="2025-07-16T00:00:29.331203260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:29.332149 containerd[1557]: time="2025-07-16T00:00:29.332101154Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 7.270030341s"
Jul 16 00:00:29.332149 containerd[1557]: time="2025-07-16T00:00:29.332149170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Jul 16 00:00:29.332898 containerd[1557]: time="2025-07-16T00:00:29.332857248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Jul 16 00:00:31.307633 containerd[1557]: time="2025-07-16T00:00:31.307560424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:31.401899 containerd[1557]: time="2025-07-16T00:00:31.401790408Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Jul 16 00:00:31.432739 containerd[1557]: time="2025-07-16T00:00:31.432689054Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:31.473782 containerd[1557]: time="2025-07-16T00:00:31.473699353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:31.474831 containerd[1557]: time="2025-07-16T00:00:31.474784376Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 2.141877029s"
Jul 16 00:00:31.474905 containerd[1557]: time="2025-07-16T00:00:31.474834733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Jul 16 00:00:31.475600 containerd[1557]: time="2025-07-16T00:00:31.475560877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Jul 16 00:00:32.264779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 16 00:00:32.267006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 16 00:00:32.766040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 16 00:00:32.771775 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 16 00:00:32.836392 kubelet[2081]: E0716 00:00:32.836294 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 16 00:00:32.839993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 16 00:00:32.840188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 16 00:00:32.840570 systemd[1]: kubelet.service: Consumed 231ms CPU time, 111.4M memory peak.
Jul 16 00:00:34.849010 containerd[1557]: time="2025-07-16T00:00:34.848926311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:34.849626 containerd[1557]: time="2025-07-16T00:00:34.849588709Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Jul 16 00:00:34.850738 containerd[1557]: time="2025-07-16T00:00:34.850682289Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:34.853396 containerd[1557]: time="2025-07-16T00:00:34.853314606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:34.854479 containerd[1557]: time="2025-07-16T00:00:34.854439777Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 3.378839287s"
Jul 16 00:00:34.854545 containerd[1557]: time="2025-07-16T00:00:34.854477193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Jul 16 00:00:34.855134 containerd[1557]: time="2025-07-16T00:00:34.855084680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Jul 16 00:00:35.808557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888716527.mount: Deactivated successfully.
Jul 16 00:00:36.123556 containerd[1557]: time="2025-07-16T00:00:36.123359998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:36.125108 containerd[1557]: time="2025-07-16T00:00:36.125052452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Jul 16 00:00:36.126683 containerd[1557]: time="2025-07-16T00:00:36.126628706Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:36.129084 containerd[1557]: time="2025-07-16T00:00:36.129030654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:36.129614 containerd[1557]: time="2025-07-16T00:00:36.129556794Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.274437526s"
Jul 16 00:00:36.129614 containerd[1557]: time="2025-07-16T00:00:36.129601852Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Jul 16 00:00:36.130167 containerd[1557]: time="2025-07-16T00:00:36.130093027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 16 00:00:36.690458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2625306725.mount: Deactivated successfully.
Jul 16 00:00:38.854778 containerd[1557]: time="2025-07-16T00:00:38.854701601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:38.855501 containerd[1557]: time="2025-07-16T00:00:38.855459574Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 16 00:00:38.856642 containerd[1557]: time="2025-07-16T00:00:38.856592445Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:38.859710 containerd[1557]: time="2025-07-16T00:00:38.859650058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:38.860895 containerd[1557]: time="2025-07-16T00:00:38.860845663Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.730706476s"
Jul 16 00:00:38.860961 containerd[1557]: time="2025-07-16T00:00:38.860895267Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 16 00:00:38.861481 containerd[1557]: time="2025-07-16T00:00:38.861454205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 16 00:00:39.508234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742644519.mount: Deactivated successfully.
Jul 16 00:00:39.518832 containerd[1557]: time="2025-07-16T00:00:39.518744274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 16 00:00:39.520542 containerd[1557]: time="2025-07-16T00:00:39.520426604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 16 00:00:39.522339 containerd[1557]: time="2025-07-16T00:00:39.522261086Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 16 00:00:39.524923 containerd[1557]: time="2025-07-16T00:00:39.524837585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 16 00:00:39.525326 containerd[1557]: time="2025-07-16T00:00:39.525274472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 663.707601ms"
Jul 16 00:00:39.525326 containerd[1557]: time="2025-07-16T00:00:39.525319944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 16 00:00:39.525853 containerd[1557]: time="2025-07-16T00:00:39.525818048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 16 00:00:40.672608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973646350.mount: Deactivated successfully.
Jul 16 00:00:43.014554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 16 00:00:43.017606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 16 00:00:43.088451 containerd[1557]: time="2025-07-16T00:00:43.088334187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:00:43.230162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 16 00:00:43.246678 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 16 00:00:43.281397 kubelet[2221]: E0716 00:00:43.281223 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 16 00:00:43.285419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 16 00:00:43.285636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 16 00:00:43.286031 systemd[1]: kubelet.service: Consumed 221ms CPU time, 110.3M memory peak. Jul 16 00:00:43.538366 containerd[1557]: time="2025-07-16T00:00:43.538203662Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 16 00:00:43.664347 containerd[1557]: time="2025-07-16T00:00:43.664244115Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:00:43.769025 containerd[1557]: time="2025-07-16T00:00:43.768960761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:00:43.770365 containerd[1557]: time="2025-07-16T00:00:43.770315190Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.244465713s" Jul 16 00:00:43.770365 containerd[1557]: time="2025-07-16T00:00:43.770356401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 16 00:00:47.152899 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:00:47.153107 systemd[1]: kubelet.service: Consumed 221ms CPU time, 110.3M memory peak. Jul 16 00:00:47.155393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:00:47.179543 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)... Jul 16 00:00:47.179553 systemd[1]: Reloading... 
Jul 16 00:00:47.274414 zram_generator::config[2306]: No configuration found. Jul 16 00:00:48.010235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:00:48.156683 systemd[1]: Reloading finished in 976 ms. Jul 16 00:00:48.237306 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 16 00:00:48.237463 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 16 00:00:48.237816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:00:48.237872 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.3M memory peak. Jul 16 00:00:48.239769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:00:48.737187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:00:48.753034 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:00:48.789962 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:00:48.789962 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 00:00:48.789962 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 16 00:00:48.790541 kubelet[2348]: I0716 00:00:48.789995 2348 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:00:49.005964 kubelet[2348]: I0716 00:00:49.005834 2348 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 00:00:49.005964 kubelet[2348]: I0716 00:00:49.005871 2348 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:00:49.006225 kubelet[2348]: I0716 00:00:49.006191 2348 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 00:00:49.032601 kubelet[2348]: E0716 00:00:49.032548 2348 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:49.033463 kubelet[2348]: I0716 00:00:49.033439 2348 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:00:49.039975 kubelet[2348]: I0716 00:00:49.039951 2348 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:00:49.046730 kubelet[2348]: I0716 00:00:49.046698 2348 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:00:49.047313 kubelet[2348]: I0716 00:00:49.047277 2348 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 00:00:49.047483 kubelet[2348]: I0716 00:00:49.047448 2348 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:00:49.047663 kubelet[2348]: I0716 00:00:49.047473 2348 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 16 00:00:49.047772 kubelet[2348]: I0716 00:00:49.047667 2348 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:00:49.047772 kubelet[2348]: I0716 00:00:49.047676 2348 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 00:00:49.047821 kubelet[2348]: I0716 00:00:49.047792 2348 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:00:49.049842 kubelet[2348]: I0716 00:00:49.049811 2348 kubelet.go:408] "Attempting to sync node with API server" Jul 16 00:00:49.049842 kubelet[2348]: I0716 00:00:49.049837 2348 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:00:49.049935 kubelet[2348]: I0716 00:00:49.049876 2348 kubelet.go:314] "Adding apiserver pod source" Jul 16 00:00:49.049935 kubelet[2348]: I0716 00:00:49.049908 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:00:49.055621 kubelet[2348]: W0716 00:00:49.055559 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:49.056544 kubelet[2348]: E0716 00:00:49.055651 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:49.056544 kubelet[2348]: I0716 00:00:49.055747 2348 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:00:49.056544 kubelet[2348]: W0716 00:00:49.055977 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:49.056544 kubelet[2348]: E0716 00:00:49.056054 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:49.056544 kubelet[2348]: I0716 00:00:49.056336 2348 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:00:49.056998 kubelet[2348]: W0716 00:00:49.056978 2348 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 16 00:00:49.059270 kubelet[2348]: I0716 00:00:49.059248 2348 server.go:1274] "Started kubelet" Jul 16 00:00:49.059476 kubelet[2348]: I0716 00:00:49.059425 2348 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:00:49.060322 kubelet[2348]: I0716 00:00:49.059984 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:00:49.060322 kubelet[2348]: I0716 00:00:49.060316 2348 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:00:49.061093 kubelet[2348]: I0716 00:00:49.060464 2348 server.go:449] "Adding debug handlers to kubelet server" Jul 16 00:00:49.062369 kubelet[2348]: I0716 00:00:49.062193 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:00:49.062855 kubelet[2348]: I0716 00:00:49.062824 2348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:00:49.063652 kubelet[2348]: 
E0716 00:00:49.063596 2348 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:00:49.065076 kubelet[2348]: E0716 00:00:49.063121 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185292496cfb5962 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-16 00:00:49.059223906 +0000 UTC m=+0.301865983,LastTimestamp:2025-07-16 00:00:49.059223906 +0000 UTC m=+0.301865983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 16 00:00:49.065076 kubelet[2348]: E0716 00:00:49.064326 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:49.065076 kubelet[2348]: I0716 00:00:49.064373 2348 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 00:00:49.065076 kubelet[2348]: I0716 00:00:49.064648 2348 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 00:00:49.065076 kubelet[2348]: I0716 00:00:49.064691 2348 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:00:49.065076 kubelet[2348]: E0716 00:00:49.064974 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Jul 16 00:00:49.065348 kubelet[2348]: W0716 00:00:49.065073 
2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:49.065348 kubelet[2348]: E0716 00:00:49.065119 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:49.065348 kubelet[2348]: I0716 00:00:49.065191 2348 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:00:49.065348 kubelet[2348]: I0716 00:00:49.065303 2348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:00:49.066393 kubelet[2348]: I0716 00:00:49.066356 2348 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:00:49.080418 kubelet[2348]: I0716 00:00:49.080164 2348 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 00:00:49.080418 kubelet[2348]: I0716 00:00:49.080180 2348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 00:00:49.080418 kubelet[2348]: I0716 00:00:49.080196 2348 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:00:49.080998 kubelet[2348]: I0716 00:00:49.080954 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:00:49.082482 kubelet[2348]: I0716 00:00:49.082445 2348 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:00:49.082482 kubelet[2348]: I0716 00:00:49.082472 2348 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 00:00:49.082567 kubelet[2348]: I0716 00:00:49.082504 2348 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 00:00:49.082567 kubelet[2348]: E0716 00:00:49.082549 2348 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:00:49.086141 kubelet[2348]: W0716 00:00:49.086069 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:49.086252 kubelet[2348]: E0716 00:00:49.086227 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:49.165122 kubelet[2348]: E0716 00:00:49.165068 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:49.183521 kubelet[2348]: E0716 00:00:49.183454 2348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 16 00:00:49.265181 kubelet[2348]: E0716 00:00:49.265147 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:49.265493 kubelet[2348]: E0716 00:00:49.265460 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: 
connect: connection refused" interval="400ms" Jul 16 00:00:49.365846 kubelet[2348]: E0716 00:00:49.365783 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:49.384014 kubelet[2348]: E0716 00:00:49.383963 2348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 16 00:00:49.466426 kubelet[2348]: E0716 00:00:49.466364 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:49.523798 kubelet[2348]: I0716 00:00:49.523652 2348 policy_none.go:49] "None policy: Start" Jul 16 00:00:49.524503 kubelet[2348]: I0716 00:00:49.524483 2348 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 00:00:49.524503 kubelet[2348]: I0716 00:00:49.524505 2348 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:00:49.533161 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 16 00:00:49.547400 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 16 00:00:49.551556 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 16 00:00:49.566996 kubelet[2348]: E0716 00:00:49.566945 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:49.570576 kubelet[2348]: I0716 00:00:49.570525 2348 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:00:49.570841 kubelet[2348]: I0716 00:00:49.570817 2348 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:00:49.570909 kubelet[2348]: I0716 00:00:49.570835 2348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:00:49.571087 kubelet[2348]: I0716 00:00:49.571057 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:00:49.572667 kubelet[2348]: E0716 00:00:49.572637 2348 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 16 00:00:49.666314 kubelet[2348]: E0716 00:00:49.666252 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Jul 16 00:00:49.672978 kubelet[2348]: I0716 00:00:49.672922 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 16 00:00:49.673554 kubelet[2348]: E0716 00:00:49.673515 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jul 16 00:00:49.793698 systemd[1]: Created slice kubepods-burstable-pod596d20fec5e53dc8bb80e19189f01be6.slice - libcontainer container kubepods-burstable-pod596d20fec5e53dc8bb80e19189f01be6.slice. 
Jul 16 00:00:49.811167 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Jul 16 00:00:49.825973 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. Jul 16 00:00:49.868964 kubelet[2348]: I0716 00:00:49.868902 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:49.868964 kubelet[2348]: I0716 00:00:49.868942 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Jul 16 00:00:49.868964 kubelet[2348]: I0716 00:00:49.868958 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/596d20fec5e53dc8bb80e19189f01be6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"596d20fec5e53dc8bb80e19189f01be6\") " pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:49.868964 kubelet[2348]: I0716 00:00:49.868973 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" 
Jul 16 00:00:49.869483 kubelet[2348]: I0716 00:00:49.868988 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:49.869483 kubelet[2348]: I0716 00:00:49.869002 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:49.869483 kubelet[2348]: I0716 00:00:49.869033 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:49.869483 kubelet[2348]: I0716 00:00:49.869059 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/596d20fec5e53dc8bb80e19189f01be6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"596d20fec5e53dc8bb80e19189f01be6\") " pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:49.869483 kubelet[2348]: I0716 00:00:49.869088 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/596d20fec5e53dc8bb80e19189f01be6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"596d20fec5e53dc8bb80e19189f01be6\") " pod="kube-system/kube-apiserver-localhost" Jul 16 
00:00:49.874983 kubelet[2348]: I0716 00:00:49.874965 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 16 00:00:49.875279 kubelet[2348]: E0716 00:00:49.875249 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jul 16 00:00:50.109810 containerd[1557]: time="2025-07-16T00:00:50.109698631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:596d20fec5e53dc8bb80e19189f01be6,Namespace:kube-system,Attempt:0,}" Jul 16 00:00:50.124334 containerd[1557]: time="2025-07-16T00:00:50.124287923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Jul 16 00:00:50.129292 containerd[1557]: time="2025-07-16T00:00:50.129234286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Jul 16 00:00:50.134632 containerd[1557]: time="2025-07-16T00:00:50.134591020Z" level=info msg="connecting to shim 7cd804d0d49e1c96f57a188a178f4ff3657669b58907249811bb84af4912c73a" address="unix:///run/containerd/s/b65aadd5b4748ca80f65793f9bf2416e5344d1b780a8327cf8efb77fbcc3644d" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:00:50.189422 containerd[1557]: time="2025-07-16T00:00:50.189302340Z" level=info msg="connecting to shim f3534bf5577fb82a122bba06cc1bf9c554e4517e879048fb3ad59cc922b5f5f7" address="unix:///run/containerd/s/cec584d76856223c584b174acb1a139be5639912719699880353c82b643270c2" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:00:50.199836 containerd[1557]: time="2025-07-16T00:00:50.199759561Z" level=info msg="connecting to shim 45abd7dda50d2c40b4f38df0aeb4f9880f1d20d9fe7094a8be1128b78cd2d28a" 
address="unix:///run/containerd/s/b7f958839839d1c8ea3371357a91ad30637af2c16f0d4ade6c6906cc3f87cbc6" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:00:50.285066 kubelet[2348]: W0716 00:00:50.249827 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:50.285066 kubelet[2348]: E0716 00:00:50.249933 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:50.285066 kubelet[2348]: W0716 00:00:50.260660 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:50.285066 kubelet[2348]: E0716 00:00:50.260722 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:50.286054 kubelet[2348]: I0716 00:00:50.286033 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 16 00:00:50.286612 kubelet[2348]: E0716 00:00:50.286588 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jul 16 00:00:50.301713 systemd[1]: Started 
cri-containerd-45abd7dda50d2c40b4f38df0aeb4f9880f1d20d9fe7094a8be1128b78cd2d28a.scope - libcontainer container 45abd7dda50d2c40b4f38df0aeb4f9880f1d20d9fe7094a8be1128b78cd2d28a. Jul 16 00:00:50.303133 systemd[1]: Started cri-containerd-7cd804d0d49e1c96f57a188a178f4ff3657669b58907249811bb84af4912c73a.scope - libcontainer container 7cd804d0d49e1c96f57a188a178f4ff3657669b58907249811bb84af4912c73a. Jul 16 00:00:50.350503 systemd[1]: Started cri-containerd-f3534bf5577fb82a122bba06cc1bf9c554e4517e879048fb3ad59cc922b5f5f7.scope - libcontainer container f3534bf5577fb82a122bba06cc1bf9c554e4517e879048fb3ad59cc922b5f5f7. Jul 16 00:00:50.403534 containerd[1557]: time="2025-07-16T00:00:50.403362669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:596d20fec5e53dc8bb80e19189f01be6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cd804d0d49e1c96f57a188a178f4ff3657669b58907249811bb84af4912c73a\"" Jul 16 00:00:50.409856 containerd[1557]: time="2025-07-16T00:00:50.409796367Z" level=info msg="CreateContainer within sandbox \"7cd804d0d49e1c96f57a188a178f4ff3657669b58907249811bb84af4912c73a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 00:00:50.410337 containerd[1557]: time="2025-07-16T00:00:50.410296147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"45abd7dda50d2c40b4f38df0aeb4f9880f1d20d9fe7094a8be1128b78cd2d28a\"" Jul 16 00:00:50.412976 containerd[1557]: time="2025-07-16T00:00:50.412322023Z" level=info msg="CreateContainer within sandbox \"45abd7dda50d2c40b4f38df0aeb4f9880f1d20d9fe7094a8be1128b78cd2d28a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 00:00:50.422332 containerd[1557]: time="2025-07-16T00:00:50.422295398Z" level=info msg="Container 162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d: CDI devices from CRI Config.CDIDevices: []" 
Jul 16 00:00:50.428796 containerd[1557]: time="2025-07-16T00:00:50.428748015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3534bf5577fb82a122bba06cc1bf9c554e4517e879048fb3ad59cc922b5f5f7\"" Jul 16 00:00:50.430829 containerd[1557]: time="2025-07-16T00:00:50.430791899Z" level=info msg="CreateContainer within sandbox \"f3534bf5577fb82a122bba06cc1bf9c554e4517e879048fb3ad59cc922b5f5f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 00:00:50.434272 containerd[1557]: time="2025-07-16T00:00:50.434244213Z" level=info msg="Container 6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:00:50.436307 containerd[1557]: time="2025-07-16T00:00:50.436269858Z" level=info msg="CreateContainer within sandbox \"45abd7dda50d2c40b4f38df0aeb4f9880f1d20d9fe7094a8be1128b78cd2d28a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d\"" Jul 16 00:00:50.436819 containerd[1557]: time="2025-07-16T00:00:50.436793519Z" level=info msg="StartContainer for \"162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d\"" Jul 16 00:00:50.438033 containerd[1557]: time="2025-07-16T00:00:50.438006267Z" level=info msg="connecting to shim 162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d" address="unix:///run/containerd/s/b7f958839839d1c8ea3371357a91ad30637af2c16f0d4ade6c6906cc3f87cbc6" protocol=ttrpc version=3 Jul 16 00:00:50.445094 containerd[1557]: time="2025-07-16T00:00:50.445045276Z" level=info msg="CreateContainer within sandbox \"7cd804d0d49e1c96f57a188a178f4ff3657669b58907249811bb84af4912c73a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3\"" Jul 16 
00:00:50.445896 containerd[1557]: time="2025-07-16T00:00:50.445844726Z" level=info msg="StartContainer for \"6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3\"" Jul 16 00:00:50.447313 containerd[1557]: time="2025-07-16T00:00:50.447267145Z" level=info msg="connecting to shim 6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3" address="unix:///run/containerd/s/b65aadd5b4748ca80f65793f9bf2416e5344d1b780a8327cf8efb77fbcc3644d" protocol=ttrpc version=3 Jul 16 00:00:50.447641 containerd[1557]: time="2025-07-16T00:00:50.447620657Z" level=info msg="Container d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:00:50.456994 containerd[1557]: time="2025-07-16T00:00:50.456952062Z" level=info msg="CreateContainer within sandbox \"f3534bf5577fb82a122bba06cc1bf9c554e4517e879048fb3ad59cc922b5f5f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5\"" Jul 16 00:00:50.457478 containerd[1557]: time="2025-07-16T00:00:50.457452224Z" level=info msg="StartContainer for \"d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5\"" Jul 16 00:00:50.458417 containerd[1557]: time="2025-07-16T00:00:50.458391507Z" level=info msg="connecting to shim d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5" address="unix:///run/containerd/s/cec584d76856223c584b174acb1a139be5639912719699880353c82b643270c2" protocol=ttrpc version=3 Jul 16 00:00:50.459560 systemd[1]: Started cri-containerd-162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d.scope - libcontainer container 162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d. 
Jul 16 00:00:50.464186 kubelet[2348]: W0716 00:00:50.464111 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jul 16 00:00:50.464186 kubelet[2348]: E0716 00:00:50.464197 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:00:50.468837 kubelet[2348]: E0716 00:00:50.468751 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="1.6s" Jul 16 00:00:50.475536 systemd[1]: Started cri-containerd-6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3.scope - libcontainer container 6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3. Jul 16 00:00:50.481116 systemd[1]: Started cri-containerd-d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5.scope - libcontainer container d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5. 
Jul 16 00:00:50.537401 containerd[1557]: time="2025-07-16T00:00:50.535750200Z" level=info msg="StartContainer for \"162956d1736f622f253ab5712f579f1aa1646359fbf624058f2412e607ecc43d\" returns successfully" Jul 16 00:00:50.541163 containerd[1557]: time="2025-07-16T00:00:50.541119962Z" level=info msg="StartContainer for \"6f1587dcad175ee1782648d8f7acf655223485bd790ee46229b2bb9bef37eae3\" returns successfully" Jul 16 00:00:50.558517 containerd[1557]: time="2025-07-16T00:00:50.558453411Z" level=info msg="StartContainer for \"d51f0c6b8b29285c43fa91932aa889614aab52931b21a85301b1791c10d12fc5\" returns successfully" Jul 16 00:00:51.104139 kubelet[2348]: I0716 00:00:51.103710 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 16 00:00:52.051263 kubelet[2348]: I0716 00:00:52.051200 2348 apiserver.go:52] "Watching apiserver" Jul 16 00:00:52.065825 kubelet[2348]: I0716 00:00:52.065789 2348 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 00:00:52.068391 kubelet[2348]: I0716 00:00:52.068334 2348 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 16 00:00:52.068391 kubelet[2348]: E0716 00:00:52.068363 2348 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 16 00:00:52.208674 kubelet[2348]: E0716 00:00:52.208614 2348 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:52.209220 kubelet[2348]: E0716 00:00:52.208788 2348 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 16 00:00:53.548569 update_engine[1540]: I20250716 00:00:53.548455 1540 
update_attempter.cc:509] Updating boot flags... Jul 16 00:00:55.275026 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-7.scope)... Jul 16 00:00:55.275043 systemd[1]: Reloading... Jul 16 00:00:55.357443 zram_generator::config[2690]: No configuration found. Jul 16 00:00:55.558702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:00:55.698413 systemd[1]: Reloading finished in 422 ms. Jul 16 00:00:55.722614 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:00:55.745026 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 00:00:55.745396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:00:55.745466 systemd[1]: kubelet.service: Consumed 862ms CPU time, 130.9M memory peak. Jul 16 00:00:55.747943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:00:56.008998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:00:56.022676 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:00:56.064851 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:00:56.064851 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 00:00:56.064851 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:00:56.065262 kubelet[2732]: I0716 00:00:56.064923 2732 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:00:56.071365 kubelet[2732]: I0716 00:00:56.071324 2732 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 00:00:56.071365 kubelet[2732]: I0716 00:00:56.071348 2732 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:00:56.071627 kubelet[2732]: I0716 00:00:56.071599 2732 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 00:00:56.072846 kubelet[2732]: I0716 00:00:56.072818 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 16 00:00:56.074770 kubelet[2732]: I0716 00:00:56.074724 2732 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:00:56.080848 kubelet[2732]: I0716 00:00:56.080821 2732 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:00:56.085760 kubelet[2732]: I0716 00:00:56.085736 2732 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:00:56.085855 kubelet[2732]: I0716 00:00:56.085838 2732 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 00:00:56.086004 kubelet[2732]: I0716 00:00:56.085963 2732 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:00:56.086162 kubelet[2732]: I0716 00:00:56.085990 2732 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 16 00:00:56.086162 kubelet[2732]: I0716 00:00:56.086162 2732 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:00:56.086262 kubelet[2732]: I0716 00:00:56.086172 2732 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 00:00:56.086262 kubelet[2732]: I0716 00:00:56.086195 2732 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:00:56.086308 kubelet[2732]: I0716 00:00:56.086291 2732 kubelet.go:408] "Attempting to sync node with API server" Jul 16 00:00:56.086308 kubelet[2732]: I0716 00:00:56.086303 2732 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:00:56.086409 kubelet[2732]: I0716 00:00:56.086329 2732 kubelet.go:314] "Adding apiserver pod source" Jul 16 00:00:56.086409 kubelet[2732]: I0716 00:00:56.086339 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:00:56.089139 kubelet[2732]: I0716 00:00:56.088480 2732 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:00:56.089139 kubelet[2732]: I0716 00:00:56.088847 2732 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:00:56.089234 kubelet[2732]: I0716 00:00:56.089221 2732 server.go:1274] "Started kubelet" Jul 16 00:00:56.090327 kubelet[2732]: I0716 00:00:56.090266 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:00:56.090670 kubelet[2732]: I0716 00:00:56.090639 2732 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:00:56.090725 kubelet[2732]: I0716 00:00:56.090701 2732 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:00:56.090953 kubelet[2732]: I0716 00:00:56.090925 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 
00:00:56.167576 kubelet[2732]: I0716 00:00:56.167521 2732 server.go:449] "Adding debug handlers to kubelet server" Jul 16 00:00:56.168448 kubelet[2732]: I0716 00:00:56.168414 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:00:56.170168 kubelet[2732]: I0716 00:00:56.170129 2732 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 00:00:56.170259 kubelet[2732]: E0716 00:00:56.170237 2732 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 16 00:00:56.170901 kubelet[2732]: I0716 00:00:56.170663 2732 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 00:00:56.170901 kubelet[2732]: I0716 00:00:56.170815 2732 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:00:56.174005 kubelet[2732]: I0716 00:00:56.173980 2732 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:00:56.174005 kubelet[2732]: I0716 00:00:56.174000 2732 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:00:56.174118 kubelet[2732]: I0716 00:00:56.174069 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:00:56.174737 kubelet[2732]: E0716 00:00:56.174702 2732 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:00:56.189058 kubelet[2732]: I0716 00:00:56.189004 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:00:56.191197 kubelet[2732]: I0716 00:00:56.191164 2732 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:00:56.191278 kubelet[2732]: I0716 00:00:56.191208 2732 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 00:00:56.191278 kubelet[2732]: I0716 00:00:56.191232 2732 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 00:00:56.191353 kubelet[2732]: E0716 00:00:56.191285 2732 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:00:56.216608 kubelet[2732]: I0716 00:00:56.216574 2732 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 00:00:56.216608 kubelet[2732]: I0716 00:00:56.216592 2732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 00:00:56.216608 kubelet[2732]: I0716 00:00:56.216614 2732 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:00:56.216813 kubelet[2732]: I0716 00:00:56.216783 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 16 00:00:56.216813 kubelet[2732]: I0716 00:00:56.216796 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 16 00:00:56.216867 kubelet[2732]: I0716 00:00:56.216817 2732 policy_none.go:49] "None policy: Start" Jul 16 00:00:56.217472 kubelet[2732]: I0716 00:00:56.217448 2732 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 00:00:56.217472 kubelet[2732]: I0716 00:00:56.217472 2732 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:00:56.217627 kubelet[2732]: I0716 00:00:56.217613 2732 state_mem.go:75] "Updated machine memory state" Jul 16 00:00:56.222454 kubelet[2732]: I0716 00:00:56.222432 2732 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:00:56.222651 kubelet[2732]: I0716 00:00:56.222629 2732 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:00:56.222981 kubelet[2732]: I0716 00:00:56.222647 2732 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:00:56.223179 kubelet[2732]: I0716 00:00:56.223134 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:00:56.325820 kubelet[2732]: I0716 00:00:56.325779 2732 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 16 00:00:56.372274 kubelet[2732]: I0716 00:00:56.372220 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Jul 16 00:00:56.372274 kubelet[2732]: I0716 00:00:56.372279 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/596d20fec5e53dc8bb80e19189f01be6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"596d20fec5e53dc8bb80e19189f01be6\") " pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:56.372534 kubelet[2732]: I0716 00:00:56.372315 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:56.372534 kubelet[2732]: I0716 00:00:56.372337 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:56.372534 kubelet[2732]: I0716 00:00:56.372362 
2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:56.372534 kubelet[2732]: I0716 00:00:56.372404 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:56.372534 kubelet[2732]: I0716 00:00:56.372432 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:56.372729 kubelet[2732]: I0716 00:00:56.372452 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/596d20fec5e53dc8bb80e19189f01be6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"596d20fec5e53dc8bb80e19189f01be6\") " pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:56.372729 kubelet[2732]: I0716 00:00:56.372474 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/596d20fec5e53dc8bb80e19189f01be6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"596d20fec5e53dc8bb80e19189f01be6\") " pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:56.956873 kubelet[2732]: E0716 00:00:56.956609 2732 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:56.957536 kubelet[2732]: E0716 00:00:56.957487 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:56.961451 kubelet[2732]: I0716 00:00:56.961410 2732 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 16 00:00:56.961524 kubelet[2732]: I0716 00:00:56.961494 2732 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 16 00:00:57.086781 kubelet[2732]: I0716 00:00:57.086720 2732 apiserver.go:52] "Watching apiserver" Jul 16 00:00:57.170905 kubelet[2732]: I0716 00:00:57.170860 2732 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 00:00:57.370689 kubelet[2732]: I0716 00:00:57.370614 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.370589917 podStartE2EDuration="4.370589917s" podCreationTimestamp="2025-07-16 00:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:00:57.167064592 +0000 UTC m=+1.140824127" watchObservedRunningTime="2025-07-16 00:00:57.370589917 +0000 UTC m=+1.344349462" Jul 16 00:00:57.667496 kubelet[2732]: E0716 00:00:57.667338 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 16 00:00:57.668449 kubelet[2732]: E0716 00:00:57.668400 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 16 00:00:57.669262 kubelet[2732]: I0716 00:00:57.669147 2732 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.669100624 podStartE2EDuration="3.669100624s" podCreationTimestamp="2025-07-16 00:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:00:57.669012515 +0000 UTC m=+1.642772060" watchObservedRunningTime="2025-07-16 00:00:57.669100624 +0000 UTC m=+1.642860159" Jul 16 00:00:57.669487 kubelet[2732]: I0716 00:00:57.669312 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.669272753 podStartE2EDuration="1.669272753s" podCreationTimestamp="2025-07-16 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:00:57.370707136 +0000 UTC m=+1.344466671" watchObservedRunningTime="2025-07-16 00:00:57.669272753 +0000 UTC m=+1.643032288" Jul 16 00:01:00.062033 kubelet[2732]: I0716 00:01:00.061997 2732 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 16 00:01:00.062526 kubelet[2732]: I0716 00:01:00.062508 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 16 00:01:00.062555 containerd[1557]: time="2025-07-16T00:01:00.062312292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 16 00:01:00.687495 systemd[1]: Created slice kubepods-besteffort-pod42988000_ac1a_45c0_b610_5c4af23c2bdd.slice - libcontainer container kubepods-besteffort-pod42988000_ac1a_45c0_b610_5c4af23c2bdd.slice. 
Jul 16 00:01:00.699212 kubelet[2732]: I0716 00:01:00.699161 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42988000-ac1a-45c0-b610-5c4af23c2bdd-lib-modules\") pod \"kube-proxy-qt7lm\" (UID: \"42988000-ac1a-45c0-b610-5c4af23c2bdd\") " pod="kube-system/kube-proxy-qt7lm" Jul 16 00:01:00.699212 kubelet[2732]: I0716 00:01:00.699205 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxfw\" (UniqueName: \"kubernetes.io/projected/42988000-ac1a-45c0-b610-5c4af23c2bdd-kube-api-access-sjxfw\") pod \"kube-proxy-qt7lm\" (UID: \"42988000-ac1a-45c0-b610-5c4af23c2bdd\") " pod="kube-system/kube-proxy-qt7lm" Jul 16 00:01:00.699418 kubelet[2732]: I0716 00:01:00.699229 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42988000-ac1a-45c0-b610-5c4af23c2bdd-kube-proxy\") pod \"kube-proxy-qt7lm\" (UID: \"42988000-ac1a-45c0-b610-5c4af23c2bdd\") " pod="kube-system/kube-proxy-qt7lm" Jul 16 00:01:00.699418 kubelet[2732]: I0716 00:01:00.699251 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42988000-ac1a-45c0-b610-5c4af23c2bdd-xtables-lock\") pod \"kube-proxy-qt7lm\" (UID: \"42988000-ac1a-45c0-b610-5c4af23c2bdd\") " pod="kube-system/kube-proxy-qt7lm" Jul 16 00:01:00.893259 systemd[1]: Created slice kubepods-besteffort-pod8df569bf_14cc_4389_a583_8a9b6a99017b.slice - libcontainer container kubepods-besteffort-pod8df569bf_14cc_4389_a583_8a9b6a99017b.slice. 
Jul 16 00:01:00.899868 kubelet[2732]: I0716 00:01:00.899840 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8df569bf-14cc-4389-a583-8a9b6a99017b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-hpg8r\" (UID: \"8df569bf-14cc-4389-a583-8a9b6a99017b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hpg8r"
Jul 16 00:01:00.899868 kubelet[2732]: I0716 00:01:00.899869 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56799\" (UniqueName: \"kubernetes.io/projected/8df569bf-14cc-4389-a583-8a9b6a99017b-kube-api-access-56799\") pod \"tigera-operator-5bf8dfcb4-hpg8r\" (UID: \"8df569bf-14cc-4389-a583-8a9b6a99017b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hpg8r"
Jul 16 00:01:01.001367 containerd[1557]: time="2025-07-16T00:01:01.001272321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt7lm,Uid:42988000-ac1a-45c0-b610-5c4af23c2bdd,Namespace:kube-system,Attempt:0,}"
Jul 16 00:01:01.017915 containerd[1557]: time="2025-07-16T00:01:01.017868689Z" level=info msg="connecting to shim 95e14fc354b75beb32c70db38249be4a95b1ce21f091c8c9c1bd61d2b9a6fa3b" address="unix:///run/containerd/s/0b3aa565170be9501e146ddef104eaaa74c80e9cebb99f7c7c32f79432315d29" namespace=k8s.io protocol=ttrpc version=3
Jul 16 00:01:01.055661 systemd[1]: Started cri-containerd-95e14fc354b75beb32c70db38249be4a95b1ce21f091c8c9c1bd61d2b9a6fa3b.scope - libcontainer container 95e14fc354b75beb32c70db38249be4a95b1ce21f091c8c9c1bd61d2b9a6fa3b.
Jul 16 00:01:01.078568 containerd[1557]: time="2025-07-16T00:01:01.078529856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt7lm,Uid:42988000-ac1a-45c0-b610-5c4af23c2bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"95e14fc354b75beb32c70db38249be4a95b1ce21f091c8c9c1bd61d2b9a6fa3b\""
Jul 16 00:01:01.080849 containerd[1557]: time="2025-07-16T00:01:01.080822788Z" level=info msg="CreateContainer within sandbox \"95e14fc354b75beb32c70db38249be4a95b1ce21f091c8c9c1bd61d2b9a6fa3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 16 00:01:01.091961 containerd[1557]: time="2025-07-16T00:01:01.091913687Z" level=info msg="Container 5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:01.099831 containerd[1557]: time="2025-07-16T00:01:01.099796849Z" level=info msg="CreateContainer within sandbox \"95e14fc354b75beb32c70db38249be4a95b1ce21f091c8c9c1bd61d2b9a6fa3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6\""
Jul 16 00:01:01.100282 containerd[1557]: time="2025-07-16T00:01:01.100258465Z" level=info msg="StartContainer for \"5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6\""
Jul 16 00:01:01.103714 containerd[1557]: time="2025-07-16T00:01:01.103667287Z" level=info msg="connecting to shim 5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6" address="unix:///run/containerd/s/0b3aa565170be9501e146ddef104eaaa74c80e9cebb99f7c7c32f79432315d29" protocol=ttrpc version=3
Jul 16 00:01:01.126527 systemd[1]: Started cri-containerd-5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6.scope - libcontainer container 5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6.
Jul 16 00:01:01.167659 containerd[1557]: time="2025-07-16T00:01:01.167619147Z" level=info msg="StartContainer for \"5a15d9cee9a52d51726106660f7a6d9564428c08fc58eda1dd698907ae04a0a6\" returns successfully"
Jul 16 00:01:01.197516 containerd[1557]: time="2025-07-16T00:01:01.197467442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hpg8r,Uid:8df569bf-14cc-4389-a583-8a9b6a99017b,Namespace:tigera-operator,Attempt:0,}"
Jul 16 00:01:01.221132 containerd[1557]: time="2025-07-16T00:01:01.221085955Z" level=info msg="connecting to shim dd0427916039cf82cd5e2a9d7f775710f9cbf9940fcaaf5e35678ea1b5c64794" address="unix:///run/containerd/s/7cc3002deb8a903d433b5d3b81c300c1225a59a156487a8815b5b876c7616332" namespace=k8s.io protocol=ttrpc version=3
Jul 16 00:01:01.244533 systemd[1]: Started cri-containerd-dd0427916039cf82cd5e2a9d7f775710f9cbf9940fcaaf5e35678ea1b5c64794.scope - libcontainer container dd0427916039cf82cd5e2a9d7f775710f9cbf9940fcaaf5e35678ea1b5c64794.
Jul 16 00:01:01.291676 containerd[1557]: time="2025-07-16T00:01:01.291632541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hpg8r,Uid:8df569bf-14cc-4389-a583-8a9b6a99017b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd0427916039cf82cd5e2a9d7f775710f9cbf9940fcaaf5e35678ea1b5c64794\""
Jul 16 00:01:01.292980 containerd[1557]: time="2025-07-16T00:01:01.292935445Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 16 00:01:01.810566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402020668.mount: Deactivated successfully.
Jul 16 00:01:02.190014 kubelet[2732]: I0716 00:01:02.189857 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qt7lm" podStartSLOduration=2.189839585 podStartE2EDuration="2.189839585s" podCreationTimestamp="2025-07-16 00:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:01:01.21858264 +0000 UTC m=+5.192342176" watchObservedRunningTime="2025-07-16 00:01:02.189839585 +0000 UTC m=+6.163599120"
Jul 16 00:01:03.506276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027075772.mount: Deactivated successfully.
Jul 16 00:01:04.046274 containerd[1557]: time="2025-07-16T00:01:04.046207882Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:04.046943 containerd[1557]: time="2025-07-16T00:01:04.046885741Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 16 00:01:04.048141 containerd[1557]: time="2025-07-16T00:01:04.048101622Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:04.050067 containerd[1557]: time="2025-07-16T00:01:04.050030313Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:04.050620 containerd[1557]: time="2025-07-16T00:01:04.050582351Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.757619852s"
Jul 16 00:01:04.050666 containerd[1557]: time="2025-07-16T00:01:04.050621981Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 16 00:01:04.052609 containerd[1557]: time="2025-07-16T00:01:04.052581522Z" level=info msg="CreateContainer within sandbox \"dd0427916039cf82cd5e2a9d7f775710f9cbf9940fcaaf5e35678ea1b5c64794\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 16 00:01:04.061425 containerd[1557]: time="2025-07-16T00:01:04.061364609Z" level=info msg="Container b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:04.064890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949828290.mount: Deactivated successfully.
Jul 16 00:01:04.068272 containerd[1557]: time="2025-07-16T00:01:04.068233204Z" level=info msg="CreateContainer within sandbox \"dd0427916039cf82cd5e2a9d7f775710f9cbf9940fcaaf5e35678ea1b5c64794\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205\""
Jul 16 00:01:04.068892 containerd[1557]: time="2025-07-16T00:01:04.068691366Z" level=info msg="StartContainer for \"b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205\""
Jul 16 00:01:04.069418 containerd[1557]: time="2025-07-16T00:01:04.069390778Z" level=info msg="connecting to shim b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205" address="unix:///run/containerd/s/7cc3002deb8a903d433b5d3b81c300c1225a59a156487a8815b5b876c7616332" protocol=ttrpc version=3
Jul 16 00:01:04.120712 systemd[1]: Started cri-containerd-b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205.scope - libcontainer container b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205.
Jul 16 00:01:04.152030 containerd[1557]: time="2025-07-16T00:01:04.151988699Z" level=info msg="StartContainer for \"b374286681524b1fdbe0b771eea746d13920cbc9488b99a9dc8aa8016bff4205\" returns successfully"
Jul 16 00:01:04.224926 kubelet[2732]: I0716 00:01:04.224863 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-hpg8r" podStartSLOduration=1.4659812190000001 podStartE2EDuration="4.224844643s" podCreationTimestamp="2025-07-16 00:01:00 +0000 UTC" firstStartedPulling="2025-07-16 00:01:01.292584641 +0000 UTC m=+5.266344176" lastFinishedPulling="2025-07-16 00:01:04.051448065 +0000 UTC m=+8.025207600" observedRunningTime="2025-07-16 00:01:04.224809313 +0000 UTC m=+8.198568908" watchObservedRunningTime="2025-07-16 00:01:04.224844643 +0000 UTC m=+8.198604178"
Jul 16 00:01:09.278916 sudo[1768]: pam_unix(sudo:session): session closed for user root
Jul 16 00:01:09.280541 sshd[1767]: Connection closed by 10.0.0.1 port 60154
Jul 16 00:01:09.281746 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:09.285576 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:60154.service: Deactivated successfully.
Jul 16 00:01:09.288795 systemd[1]: session-7.scope: Deactivated successfully.
Jul 16 00:01:09.289300 systemd[1]: session-7.scope: Consumed 5.827s CPU time, 222.1M memory peak.
Jul 16 00:01:09.294188 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit.
Jul 16 00:01:09.295305 systemd-logind[1537]: Removed session 7.
Jul 16 00:01:11.603495 systemd[1]: Created slice kubepods-besteffort-pod4b8d51ff_e6a7_4e6f_99ab_1a7884e6e0e7.slice - libcontainer container kubepods-besteffort-pod4b8d51ff_e6a7_4e6f_99ab_1a7884e6e0e7.slice.
Jul 16 00:01:11.666851 kubelet[2732]: I0716 00:01:11.666779 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7-tigera-ca-bundle\") pod \"calico-typha-6bf46644d8-4sqh4\" (UID: \"4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7\") " pod="calico-system/calico-typha-6bf46644d8-4sqh4"
Jul 16 00:01:11.666851 kubelet[2732]: I0716 00:01:11.666830 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7-typha-certs\") pod \"calico-typha-6bf46644d8-4sqh4\" (UID: \"4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7\") " pod="calico-system/calico-typha-6bf46644d8-4sqh4"
Jul 16 00:01:11.666851 kubelet[2732]: I0716 00:01:11.666846 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvj45\" (UniqueName: \"kubernetes.io/projected/4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7-kube-api-access-dvj45\") pod \"calico-typha-6bf46644d8-4sqh4\" (UID: \"4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7\") " pod="calico-system/calico-typha-6bf46644d8-4sqh4"
Jul 16 00:01:11.905993 systemd[1]: Created slice kubepods-besteffort-pode44b89b9_5cbc_4c02_ae18_81e73ad83fdc.slice - libcontainer container kubepods-besteffort-pode44b89b9_5cbc_4c02_ae18_81e73ad83fdc.slice.
Jul 16 00:01:11.907786 containerd[1557]: time="2025-07-16T00:01:11.907702180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf46644d8-4sqh4,Uid:4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:11.969345 kubelet[2732]: I0716 00:01:11.969293 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-cni-bin-dir\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969345 kubelet[2732]: I0716 00:01:11.969359 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-tigera-ca-bundle\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969549 kubelet[2732]: I0716 00:01:11.969401 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276pp\" (UniqueName: \"kubernetes.io/projected/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-kube-api-access-276pp\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969549 kubelet[2732]: I0716 00:01:11.969422 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-lib-modules\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969549 kubelet[2732]: I0716 00:01:11.969440 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-flexvol-driver-host\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969549 kubelet[2732]: I0716 00:01:11.969453 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-cni-net-dir\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969549 kubelet[2732]: I0716 00:01:11.969466 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-var-lib-calico\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969670 kubelet[2732]: I0716 00:01:11.969482 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-node-certs\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969670 kubelet[2732]: I0716 00:01:11.969495 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-var-run-calico\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969670 kubelet[2732]: I0716 00:01:11.969537 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-xtables-lock\") 
pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969670 kubelet[2732]: I0716 00:01:11.969606 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-cni-log-dir\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:11.969670 kubelet[2732]: I0716 00:01:11.969645 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e44b89b9-5cbc-4c02-ae18-81e73ad83fdc-policysync\") pod \"calico-node-npdx2\" (UID: \"e44b89b9-5cbc-4c02-ae18-81e73ad83fdc\") " pod="calico-system/calico-node-npdx2" Jul 16 00:01:12.044935 containerd[1557]: time="2025-07-16T00:01:12.044881152Z" level=info msg="connecting to shim cb30c0da3361851a6a187e3f286a535b590e8589445ff7f2b6f09a511fc25722" address="unix:///run/containerd/s/19741ff2add9382798dff0645eac58ca36622d8a7b469feb29a95e1e334331fb" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.076301 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.076369 kubelet[2732]: W0716 00:01:12.076349 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.076454 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.076748 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.076369 kubelet[2732]: W0716 00:01:12.076756 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.076774 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.076955 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.076369 kubelet[2732]: W0716 00:01:12.076969 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.076980 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.076369 kubelet[2732]: E0716 00:01:12.077152 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.079759 kubelet[2732]: W0716 00:01:12.077159 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.079759 kubelet[2732]: E0716 00:01:12.077167 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.079759 kubelet[2732]: E0716 00:01:12.077340 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.079759 kubelet[2732]: W0716 00:01:12.077347 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.079759 kubelet[2732]: E0716 00:01:12.077366 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.079759 kubelet[2732]: E0716 00:01:12.078919 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.079759 kubelet[2732]: W0716 00:01:12.078957 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.079759 kubelet[2732]: E0716 00:01:12.078970 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.079759 kubelet[2732]: E0716 00:01:12.079740 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.079759 kubelet[2732]: W0716 00:01:12.079750 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.076581 systemd[1]: Started cri-containerd-cb30c0da3361851a6a187e3f286a535b590e8589445ff7f2b6f09a511fc25722.scope - libcontainer container cb30c0da3361851a6a187e3f286a535b590e8589445ff7f2b6f09a511fc25722. Jul 16 00:01:12.080651 kubelet[2732]: E0716 00:01:12.080264 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.082591 kubelet[2732]: E0716 00:01:12.082563 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.082851 kubelet[2732]: W0716 00:01:12.082670 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.082851 kubelet[2732]: E0716 00:01:12.082691 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.084761 kubelet[2732]: E0716 00:01:12.084675 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.084761 kubelet[2732]: W0716 00:01:12.084695 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.084761 kubelet[2732]: E0716 00:01:12.084716 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.125667 containerd[1557]: time="2025-07-16T00:01:12.125623396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf46644d8-4sqh4,Uid:4b8d51ff-e6a7-4e6f-99ab-1a7884e6e0e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb30c0da3361851a6a187e3f286a535b590e8589445ff7f2b6f09a511fc25722\"" Jul 16 00:01:12.127502 containerd[1557]: time="2025-07-16T00:01:12.127426863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 16 00:01:12.198755 kubelet[2732]: E0716 00:01:12.198545 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527" Jul 16 00:01:12.210471 containerd[1557]: time="2025-07-16T00:01:12.210420602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-npdx2,Uid:e44b89b9-5cbc-4c02-ae18-81e73ad83fdc,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:12.232349 containerd[1557]: time="2025-07-16T00:01:12.232294871Z" level=info msg="connecting to shim f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836" address="unix:///run/containerd/s/22d320c2ff9abaac6581641870a60ef1bf5d6f888c71aba934cca9995ba6a387" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:12.258538 systemd[1]: Started cri-containerd-f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836.scope - libcontainer container f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836. 
Jul 16 00:01:12.265669 kubelet[2732]: E0716 00:01:12.265636 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.265669 kubelet[2732]: W0716 00:01:12.265656 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.265669 kubelet[2732]: E0716 00:01:12.265675 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.265916 kubelet[2732]: E0716 00:01:12.265900 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.265916 kubelet[2732]: W0716 00:01:12.265910 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.265975 kubelet[2732]: E0716 00:01:12.265919 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.266104 kubelet[2732]: E0716 00:01:12.266089 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.266104 kubelet[2732]: W0716 00:01:12.266099 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.266161 kubelet[2732]: E0716 00:01:12.266107 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.266287 kubelet[2732]: E0716 00:01:12.266272 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.266287 kubelet[2732]: W0716 00:01:12.266282 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.266344 kubelet[2732]: E0716 00:01:12.266290 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.266492 kubelet[2732]: E0716 00:01:12.266476 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.266492 kubelet[2732]: W0716 00:01:12.266489 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.266558 kubelet[2732]: E0716 00:01:12.266497 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.266682 kubelet[2732]: E0716 00:01:12.266660 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.266682 kubelet[2732]: W0716 00:01:12.266671 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.266682 kubelet[2732]: E0716 00:01:12.266678 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.266856 kubelet[2732]: E0716 00:01:12.266835 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.266856 kubelet[2732]: W0716 00:01:12.266851 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.266909 kubelet[2732]: E0716 00:01:12.266859 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.267036 kubelet[2732]: E0716 00:01:12.267021 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.267036 kubelet[2732]: W0716 00:01:12.267031 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.267105 kubelet[2732]: E0716 00:01:12.267039 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.267227 kubelet[2732]: E0716 00:01:12.267211 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.267227 kubelet[2732]: W0716 00:01:12.267223 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.267309 kubelet[2732]: E0716 00:01:12.267231 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.267435 kubelet[2732]: E0716 00:01:12.267417 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.267435 kubelet[2732]: W0716 00:01:12.267429 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.267503 kubelet[2732]: E0716 00:01:12.267438 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.267623 kubelet[2732]: E0716 00:01:12.267599 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.267623 kubelet[2732]: W0716 00:01:12.267611 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.267623 kubelet[2732]: E0716 00:01:12.267620 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.267790 kubelet[2732]: E0716 00:01:12.267774 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.267790 kubelet[2732]: W0716 00:01:12.267787 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.267869 kubelet[2732]: E0716 00:01:12.267809 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.267987 kubelet[2732]: E0716 00:01:12.267970 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.267987 kubelet[2732]: W0716 00:01:12.267981 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.268051 kubelet[2732]: E0716 00:01:12.267990 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.268171 kubelet[2732]: E0716 00:01:12.268138 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.268171 kubelet[2732]: W0716 00:01:12.268148 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.268171 kubelet[2732]: E0716 00:01:12.268157 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.268337 kubelet[2732]: E0716 00:01:12.268304 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.268337 kubelet[2732]: W0716 00:01:12.268314 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.268337 kubelet[2732]: E0716 00:01:12.268324 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.268525 kubelet[2732]: E0716 00:01:12.268504 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.268525 kubelet[2732]: W0716 00:01:12.268514 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.268525 kubelet[2732]: E0716 00:01:12.268522 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.268708 kubelet[2732]: E0716 00:01:12.268692 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.268708 kubelet[2732]: W0716 00:01:12.268702 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.268708 kubelet[2732]: E0716 00:01:12.268710 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.268915 kubelet[2732]: E0716 00:01:12.268899 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.268915 kubelet[2732]: W0716 00:01:12.268909 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.268915 kubelet[2732]: E0716 00:01:12.268916 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.269468 kubelet[2732]: E0716 00:01:12.269443 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.269468 kubelet[2732]: W0716 00:01:12.269459 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.269468 kubelet[2732]: E0716 00:01:12.269469 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.269662 kubelet[2732]: E0716 00:01:12.269643 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.269662 kubelet[2732]: W0716 00:01:12.269654 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.269662 kubelet[2732]: E0716 00:01:12.269663 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.272283 kubelet[2732]: E0716 00:01:12.272246 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.272283 kubelet[2732]: W0716 00:01:12.272274 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.272348 kubelet[2732]: E0716 00:01:12.272300 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.272348 kubelet[2732]: I0716 00:01:12.272340 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4b970f5-c85f-469e-a5f5-523ed3eaf527-kubelet-dir\") pod \"csi-node-driver-wwvbq\" (UID: \"c4b970f5-c85f-469e-a5f5-523ed3eaf527\") " pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:12.272634 kubelet[2732]: E0716 00:01:12.272614 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.272634 kubelet[2732]: W0716 00:01:12.272627 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.272702 kubelet[2732]: E0716 00:01:12.272643 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.272702 kubelet[2732]: I0716 00:01:12.272657 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c4b970f5-c85f-469e-a5f5-523ed3eaf527-registration-dir\") pod \"csi-node-driver-wwvbq\" (UID: \"c4b970f5-c85f-469e-a5f5-523ed3eaf527\") " pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:12.272921 kubelet[2732]: E0716 00:01:12.272889 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.272921 kubelet[2732]: W0716 00:01:12.272902 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.272921 kubelet[2732]: E0716 00:01:12.272922 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.272998 kubelet[2732]: I0716 00:01:12.272942 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75w6w\" (UniqueName: \"kubernetes.io/projected/c4b970f5-c85f-469e-a5f5-523ed3eaf527-kube-api-access-75w6w\") pod \"csi-node-driver-wwvbq\" (UID: \"c4b970f5-c85f-469e-a5f5-523ed3eaf527\") " pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:12.273209 kubelet[2732]: E0716 00:01:12.273192 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.273209 kubelet[2732]: W0716 00:01:12.273204 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.273266 kubelet[2732]: E0716 00:01:12.273225 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.273266 kubelet[2732]: I0716 00:01:12.273238 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c4b970f5-c85f-469e-a5f5-523ed3eaf527-varrun\") pod \"csi-node-driver-wwvbq\" (UID: \"c4b970f5-c85f-469e-a5f5-523ed3eaf527\") " pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:12.273660 kubelet[2732]: E0716 00:01:12.273617 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.273660 kubelet[2732]: W0716 00:01:12.273630 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.273660 kubelet[2732]: E0716 00:01:12.273645 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.273660 kubelet[2732]: I0716 00:01:12.273658 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c4b970f5-c85f-469e-a5f5-523ed3eaf527-socket-dir\") pod \"csi-node-driver-wwvbq\" (UID: \"c4b970f5-c85f-469e-a5f5-523ed3eaf527\") " pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:12.274416 kubelet[2732]: E0716 00:01:12.273968 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.274416 kubelet[2732]: W0716 00:01:12.273993 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.274416 kubelet[2732]: E0716 00:01:12.274027 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.274416 kubelet[2732]: E0716 00:01:12.274276 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.274416 kubelet[2732]: W0716 00:01:12.274285 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.274416 kubelet[2732]: E0716 00:01:12.274313 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.274579 kubelet[2732]: E0716 00:01:12.274527 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.274579 kubelet[2732]: W0716 00:01:12.274536 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.274579 kubelet[2732]: E0716 00:01:12.274560 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.274868 kubelet[2732]: E0716 00:01:12.274752 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.274868 kubelet[2732]: W0716 00:01:12.274764 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.274868 kubelet[2732]: E0716 00:01:12.274788 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.274982 kubelet[2732]: E0716 00:01:12.274966 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.274982 kubelet[2732]: W0716 00:01:12.274976 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.275035 kubelet[2732]: E0716 00:01:12.274999 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.275693 kubelet[2732]: E0716 00:01:12.275247 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.275693 kubelet[2732]: W0716 00:01:12.275691 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.275750 kubelet[2732]: E0716 00:01:12.275703 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.276479 kubelet[2732]: E0716 00:01:12.276461 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.276479 kubelet[2732]: W0716 00:01:12.276473 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.276563 kubelet[2732]: E0716 00:01:12.276483 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.278856 kubelet[2732]: E0716 00:01:12.278729 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.278856 kubelet[2732]: W0716 00:01:12.278744 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.278856 kubelet[2732]: E0716 00:01:12.278754 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.279198 kubelet[2732]: E0716 00:01:12.278942 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.279198 kubelet[2732]: W0716 00:01:12.278949 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.279198 kubelet[2732]: E0716 00:01:12.278957 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.279198 kubelet[2732]: E0716 00:01:12.279109 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.279198 kubelet[2732]: W0716 00:01:12.279116 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.279198 kubelet[2732]: E0716 00:01:12.279123 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.291536 containerd[1557]: time="2025-07-16T00:01:12.291498644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-npdx2,Uid:e44b89b9-5cbc-4c02-ae18-81e73ad83fdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\"" Jul 16 00:01:12.374130 kubelet[2732]: E0716 00:01:12.374094 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.374130 kubelet[2732]: W0716 00:01:12.374116 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.374130 kubelet[2732]: E0716 00:01:12.374138 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.374449 kubelet[2732]: E0716 00:01:12.374418 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.374496 kubelet[2732]: W0716 00:01:12.374448 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.374496 kubelet[2732]: E0716 00:01:12.374485 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.374800 kubelet[2732]: E0716 00:01:12.374785 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.374800 kubelet[2732]: W0716 00:01:12.374795 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.374864 kubelet[2732]: E0716 00:01:12.374810 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.375072 kubelet[2732]: E0716 00:01:12.375056 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.375072 kubelet[2732]: W0716 00:01:12.375068 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.375072 kubelet[2732]: E0716 00:01:12.375082 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.375309 kubelet[2732]: E0716 00:01:12.375292 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.375309 kubelet[2732]: W0716 00:01:12.375306 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.375403 kubelet[2732]: E0716 00:01:12.375321 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.375722 kubelet[2732]: E0716 00:01:12.375647 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.375722 kubelet[2732]: W0716 00:01:12.375683 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.375870 kubelet[2732]: E0716 00:01:12.375831 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.376187 kubelet[2732]: E0716 00:01:12.376139 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.376187 kubelet[2732]: W0716 00:01:12.376154 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.376271 kubelet[2732]: E0716 00:01:12.376250 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.376676 kubelet[2732]: E0716 00:01:12.376639 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.376676 kubelet[2732]: W0716 00:01:12.376657 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.376947 kubelet[2732]: E0716 00:01:12.376847 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.376947 kubelet[2732]: E0716 00:01:12.376931 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.376947 kubelet[2732]: W0716 00:01:12.376938 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.377042 kubelet[2732]: E0716 00:01:12.377001 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.377155 kubelet[2732]: E0716 00:01:12.377132 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.377155 kubelet[2732]: W0716 00:01:12.377144 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.377236 kubelet[2732]: E0716 00:01:12.377202 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.377321 kubelet[2732]: E0716 00:01:12.377304 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.377321 kubelet[2732]: W0716 00:01:12.377315 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.377454 kubelet[2732]: E0716 00:01:12.377368 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.377665 kubelet[2732]: E0716 00:01:12.377628 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.377665 kubelet[2732]: W0716 00:01:12.377642 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.377665 kubelet[2732]: E0716 00:01:12.377663 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.377910 kubelet[2732]: E0716 00:01:12.377882 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.377910 kubelet[2732]: W0716 00:01:12.377895 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.377910 kubelet[2732]: E0716 00:01:12.377909 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.378277 kubelet[2732]: E0716 00:01:12.378142 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.378277 kubelet[2732]: W0716 00:01:12.378157 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.378277 kubelet[2732]: E0716 00:01:12.378228 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.378737 kubelet[2732]: E0716 00:01:12.378570 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.378737 kubelet[2732]: W0716 00:01:12.378581 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.378737 kubelet[2732]: E0716 00:01:12.378645 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.378835 kubelet[2732]: E0716 00:01:12.378812 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.378835 kubelet[2732]: W0716 00:01:12.378824 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.378939 kubelet[2732]: E0716 00:01:12.378888 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.379000 kubelet[2732]: E0716 00:01:12.378988 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.379000 kubelet[2732]: W0716 00:01:12.378997 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.379153 kubelet[2732]: E0716 00:01:12.379105 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.379449 kubelet[2732]: E0716 00:01:12.379230 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.379449 kubelet[2732]: W0716 00:01:12.379256 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.379449 kubelet[2732]: E0716 00:01:12.379268 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:01:12.379606 kubelet[2732]: E0716 00:01:12.379584 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.379606 kubelet[2732]: W0716 00:01:12.379597 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.379672 kubelet[2732]: E0716 00:01:12.379615 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:01:12.380433 kubelet[2732]: E0716 00:01:12.379899 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:01:12.380433 kubelet[2732]: W0716 00:01:12.379913 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:01:12.380433 kubelet[2732]: E0716 00:01:12.380030 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 16 00:01:12.380433 kubelet[2732]: E0716 00:01:12.380115 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:12.380433 kubelet[2732]: W0716 00:01:12.380124 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:12.380433 kubelet[2732]: E0716 00:01:12.380137 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:12.380433 kubelet[2732]: E0716 00:01:12.380325 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:12.380433 kubelet[2732]: W0716 00:01:12.380335 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:12.380433 kubelet[2732]: E0716 00:01:12.380438 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:12.380670 kubelet[2732]: E0716 00:01:12.380566 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:12.380670 kubelet[2732]: W0716 00:01:12.380575 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:12.380670 kubelet[2732]: E0716 00:01:12.380599 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:12.380905 kubelet[2732]: E0716 00:01:12.380865 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:12.380905 kubelet[2732]: W0716 00:01:12.380878 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:12.380905 kubelet[2732]: E0716 00:01:12.380894 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:12.381225 kubelet[2732]: E0716 00:01:12.381201 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:12.381225 kubelet[2732]: W0716 00:01:12.381216 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:12.381225 kubelet[2732]: E0716 00:01:12.381226 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:12.388470 kubelet[2732]: E0716 00:01:12.388446 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:12.388470 kubelet[2732]: W0716 00:01:12.388460 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:12.388470 kubelet[2732]: E0716 00:01:12.388472 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:13.898896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710217930.mount: Deactivated successfully.
Jul 16 00:01:14.191829 kubelet[2732]: E0716 00:01:14.191679 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527"
Jul 16 00:01:14.261551 containerd[1557]: time="2025-07-16T00:01:14.261470557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:14.262580 containerd[1557]: time="2025-07-16T00:01:14.262532604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 16 00:01:14.263652 containerd[1557]: time="2025-07-16T00:01:14.263623837Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:14.265504 containerd[1557]: time="2025-07-16T00:01:14.265454367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:14.265948 containerd[1557]: time="2025-07-16T00:01:14.265922272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.138456764s"
Jul 16 00:01:14.265994 containerd[1557]: time="2025-07-16T00:01:14.265952181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 16 00:01:14.267429 containerd[1557]: time="2025-07-16T00:01:14.266884373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 16 00:01:14.277006 containerd[1557]: time="2025-07-16T00:01:14.276964043Z" level=info msg="CreateContainer within sandbox \"cb30c0da3361851a6a187e3f286a535b590e8589445ff7f2b6f09a511fc25722\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 16 00:01:14.285050 containerd[1557]: time="2025-07-16T00:01:14.284998702Z" level=info msg="Container 15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:14.293734 containerd[1557]: time="2025-07-16T00:01:14.293673845Z" level=info msg="CreateContainer within sandbox \"cb30c0da3361851a6a187e3f286a535b590e8589445ff7f2b6f09a511fc25722\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8\""
Jul 16 00:01:14.294437 containerd[1557]: time="2025-07-16T00:01:14.294247617Z" level=info msg="StartContainer for \"15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8\""
Jul 16 00:01:14.295810 containerd[1557]: time="2025-07-16T00:01:14.295764322Z" level=info msg="connecting to shim 15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8" address="unix:///run/containerd/s/19741ff2add9382798dff0645eac58ca36622d8a7b469feb29a95e1e334331fb" protocol=ttrpc version=3
Jul 16 00:01:14.321533 systemd[1]: Started cri-containerd-15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8.scope - libcontainer container 15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8.
Jul 16 00:01:14.501038 containerd[1557]: time="2025-07-16T00:01:14.500859319Z" level=info msg="StartContainer for \"15ec1679732f25b9ebd0002f726026688b109eae524d852d279ec1ec9e95d9c8\" returns successfully"
Jul 16 00:01:15.251433 kubelet[2732]: I0716 00:01:15.251318 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bf46644d8-4sqh4" podStartSLOduration=2.111680418 podStartE2EDuration="4.251299967s" podCreationTimestamp="2025-07-16 00:01:11 +0000 UTC" firstStartedPulling="2025-07-16 00:01:12.127127586 +0000 UTC m=+16.100887121" lastFinishedPulling="2025-07-16 00:01:14.266747135 +0000 UTC m=+18.240506670" observedRunningTime="2025-07-16 00:01:15.251056842 +0000 UTC m=+19.224816377" watchObservedRunningTime="2025-07-16 00:01:15.251299967 +0000 UTC m=+19.225059492"
Jul 16 00:01:15.292781 kubelet[2732]: E0716 00:01:15.292732 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:15.292781 kubelet[2732]: W0716 00:01:15.292756 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:15.292781 kubelet[2732]: E0716 00:01:15.292778 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:15.293065 kubelet[2732]: E0716 00:01:15.292962 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:15.293065 kubelet[2732]: W0716 00:01:15.292973 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:15.293065 kubelet[2732]: E0716 00:01:15.292994 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 16 00:01:15.293194 kubelet[2732]: E0716 00:01:15.293169 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 16 00:01:15.293194 kubelet[2732]: W0716 00:01:15.293182 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 16 00:01:15.293194 kubelet[2732]: E0716 00:01:15.293192 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 16 00:01:15.731067 containerd[1557]: time="2025-07-16T00:01:15.731012780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:15.731721 containerd[1557]: time="2025-07-16T00:01:15.731697637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 16 00:01:15.732862 containerd[1557]: time="2025-07-16T00:01:15.732835690Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:15.734932 containerd[1557]: time="2025-07-16T00:01:15.734891514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:15.735388 containerd[1557]: time="2025-07-16T00:01:15.735351342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.468442162s"
Jul 16 00:01:15.735455 containerd[1557]: time="2025-07-16T00:01:15.735396451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 16 00:01:15.737087 containerd[1557]: time="2025-07-16T00:01:15.737046583Z" level=info msg="CreateContainer within sandbox \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 16 00:01:15.744449 containerd[1557]: time="2025-07-16T00:01:15.744410143Z" level=info msg="Container db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:15.753612 containerd[1557]: time="2025-07-16T00:01:15.753570141Z" level=info msg="CreateContainer within sandbox \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\""
Jul 16 00:01:15.754047 containerd[1557]: time="2025-07-16T00:01:15.754016162Z" level=info msg="StartContainer for \"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\""
Jul 16 00:01:15.755300 containerd[1557]: time="2025-07-16T00:01:15.755274771Z" level=info msg="connecting to shim db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d" address="unix:///run/containerd/s/22d320c2ff9abaac6581641870a60ef1bf5d6f888c71aba934cca9995ba6a387" protocol=ttrpc version=3
Jul 16 00:01:15.780545 systemd[1]: Started cri-containerd-db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d.scope - libcontainer container db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d.
Jul 16 00:01:15.824260 containerd[1557]: time="2025-07-16T00:01:15.824220462Z" level=info msg="StartContainer for \"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\" returns successfully"
Jul 16 00:01:15.832651 systemd[1]: cri-containerd-db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d.scope: Deactivated successfully.
Jul 16 00:01:15.834239 containerd[1557]: time="2025-07-16T00:01:15.834207144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\" id:\"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\" pid:3413 exited_at:{seconds:1752624075 nanos:833808947}"
Jul 16 00:01:15.834349 containerd[1557]: time="2025-07-16T00:01:15.834253475Z" level=info msg="received exit event container_id:\"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\" id:\"db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d\" pid:3413 exited_at:{seconds:1752624075 nanos:833808947}"
Jul 16 00:01:15.857286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db6bf27e88eecd24f2eefc4b3256c5e176ee9baa732f0cb682e7b0d4e0a14f2d-rootfs.mount: Deactivated successfully.
Jul 16 00:01:16.192319 kubelet[2732]: E0716 00:01:16.191829 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527"
Jul 16 00:01:16.242424 kubelet[2732]: I0716 00:01:16.242362 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 16 00:01:17.246351 containerd[1557]: time="2025-07-16T00:01:17.246304279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 16 00:01:18.191850 kubelet[2732]: E0716 00:01:18.191777 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527"
Jul 16 00:01:19.807844 containerd[1557]: time="2025-07-16T00:01:19.807776097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:19.832275 containerd[1557]: time="2025-07-16T00:01:19.832191714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 16 00:01:19.835149 containerd[1557]: time="2025-07-16T00:01:19.835097093Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:19.841045 containerd[1557]: time="2025-07-16T00:01:19.840998288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:19.841881 containerd[1557]: time="2025-07-16T00:01:19.841836778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.595492212s"
Jul 16 00:01:19.841920 containerd[1557]: time="2025-07-16T00:01:19.841883469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 16 00:01:19.844233 containerd[1557]: time="2025-07-16T00:01:19.844191056Z" level=info msg="CreateContainer within sandbox \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 16 00:01:19.860073 containerd[1557]: time="2025-07-16T00:01:19.860024955Z" level=info msg="Container b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:19.870794 containerd[1557]: time="2025-07-16T00:01:19.870715993Z" level=info msg="CreateContainer within sandbox \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\""
Jul 16 00:01:19.871284 containerd[1557]: time="2025-07-16T00:01:19.871234962Z" level=info msg="StartContainer for \"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\""
Jul 16 00:01:19.872678 containerd[1557]: time="2025-07-16T00:01:19.872651156Z" level=info msg="connecting to shim b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8" address="unix:///run/containerd/s/22d320c2ff9abaac6581641870a60ef1bf5d6f888c71aba934cca9995ba6a387" protocol=ttrpc version=3
Jul 16 00:01:19.894544 systemd[1]: Started cri-containerd-b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8.scope - libcontainer container b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8.
Jul 16 00:01:19.944331 containerd[1557]: time="2025-07-16T00:01:19.944276679Z" level=info msg="StartContainer for \"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\" returns successfully"
Jul 16 00:01:20.192286 kubelet[2732]: E0716 00:01:20.192148 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527"
Jul 16 00:01:22.192122 kubelet[2732]: E0716 00:01:22.192042 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527"
Jul 16 00:01:22.674249 systemd[1]: cri-containerd-b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8.scope: Deactivated successfully.
Jul 16 00:01:22.674641 systemd[1]: cri-containerd-b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8.scope: Consumed 603ms CPU time, 178.8M memory peak, 3.6M read from disk, 171.2M written to disk.
Jul 16 00:01:22.676235 containerd[1557]: time="2025-07-16T00:01:22.676194170Z" level=info msg="received exit event container_id:\"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\" id:\"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\" pid:3473 exited_at:{seconds:1752624082 nanos:675972400}"
Jul 16 00:01:22.676554 containerd[1557]: time="2025-07-16T00:01:22.676329583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\" id:\"b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8\" pid:3473 exited_at:{seconds:1752624082 nanos:675972400}"
Jul 16 00:01:22.698758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1824ef7c44446aa9d515b06e18100e7ad6822f531ccb76038f43faf2efff8b8-rootfs.mount: Deactivated successfully.
Jul 16 00:01:22.738860 kubelet[2732]: I0716 00:01:22.738787 2732 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 16 00:01:22.973166 systemd[1]: Created slice kubepods-burstable-pod2dce6a97_1994_4607_8001_57db1c289cfe.slice - libcontainer container kubepods-burstable-pod2dce6a97_1994_4607_8001_57db1c289cfe.slice.
Jul 16 00:01:22.980956 systemd[1]: Created slice kubepods-besteffort-pod7f96544d_bf37_4222_8f93_04881ef8f695.slice - libcontainer container kubepods-besteffort-pod7f96544d_bf37_4222_8f93_04881ef8f695.slice.
Jul 16 00:01:22.986265 systemd[1]: Created slice kubepods-besteffort-podb92c213a_87e5_4f37_acbd_f00be20d467c.slice - libcontainer container kubepods-besteffort-podb92c213a_87e5_4f37_acbd_f00be20d467c.slice.
Jul 16 00:01:22.992235 systemd[1]: Created slice kubepods-besteffort-pod4648f8e9_0227_4352_9e38_bf61552be348.slice - libcontainer container kubepods-besteffort-pod4648f8e9_0227_4352_9e38_bf61552be348.slice.
Jul 16 00:01:22.998563 systemd[1]: Created slice kubepods-burstable-podd9be61b1_3e8f_41ca_90d3_f0c97043f509.slice - libcontainer container kubepods-burstable-podd9be61b1_3e8f_41ca_90d3_f0c97043f509.slice.
Jul 16 00:01:23.004800 systemd[1]: Created slice kubepods-besteffort-poda083ba31_ad69_4526_9754_fc342add8585.slice - libcontainer container kubepods-besteffort-poda083ba31_ad69_4526_9754_fc342add8585.slice.
Jul 16 00:01:23.008983 systemd[1]: Created slice kubepods-besteffort-poda7ace4d2_b2f1_4624_9e8a_f594c55a6acb.slice - libcontainer container kubepods-besteffort-poda7ace4d2_b2f1_4624_9e8a_f594c55a6acb.slice.
Jul 16 00:01:23.152925 kubelet[2732]: I0716 00:01:23.152825 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9be61b1-3e8f-41ca-90d3-f0c97043f509-config-volume\") pod \"coredns-7c65d6cfc9-dhvwk\" (UID: \"d9be61b1-3e8f-41ca-90d3-f0c97043f509\") " pod="kube-system/coredns-7c65d6cfc9-dhvwk"
Jul 16 00:01:23.152925 kubelet[2732]: I0716 00:01:23.152910 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a083ba31-ad69-4526-9754-fc342add8585-goldmane-key-pair\") pod \"goldmane-58fd7646b9-7k7zr\" (UID: \"a083ba31-ad69-4526-9754-fc342add8585\") " pod="calico-system/goldmane-58fd7646b9-7k7zr"
Jul 16 00:01:23.152925 kubelet[2732]: I0716 00:01:23.152941 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cjqg\" (UniqueName: \"kubernetes.io/projected/b92c213a-87e5-4f37-acbd-f00be20d467c-kube-api-access-7cjqg\") pod \"calico-apiserver-768976f5fc-8jddx\" (UID: \"b92c213a-87e5-4f37-acbd-f00be20d467c\") " pod="calico-apiserver/calico-apiserver-768976f5fc-8jddx"
Jul 16 00:01:23.153186 kubelet[2732]: I0716 00:01:23.152963 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a083ba31-ad69-4526-9754-fc342add8585-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-7k7zr\" (UID: \"a083ba31-ad69-4526-9754-fc342add8585\") " pod="calico-system/goldmane-58fd7646b9-7k7zr"
Jul 16 00:01:23.153186 kubelet[2732]: I0716 00:01:23.152990 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4648f8e9-0227-4352-9e38-bf61552be348-calico-apiserver-certs\") pod \"calico-apiserver-768976f5fc-nds6v\" (UID: \"4648f8e9-0227-4352-9e38-bf61552be348\") " pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v"
Jul 16 00:01:23.153186 kubelet[2732]: I0716 00:01:23.153057 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f96544d-bf37-4222-8f93-04881ef8f695-tigera-ca-bundle\") pod \"calico-kube-controllers-7bdb5c84bc-cnbfc\" (UID: \"7f96544d-bf37-4222-8f93-04881ef8f695\") " pod="calico-system/calico-kube-controllers-7bdb5c84bc-cnbfc"
Jul 16 00:01:23.153186 kubelet[2732]: I0716 00:01:23.153161 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvg6c\" (UniqueName: \"kubernetes.io/projected/a083ba31-ad69-4526-9754-fc342add8585-kube-api-access-qvg6c\") pod \"goldmane-58fd7646b9-7k7zr\" (UID: \"a083ba31-ad69-4526-9754-fc342add8585\") " pod="calico-system/goldmane-58fd7646b9-7k7zr"
Jul 16 00:01:23.153332 kubelet[2732]: I0716 00:01:23.153235 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjpzs\" (UniqueName: \"kubernetes.io/projected/2dce6a97-1994-4607-8001-57db1c289cfe-kube-api-access-gjpzs\") pod \"coredns-7c65d6cfc9-hw98k\" (UID: \"2dce6a97-1994-4607-8001-57db1c289cfe\") " pod="kube-system/coredns-7c65d6cfc9-hw98k"
Jul 16 00:01:23.153332 kubelet[2732]: I0716 00:01:23.153289 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dce6a97-1994-4607-8001-57db1c289cfe-config-volume\") pod \"coredns-7c65d6cfc9-hw98k\" (UID: \"2dce6a97-1994-4607-8001-57db1c289cfe\") " pod="kube-system/coredns-7c65d6cfc9-hw98k"
Jul 16 00:01:23.153332 kubelet[2732]: I0716 00:01:23.153306 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a083ba31-ad69-4526-9754-fc342add8585-config\") pod \"goldmane-58fd7646b9-7k7zr\" (UID: \"a083ba31-ad69-4526-9754-fc342add8585\") " pod="calico-system/goldmane-58fd7646b9-7k7zr"
Jul 16 00:01:23.153332 kubelet[2732]: I0716 00:01:23.153322 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b92c213a-87e5-4f37-acbd-f00be20d467c-calico-apiserver-certs\") pod \"calico-apiserver-768976f5fc-8jddx\" (UID: \"b92c213a-87e5-4f37-acbd-f00be20d467c\") " pod="calico-apiserver/calico-apiserver-768976f5fc-8jddx"
Jul 16 00:01:23.153483 kubelet[2732]: I0716 00:01:23.153336 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs4jh\" (UniqueName: \"kubernetes.io/projected/d9be61b1-3e8f-41ca-90d3-f0c97043f509-kube-api-access-rs4jh\") pod \"coredns-7c65d6cfc9-dhvwk\" (UID: \"d9be61b1-3e8f-41ca-90d3-f0c97043f509\") " pod="kube-system/coredns-7c65d6cfc9-dhvwk"
Jul 16 00:01:23.153483 kubelet[2732]: I0716 00:01:23.153363 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-backend-key-pair\") pod \"whisker-7999b996b-9mzsq\" (UID: \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\") " pod="calico-system/whisker-7999b996b-9mzsq"
Jul 16 00:01:23.153483 kubelet[2732]: I0716 00:01:23.153405 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-ca-bundle\") pod \"whisker-7999b996b-9mzsq\" (UID: \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\") " pod="calico-system/whisker-7999b996b-9mzsq"
Jul 16 00:01:23.153483 kubelet[2732]: I0716 00:01:23.153454 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84rm\" (UniqueName: \"kubernetes.io/projected/7f96544d-bf37-4222-8f93-04881ef8f695-kube-api-access-q84rm\") pod \"calico-kube-controllers-7bdb5c84bc-cnbfc\" (UID: \"7f96544d-bf37-4222-8f93-04881ef8f695\") " pod="calico-system/calico-kube-controllers-7bdb5c84bc-cnbfc"
Jul 16 00:01:23.153483 kubelet[2732]: I0716 00:01:23.153471 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwm9m\" (UniqueName: \"kubernetes.io/projected/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-kube-api-access-wwm9m\") pod \"whisker-7999b996b-9mzsq\" (UID: \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\") " pod="calico-system/whisker-7999b996b-9mzsq"
Jul 16 00:01:23.153637 kubelet[2732]: I0716 00:01:23.153493 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6twn8\" (UniqueName: \"kubernetes.io/projected/4648f8e9-0227-4352-9e38-bf61552be348-kube-api-access-6twn8\") pod \"calico-apiserver-768976f5fc-nds6v\" (UID: \"4648f8e9-0227-4352-9e38-bf61552be348\") " pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v"
Jul 16 00:01:23.264893 containerd[1557]: time="2025-07-16T00:01:23.264847103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 16 00:01:23.295602 containerd[1557]: time="2025-07-16T00:01:23.295548702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-nds6v,Uid:4648f8e9-0227-4352-9e38-bf61552be348,Namespace:calico-apiserver,Attempt:0,}"
Jul 16 00:01:23.304733 containerd[1557]: time="2025-07-16T00:01:23.304660321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhvwk,Uid:d9be61b1-3e8f-41ca-90d3-f0c97043f509,Namespace:kube-system,Attempt:0,}"
Jul 16 00:01:23.309104 containerd[1557]: time="2025-07-16T00:01:23.309072381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-7k7zr,Uid:a083ba31-ad69-4526-9754-fc342add8585,Namespace:calico-system,Attempt:0,}"
Jul 16 00:01:23.312191 containerd[1557]: time="2025-07-16T00:01:23.312134235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7999b996b-9mzsq,Uid:a7ace4d2-b2f1-4624-9e8a-f594c55a6acb,Namespace:calico-system,Attempt:0,}"
Jul 16 00:01:23.400427 containerd[1557]: time="2025-07-16T00:01:23.399648998Z" level=error msg="Failed to destroy network for sandbox \"4c0807d607e9cdbea541f2c89a53dc92f9865b3d1f556ae020445c0bd4f095e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.404192 containerd[1557]: time="2025-07-16T00:01:23.404135243Z" level=error msg="Failed to destroy network for sandbox \"d4fc7820bdadbf6519d5e2dcd90edebf9090085e9ec201a2bcc0d4739195323f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.407312 containerd[1557]: time="2025-07-16T00:01:23.407154904Z" level=error msg="Failed to destroy network for sandbox \"4a53f4ab1116652219a39c1b73ee13e2a0e589ee0c79a5a561d8b11b7b4dc149\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.410447 containerd[1557]: time="2025-07-16T00:01:23.410369945Z" level=error msg="Failed to destroy network for sandbox \"48ff83af7a5d1351c6fbc4c7f7063549a9b64acd5ac24a524e9a90c2f05f9171\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.453689 containerd[1557]: time="2025-07-16T00:01:23.453595025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhvwk,Uid:d9be61b1-3e8f-41ca-90d3-f0c97043f509,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0807d607e9cdbea541f2c89a53dc92f9865b3d1f556ae020445c0bd4f095e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.453902 containerd[1557]: time="2025-07-16T00:01:23.453626667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7999b996b-9mzsq,Uid:a7ace4d2-b2f1-4624-9e8a-f594c55a6acb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff83af7a5d1351c6fbc4c7f7063549a9b64acd5ac24a524e9a90c2f05f9171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.453902 containerd[1557]: time="2025-07-16T00:01:23.453636025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-nds6v,Uid:4648f8e9-0227-4352-9e38-bf61552be348,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53f4ab1116652219a39c1b73ee13e2a0e589ee0c79a5a561d8b11b7b4dc149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.453993 containerd[1557]: time="2025-07-16T00:01:23.453647136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-7k7zr,Uid:a083ba31-ad69-4526-9754-fc342add8585,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4fc7820bdadbf6519d5e2dcd90edebf9090085e9ec201a2bcc0d4739195323f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.466732 kubelet[2732]: E0716 00:01:23.466547 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0807d607e9cdbea541f2c89a53dc92f9865b3d1f556ae020445c0bd4f095e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.466732 kubelet[2732]: E0716 00:01:23.466575 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53f4ab1116652219a39c1b73ee13e2a0e589ee0c79a5a561d8b11b7b4dc149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.466732 kubelet[2732]: E0716 00:01:23.466640 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0807d607e9cdbea541f2c89a53dc92f9865b3d1f556ae020445c0bd4f095e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dhvwk"
Jul 16 00:01:23.466732 kubelet[2732]: E0716 00:01:23.466660 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0807d607e9cdbea541f2c89a53dc92f9865b3d1f556ae020445c0bd4f095e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dhvwk"
Jul 16 00:01:23.467344 kubelet[2732]: E0716 00:01:23.466545 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4fc7820bdadbf6519d5e2dcd90edebf9090085e9ec201a2bcc0d4739195323f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.467344 kubelet[2732]: E0716 00:01:23.466681 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53f4ab1116652219a39c1b73ee13e2a0e589ee0c79a5a561d8b11b7b4dc149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v"
Jul 16 00:01:23.467344 kubelet[2732]: E0716 00:01:23.466707 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53f4ab1116652219a39c1b73ee13e2a0e589ee0c79a5a561d8b11b7b4dc149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v"
Jul 16 00:01:23.467344 kubelet[2732]: E0716 00:01:23.466730 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4fc7820bdadbf6519d5e2dcd90edebf9090085e9ec201a2bcc0d4739195323f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-7k7zr"
Jul 16 00:01:23.467506 kubelet[2732]: E0716 00:01:23.466750 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4fc7820bdadbf6519d5e2dcd90edebf9090085e9ec201a2bcc0d4739195323f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-7k7zr"
Jul 16 00:01:23.467506 kubelet[2732]: E0716 00:01:23.466762 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768976f5fc-nds6v_calico-apiserver(4648f8e9-0227-4352-9e38-bf61552be348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768976f5fc-nds6v_calico-apiserver(4648f8e9-0227-4352-9e38-bf61552be348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a53f4ab1116652219a39c1b73ee13e2a0e589ee0c79a5a561d8b11b7b4dc149\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v" podUID="4648f8e9-0227-4352-9e38-bf61552be348"
Jul 16 00:01:23.467506 kubelet[2732]: E0716 00:01:23.466547 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff83af7a5d1351c6fbc4c7f7063549a9b64acd5ac24a524e9a90c2f05f9171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.467640 kubelet[2732]: E0716 00:01:23.466701 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dhvwk_kube-system(d9be61b1-3e8f-41ca-90d3-f0c97043f509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dhvwk_kube-system(d9be61b1-3e8f-41ca-90d3-f0c97043f509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c0807d607e9cdbea541f2c89a53dc92f9865b3d1f556ae020445c0bd4f095e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dhvwk" podUID="d9be61b1-3e8f-41ca-90d3-f0c97043f509"
Jul 16 00:01:23.467640 kubelet[2732]: E0716 00:01:23.466824 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff83af7a5d1351c6fbc4c7f7063549a9b64acd5ac24a524e9a90c2f05f9171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7999b996b-9mzsq"
Jul 16 00:01:23.467640 kubelet[2732]: E0716 00:01:23.466842 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48ff83af7a5d1351c6fbc4c7f7063549a9b64acd5ac24a524e9a90c2f05f9171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7999b996b-9mzsq"
Jul 16 00:01:23.467760 kubelet[2732]: E0716 00:01:23.466882 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7999b996b-9mzsq_calico-system(a7ace4d2-b2f1-4624-9e8a-f594c55a6acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7999b996b-9mzsq_calico-system(a7ace4d2-b2f1-4624-9e8a-f594c55a6acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48ff83af7a5d1351c6fbc4c7f7063549a9b64acd5ac24a524e9a90c2f05f9171\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7999b996b-9mzsq" podUID="a7ace4d2-b2f1-4624-9e8a-f594c55a6acb"
Jul 16 00:01:23.467760 kubelet[2732]: E0716 00:01:23.466792 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-7k7zr_calico-system(a083ba31-ad69-4526-9754-fc342add8585)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-7k7zr_calico-system(a083ba31-ad69-4526-9754-fc342add8585)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4fc7820bdadbf6519d5e2dcd90edebf9090085e9ec201a2bcc0d4739195323f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-7k7zr" podUID="a083ba31-ad69-4526-9754-fc342add8585"
Jul 16 00:01:23.579261 containerd[1557]: time="2025-07-16T00:01:23.579079404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hw98k,Uid:2dce6a97-1994-4607-8001-57db1c289cfe,Namespace:kube-system,Attempt:0,}"
Jul 16 00:01:23.584395 containerd[1557]: time="2025-07-16T00:01:23.584285152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bdb5c84bc-cnbfc,Uid:7f96544d-bf37-4222-8f93-04881ef8f695,Namespace:calico-system,Attempt:0,}"
Jul 16 00:01:23.589637 containerd[1557]: time="2025-07-16T00:01:23.589596686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-8jddx,Uid:b92c213a-87e5-4f37-acbd-f00be20d467c,Namespace:calico-apiserver,Attempt:0,}"
Jul 16 00:01:23.633300 containerd[1557]: time="2025-07-16T00:01:23.633184639Z" level=error msg="Failed to destroy network for sandbox \"1ad8ea0bf3c2025b3fe83d761afd2098c32ab73e1a9e2feeb6e403224a9d4371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.639950 containerd[1557]: time="2025-07-16T00:01:23.639891778Z" level=error msg="Failed to destroy network for sandbox \"60c499fb107ff094936e34ca59a8496663d92e51a0d73038694923098662bc4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.640780 containerd[1557]: time="2025-07-16T00:01:23.640683412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hw98k,Uid:2dce6a97-1994-4607-8001-57db1c289cfe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad8ea0bf3c2025b3fe83d761afd2098c32ab73e1a9e2feeb6e403224a9d4371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.641139 kubelet[2732]: E0716 00:01:23.641101 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad8ea0bf3c2025b3fe83d761afd2098c32ab73e1a9e2feeb6e403224a9d4371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.641211 kubelet[2732]: E0716 00:01:23.641188 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad8ea0bf3c2025b3fe83d761afd2098c32ab73e1a9e2feeb6e403224a9d4371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hw98k"
Jul 16 00:01:23.641257 kubelet[2732]: E0716 00:01:23.641212 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad8ea0bf3c2025b3fe83d761afd2098c32ab73e1a9e2feeb6e403224a9d4371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hw98k"
Jul 16 00:01:23.641334 kubelet[2732]: E0716 00:01:23.641291 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hw98k_kube-system(2dce6a97-1994-4607-8001-57db1c289cfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hw98k_kube-system(2dce6a97-1994-4607-8001-57db1c289cfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ad8ea0bf3c2025b3fe83d761afd2098c32ab73e1a9e2feeb6e403224a9d4371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hw98k" podUID="2dce6a97-1994-4607-8001-57db1c289cfe"
Jul 16 00:01:23.642410 containerd[1557]: time="2025-07-16T00:01:23.642358258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bdb5c84bc-cnbfc,Uid:7f96544d-bf37-4222-8f93-04881ef8f695,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c499fb107ff094936e34ca59a8496663d92e51a0d73038694923098662bc4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.642687 kubelet[2732]: E0716 00:01:23.642647 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c499fb107ff094936e34ca59a8496663d92e51a0d73038694923098662bc4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 16 00:01:23.642855 kubelet[2732]: E0716 00:01:23.642698 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c499fb107ff094936e34ca59a8496663d92e51a0d73038694923098662bc4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bdb5c84bc-cnbfc"
Jul 16 00:01:23.642855 kubelet[2732]: E0716 00:01:23.642718 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"60c499fb107ff094936e34ca59a8496663d92e51a0d73038694923098662bc4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bdb5c84bc-cnbfc" Jul 16 00:01:23.642855 kubelet[2732]: E0716 00:01:23.642784 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bdb5c84bc-cnbfc_calico-system(7f96544d-bf37-4222-8f93-04881ef8f695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bdb5c84bc-cnbfc_calico-system(7f96544d-bf37-4222-8f93-04881ef8f695)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60c499fb107ff094936e34ca59a8496663d92e51a0d73038694923098662bc4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bdb5c84bc-cnbfc" podUID="7f96544d-bf37-4222-8f93-04881ef8f695" Jul 16 00:01:23.649589 containerd[1557]: time="2025-07-16T00:01:23.649540637Z" level=error msg="Failed to destroy network for sandbox \"90ea2ac763cd3d8fdffb1edce2151e836c1e9924373f18843b5a2ec3ae868606\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:23.650947 containerd[1557]: time="2025-07-16T00:01:23.650908999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-8jddx,Uid:b92c213a-87e5-4f37-acbd-f00be20d467c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90ea2ac763cd3d8fdffb1edce2151e836c1e9924373f18843b5a2ec3ae868606\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:23.651138 kubelet[2732]: E0716 00:01:23.651104 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90ea2ac763cd3d8fdffb1edce2151e836c1e9924373f18843b5a2ec3ae868606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:23.651183 kubelet[2732]: E0716 00:01:23.651155 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90ea2ac763cd3d8fdffb1edce2151e836c1e9924373f18843b5a2ec3ae868606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768976f5fc-8jddx" Jul 16 00:01:23.651230 kubelet[2732]: E0716 00:01:23.651175 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90ea2ac763cd3d8fdffb1edce2151e836c1e9924373f18843b5a2ec3ae868606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768976f5fc-8jddx" Jul 16 00:01:23.651254 kubelet[2732]: E0716 00:01:23.651228 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768976f5fc-8jddx_calico-apiserver(b92c213a-87e5-4f37-acbd-f00be20d467c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-768976f5fc-8jddx_calico-apiserver(b92c213a-87e5-4f37-acbd-f00be20d467c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90ea2ac763cd3d8fdffb1edce2151e836c1e9924373f18843b5a2ec3ae868606\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768976f5fc-8jddx" podUID="b92c213a-87e5-4f37-acbd-f00be20d467c" Jul 16 00:01:24.199455 systemd[1]: Created slice kubepods-besteffort-podc4b970f5_c85f_469e_a5f5_523ed3eaf527.slice - libcontainer container kubepods-besteffort-podc4b970f5_c85f_469e_a5f5_523ed3eaf527.slice. Jul 16 00:01:24.202177 containerd[1557]: time="2025-07-16T00:01:24.202140446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwvbq,Uid:c4b970f5-c85f-469e-a5f5-523ed3eaf527,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:24.258561 containerd[1557]: time="2025-07-16T00:01:24.258498821Z" level=error msg="Failed to destroy network for sandbox \"f670f0d4e575d49abdfb98e8a9f7490461ef45a7d680441d358270af893bad68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:24.260504 containerd[1557]: time="2025-07-16T00:01:24.260429991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwvbq,Uid:c4b970f5-c85f-469e-a5f5-523ed3eaf527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f670f0d4e575d49abdfb98e8a9f7490461ef45a7d680441d358270af893bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:24.260997 kubelet[2732]: E0716 00:01:24.260946 
2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f670f0d4e575d49abdfb98e8a9f7490461ef45a7d680441d358270af893bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:24.261111 kubelet[2732]: E0716 00:01:24.261017 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f670f0d4e575d49abdfb98e8a9f7490461ef45a7d680441d358270af893bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:24.261111 kubelet[2732]: E0716 00:01:24.261040 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f670f0d4e575d49abdfb98e8a9f7490461ef45a7d680441d358270af893bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wwvbq" Jul 16 00:01:24.261111 kubelet[2732]: E0716 00:01:24.261095 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wwvbq_calico-system(c4b970f5-c85f-469e-a5f5-523ed3eaf527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wwvbq_calico-system(c4b970f5-c85f-469e-a5f5-523ed3eaf527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f670f0d4e575d49abdfb98e8a9f7490461ef45a7d680441d358270af893bad68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wwvbq" podUID="c4b970f5-c85f-469e-a5f5-523ed3eaf527" Jul 16 00:01:24.261292 systemd[1]: run-netns-cni\x2df32c41bb\x2da7c6\x2d4397\x2dad46\x2d18590aa300f5.mount: Deactivated successfully. Jul 16 00:01:32.258411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581154109.mount: Deactivated successfully. Jul 16 00:01:33.949398 containerd[1557]: time="2025-07-16T00:01:33.949235133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:01:33.951473 containerd[1557]: time="2025-07-16T00:01:33.951426917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 16 00:01:33.953253 containerd[1557]: time="2025-07-16T00:01:33.953151201Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:01:33.956571 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:52724.service - OpenSSH per-connection server daemon (10.0.0.1:52724). 
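Every sandbox failure above reports the same underlying error: a `stat` of `/var/lib/calico/nodename`, a file that calico/node writes only once it is running with `/var/lib/calico/` mounted. A minimal shell sketch of that check (the helper name is ours; the path comes straight from the log) is:

```shell
# Hedged sketch of the check behind the repeated CNI errors above: the
# Calico plugin stats /var/lib/calico/nodename, which calico/node writes
# once it is running with /var/lib/calico/ mounted on the host.
# check_calico_nodename is a hypothetical helper, not a Calico command.
check_calico_nodename() {
  f="${1:-/var/lib/calico/nodename}"
  if [ -f "$f" ]; then
    echo "ok: $(cat "$f")"
  else
    echo "missing: $f"
  fi
}
```

Once calico/node starts (as it does later in this log), the file appears and sandbox creation succeeds on retry.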
Jul 16 00:01:33.956990 containerd[1557]: time="2025-07-16T00:01:33.956777221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:01:33.957913 containerd[1557]: time="2025-07-16T00:01:33.957100894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.692199917s" Jul 16 00:01:33.957913 containerd[1557]: time="2025-07-16T00:01:33.957165750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 16 00:01:33.974701 containerd[1557]: time="2025-07-16T00:01:33.974426249Z" level=info msg="CreateContainer within sandbox \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 16 00:01:34.064274 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 52724 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:01:34.066055 sshd-session[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:34.071039 systemd-logind[1537]: New session 8 of user core. Jul 16 00:01:34.081623 systemd[1]: Started session-8.scope - Session 8 of User core. 
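As a quick cross-check of the pull stats logged above (repo size 158500025 bytes pulled in 10.692199917s), the effective throughput works out to roughly:

```shell
# Back-of-the-envelope throughput for the calico/node image pull logged
# above; both numbers are copied from the log line (MiB = 1048576 bytes).
size_bytes=158500025
secs=10.692199917
awk -v b="$size_bytes" -v s="$secs" 'BEGIN { printf "%.1f MiB/s\n", b / s / 1048576 }'
```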
Jul 16 00:01:34.193243 containerd[1557]: time="2025-07-16T00:01:34.193157895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-nds6v,Uid:4648f8e9-0227-4352-9e38-bf61552be348,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:01:34.196523 containerd[1557]: time="2025-07-16T00:01:34.193482741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7999b996b-9mzsq,Uid:a7ace4d2-b2f1-4624-9e8a-f594c55a6acb,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:34.205909 containerd[1557]: time="2025-07-16T00:01:34.205811505Z" level=info msg="Container 4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:34.212141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228955583.mount: Deactivated successfully. Jul 16 00:01:34.307817 containerd[1557]: time="2025-07-16T00:01:34.307675598Z" level=info msg="CreateContainer within sandbox \"f2d0843ff3ed6bfdcc8fcac87b212f26ce1e6c1b37b76c98083e0fdd9ca4c836\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\"" Jul 16 00:01:34.309247 containerd[1557]: time="2025-07-16T00:01:34.308569381Z" level=info msg="StartContainer for \"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\"" Jul 16 00:01:34.318742 containerd[1557]: time="2025-07-16T00:01:34.317890551Z" level=info msg="connecting to shim 4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04" address="unix:///run/containerd/s/22d320c2ff9abaac6581641870a60ef1bf5d6f888c71aba934cca9995ba6a387" protocol=ttrpc version=3 Jul 16 00:01:34.343551 sshd[3790]: Connection closed by 10.0.0.1 port 52724 Jul 16 00:01:34.344280 sshd-session[3786]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:34.349641 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:52724.service: Deactivated successfully. 
Jul 16 00:01:34.353052 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 00:01:34.357233 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit. Jul 16 00:01:34.359175 systemd-logind[1537]: Removed session 8. Jul 16 00:01:34.370109 containerd[1557]: time="2025-07-16T00:01:34.370053870Z" level=error msg="Failed to destroy network for sandbox \"d7eea3070dd1b22ab3117c7e326a9bd682aa9e1eb06ed91ed7815529fef43222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:34.371231 containerd[1557]: time="2025-07-16T00:01:34.371151745Z" level=error msg="Failed to destroy network for sandbox \"a87bcbf55b6c502a70ba967cd75e7a25591847a0a71c189c1d49dff515706c70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:34.371592 containerd[1557]: time="2025-07-16T00:01:34.371548690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7999b996b-9mzsq,Uid:a7ace4d2-b2f1-4624-9e8a-f594c55a6acb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eea3070dd1b22ab3117c7e326a9bd682aa9e1eb06ed91ed7815529fef43222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:34.371810 kubelet[2732]: E0716 00:01:34.371765 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eea3070dd1b22ab3117c7e326a9bd682aa9e1eb06ed91ed7815529fef43222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 16 00:01:34.372259 kubelet[2732]: E0716 00:01:34.371848 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eea3070dd1b22ab3117c7e326a9bd682aa9e1eb06ed91ed7815529fef43222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7999b996b-9mzsq" Jul 16 00:01:34.372259 kubelet[2732]: E0716 00:01:34.371871 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eea3070dd1b22ab3117c7e326a9bd682aa9e1eb06ed91ed7815529fef43222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7999b996b-9mzsq" Jul 16 00:01:34.372920 containerd[1557]: time="2025-07-16T00:01:34.372515303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-nds6v,Uid:4648f8e9-0227-4352-9e38-bf61552be348,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87bcbf55b6c502a70ba967cd75e7a25591847a0a71c189c1d49dff515706c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:34.373107 kubelet[2732]: E0716 00:01:34.372747 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7999b996b-9mzsq_calico-system(a7ace4d2-b2f1-4624-9e8a-f594c55a6acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-7999b996b-9mzsq_calico-system(a7ace4d2-b2f1-4624-9e8a-f594c55a6acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7eea3070dd1b22ab3117c7e326a9bd682aa9e1eb06ed91ed7815529fef43222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7999b996b-9mzsq" podUID="a7ace4d2-b2f1-4624-9e8a-f594c55a6acb" Jul 16 00:01:34.372712 systemd[1]: Started cri-containerd-4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04.scope - libcontainer container 4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04. Jul 16 00:01:34.373404 kubelet[2732]: E0716 00:01:34.372767 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87bcbf55b6c502a70ba967cd75e7a25591847a0a71c189c1d49dff515706c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:01:34.373609 kubelet[2732]: E0716 00:01:34.373561 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87bcbf55b6c502a70ba967cd75e7a25591847a0a71c189c1d49dff515706c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v" Jul 16 00:01:34.373609 kubelet[2732]: E0716 00:01:34.373602 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a87bcbf55b6c502a70ba967cd75e7a25591847a0a71c189c1d49dff515706c70\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v" Jul 16 00:01:34.373721 kubelet[2732]: E0716 00:01:34.373664 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768976f5fc-nds6v_calico-apiserver(4648f8e9-0227-4352-9e38-bf61552be348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768976f5fc-nds6v_calico-apiserver(4648f8e9-0227-4352-9e38-bf61552be348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a87bcbf55b6c502a70ba967cd75e7a25591847a0a71c189c1d49dff515706c70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v" podUID="4648f8e9-0227-4352-9e38-bf61552be348" Jul 16 00:01:34.419888 containerd[1557]: time="2025-07-16T00:01:34.419832484Z" level=info msg="StartContainer for \"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\" returns successfully" Jul 16 00:01:34.494253 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 16 00:01:34.494921 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 16 00:01:34.865604 kubelet[2732]: I0716 00:01:34.865523 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:01:34.964806 systemd[1]: run-netns-cni\x2d3795d209\x2d7d0a\x2d473c\x2dfd49\x2d6b35debcebeb.mount: Deactivated successfully. 
Jul 16 00:01:35.192704 containerd[1557]: time="2025-07-16T00:01:35.192424025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hw98k,Uid:2dce6a97-1994-4607-8001-57db1c289cfe,Namespace:kube-system,Attempt:0,}" Jul 16 00:01:35.192704 containerd[1557]: time="2025-07-16T00:01:35.192470115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bdb5c84bc-cnbfc,Uid:7f96544d-bf37-4222-8f93-04881ef8f695,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:35.369137 systemd-networkd[1457]: cali4d7c451f7a9: Link UP Jul 16 00:01:35.369517 systemd-networkd[1457]: cali4d7c451f7a9: Gained carrier Jul 16 00:01:35.380817 kubelet[2732]: I0716 00:01:35.379935 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-npdx2" podStartSLOduration=2.713709527 podStartE2EDuration="24.379813353s" podCreationTimestamp="2025-07-16 00:01:11 +0000 UTC" firstStartedPulling="2025-07-16 00:01:12.292465249 +0000 UTC m=+16.266224784" lastFinishedPulling="2025-07-16 00:01:33.958569085 +0000 UTC m=+37.932328610" observedRunningTime="2025-07-16 00:01:35.312780739 +0000 UTC m=+39.286540284" watchObservedRunningTime="2025-07-16 00:01:35.379813353 +0000 UTC m=+39.353572888" Jul 16 00:01:35.384513 containerd[1557]: 2025-07-16 00:01:35.223 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:01:35.384513 containerd[1557]: 2025-07-16 00:01:35.240 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0 coredns-7c65d6cfc9- kube-system 2dce6a97-1994-4607-8001-57db1c289cfe 819 0 2025-07-16 00:01:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hw98k eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali4d7c451f7a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-" Jul 16 00:01:35.384513 containerd[1557]: 2025-07-16 00:01:35.240 [INFO][3934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.384513 containerd[1557]: 2025-07-16 00:01:35.321 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" HandleID="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Workload="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.323 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" HandleID="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Workload="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000475ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hw98k", "timestamp":"2025-07-16 00:01:35.321511896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.323 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.323 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.323 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.330 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" host="localhost" Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.336 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.340 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.342 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.344 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:35.384805 containerd[1557]: 2025-07-16 00:01:35.344 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" host="localhost" Jul 16 00:01:35.385036 containerd[1557]: 2025-07-16 00:01:35.346 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d Jul 16 00:01:35.385036 containerd[1557]: 2025-07-16 00:01:35.350 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" host="localhost" Jul 16 00:01:35.385036 containerd[1557]: 2025-07-16 00:01:35.355 [INFO][3966] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" host="localhost" Jul 16 00:01:35.385036 containerd[1557]: 2025-07-16 00:01:35.355 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" host="localhost" Jul 16 00:01:35.385036 containerd[1557]: 2025-07-16 00:01:35.355 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:01:35.385036 containerd[1557]: 2025-07-16 00:01:35.355 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" HandleID="k8s-pod-network.ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Workload="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.385176 containerd[1557]: 2025-07-16 00:01:35.360 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2dce6a97-1994-4607-8001-57db1c289cfe", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hw98k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d7c451f7a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:35.385259 containerd[1557]: 2025-07-16 00:01:35.360 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.385259 containerd[1557]: 2025-07-16 00:01:35.360 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d7c451f7a9 ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.385259 containerd[1557]: 2025-07-16 00:01:35.369 [INFO][3934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.385329 containerd[1557]: 2025-07-16 00:01:35.370 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2dce6a97-1994-4607-8001-57db1c289cfe", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d", Pod:"coredns-7c65d6cfc9-hw98k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d7c451f7a9", MAC:"f2:48:e1:2d:53:14", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:35.385329 containerd[1557]: 2025-07-16 00:01:35.379 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hw98k" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hw98k-eth0" Jul 16 00:01:35.429476 kubelet[2732]: I0716 00:01:35.429419 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-ca-bundle\") pod \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\" (UID: \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\") " Jul 16 00:01:35.429476 kubelet[2732]: I0716 00:01:35.429464 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwm9m\" (UniqueName: \"kubernetes.io/projected/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-kube-api-access-wwm9m\") pod \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\" (UID: \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\") " Jul 16 00:01:35.429476 kubelet[2732]: I0716 00:01:35.429483 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-backend-key-pair\") pod \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\" (UID: \"a7ace4d2-b2f1-4624-9e8a-f594c55a6acb\") " Jul 16 00:01:35.431129 kubelet[2732]: I0716 00:01:35.430579 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-ca-bundle" (OuterVolumeSpecName: 
"whisker-ca-bundle") pod "a7ace4d2-b2f1-4624-9e8a-f594c55a6acb" (UID: "a7ace4d2-b2f1-4624-9e8a-f594c55a6acb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 00:01:35.517358 kubelet[2732]: I0716 00:01:35.517312 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-kube-api-access-wwm9m" (OuterVolumeSpecName: "kube-api-access-wwm9m") pod "a7ace4d2-b2f1-4624-9e8a-f594c55a6acb" (UID: "a7ace4d2-b2f1-4624-9e8a-f594c55a6acb"). InnerVolumeSpecName "kube-api-access-wwm9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 00:01:35.517498 systemd[1]: var-lib-kubelet-pods-a7ace4d2\x2db2f1\x2d4624\x2d9e8a\x2df594c55a6acb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwm9m.mount: Deactivated successfully. Jul 16 00:01:35.517620 systemd[1]: var-lib-kubelet-pods-a7ace4d2\x2db2f1\x2d4624\x2d9e8a\x2df594c55a6acb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 16 00:01:35.519556 kubelet[2732]: I0716 00:01:35.519439 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a7ace4d2-b2f1-4624-9e8a-f594c55a6acb" (UID: "a7ace4d2-b2f1-4624-9e8a-f594c55a6acb"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 00:01:35.522414 containerd[1557]: time="2025-07-16T00:01:35.521970317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\" id:\"e0d8427f37e9d6d529c056ecd218b79b07b50603737605544a3181d7a2d780c7\" pid:3993 exit_status:1 exited_at:{seconds:1752624095 nanos:436921711}" Jul 16 00:01:35.530416 kubelet[2732]: I0716 00:01:35.530361 2732 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 16 00:01:35.530416 kubelet[2732]: I0716 00:01:35.530413 2732 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 16 00:01:35.530499 kubelet[2732]: I0716 00:01:35.530423 2732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwm9m\" (UniqueName: \"kubernetes.io/projected/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb-kube-api-access-wwm9m\") on node \"localhost\" DevicePath \"\"" Jul 16 00:01:35.552883 systemd-networkd[1457]: calia26543a3490: Link UP Jul 16 00:01:35.553305 systemd-networkd[1457]: calia26543a3490: Gained carrier Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.222 [INFO][3943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.240 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0 calico-kube-controllers-7bdb5c84bc- calico-system 7f96544d-bf37-4222-8f93-04881ef8f695 827 0 2025-07-16 00:01:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:7bdb5c84bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bdb5c84bc-cnbfc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia26543a3490 [] [] }} ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.240 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.321 [INFO][3964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" HandleID="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Workload="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.322 [INFO][3964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" HandleID="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Workload="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011fe30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bdb5c84bc-cnbfc", "timestamp":"2025-07-16 00:01:35.321601469 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.322 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.355 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.356 [INFO][3964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.515 [INFO][3964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.525 [INFO][3964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.530 [INFO][3964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.532 [INFO][3964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.534 [INFO][3964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.534 [INFO][3964] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.535 [INFO][3964] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4 Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.540 [INFO][3964] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.544 [INFO][3964] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.544 [INFO][3964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" host="localhost" Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.544 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:01:35.568898 containerd[1557]: 2025-07-16 00:01:35.544 [INFO][3964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" HandleID="k8s-pod-network.ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Workload="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.570143 containerd[1557]: 2025-07-16 00:01:35.548 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0", GenerateName:"calico-kube-controllers-7bdb5c84bc-", Namespace:"calico-system", SelfLink:"", UID:"7f96544d-bf37-4222-8f93-04881ef8f695", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bdb5c84bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bdb5c84bc-cnbfc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia26543a3490", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:35.570143 containerd[1557]: 2025-07-16 00:01:35.548 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.570143 containerd[1557]: 2025-07-16 00:01:35.548 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia26543a3490 ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.570143 containerd[1557]: 2025-07-16 00:01:35.554 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.570143 containerd[1557]: 2025-07-16 00:01:35.555 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0", GenerateName:"calico-kube-controllers-7bdb5c84bc-", Namespace:"calico-system", SelfLink:"", UID:"7f96544d-bf37-4222-8f93-04881ef8f695", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bdb5c84bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4", Pod:"calico-kube-controllers-7bdb5c84bc-cnbfc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia26543a3490", MAC:"ca:2d:75:bd:d2:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:35.570143 containerd[1557]: 2025-07-16 00:01:35.563 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" Namespace="calico-system" Pod="calico-kube-controllers-7bdb5c84bc-cnbfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bdb5c84bc--cnbfc-eth0" Jul 16 00:01:35.653880 containerd[1557]: time="2025-07-16T00:01:35.653508839Z" level=info msg="connecting to shim 
ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4" address="unix:///run/containerd/s/c067b48831b9865790d43f717eca7bccdb47c56fb1725dbd0c6289214bafe9a2" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:35.655918 containerd[1557]: time="2025-07-16T00:01:35.655865107Z" level=info msg="connecting to shim ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d" address="unix:///run/containerd/s/6e2c8cf3300a0b3f427995fa215630e0c5690c6a945f01cb63369306b8517ce6" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:35.678515 systemd[1]: Started cri-containerd-ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d.scope - libcontainer container ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d. Jul 16 00:01:35.680435 systemd[1]: Started cri-containerd-ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4.scope - libcontainer container ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4. Jul 16 00:01:35.693368 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:35.695063 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:35.728814 containerd[1557]: time="2025-07-16T00:01:35.728754422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bdb5c84bc-cnbfc,Uid:7f96544d-bf37-4222-8f93-04881ef8f695,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4\"" Jul 16 00:01:35.730365 containerd[1557]: time="2025-07-16T00:01:35.730335427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hw98k,Uid:2dce6a97-1994-4607-8001-57db1c289cfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d\"" Jul 16 00:01:35.731918 containerd[1557]: 
time="2025-07-16T00:01:35.730545662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 16 00:01:35.734548 containerd[1557]: time="2025-07-16T00:01:35.734510119Z" level=info msg="CreateContainer within sandbox \"ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:01:35.746938 containerd[1557]: time="2025-07-16T00:01:35.746898751Z" level=info msg="Container d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:35.752648 containerd[1557]: time="2025-07-16T00:01:35.752609289Z" level=info msg="CreateContainer within sandbox \"ceb6074791056572e3122426a1507860efd169179c3072893965487c1a27ca6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77\"" Jul 16 00:01:35.753064 containerd[1557]: time="2025-07-16T00:01:35.753019049Z" level=info msg="StartContainer for \"d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77\"" Jul 16 00:01:35.769692 containerd[1557]: time="2025-07-16T00:01:35.769594305Z" level=info msg="connecting to shim d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77" address="unix:///run/containerd/s/6e2c8cf3300a0b3f427995fa215630e0c5690c6a945f01cb63369306b8517ce6" protocol=ttrpc version=3 Jul 16 00:01:35.807596 systemd[1]: Started cri-containerd-d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77.scope - libcontainer container d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77. 
Jul 16 00:01:35.870635 containerd[1557]: time="2025-07-16T00:01:35.870579086Z" level=info msg="StartContainer for \"d2b68f9b1a95f1b53ec82832dd06deca33b285fa42fb1552597c2f4659217a77\" returns successfully" Jul 16 00:01:36.192820 containerd[1557]: time="2025-07-16T00:01:36.192445133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhvwk,Uid:d9be61b1-3e8f-41ca-90d3-f0c97043f509,Namespace:kube-system,Attempt:0,}" Jul 16 00:01:36.192820 containerd[1557]: time="2025-07-16T00:01:36.192769167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-7k7zr,Uid:a083ba31-ad69-4526-9754-fc342add8585,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:36.212000 systemd[1]: Removed slice kubepods-besteffort-poda7ace4d2_b2f1_4624_9e8a_f594c55a6acb.slice - libcontainer container kubepods-besteffort-poda7ace4d2_b2f1_4624_9e8a_f594c55a6acb.slice. Jul 16 00:01:36.319176 kubelet[2732]: I0716 00:01:36.318257 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hw98k" podStartSLOduration=36.318239501 podStartE2EDuration="36.318239501s" podCreationTimestamp="2025-07-16 00:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:01:36.316909299 +0000 UTC m=+40.290668834" watchObservedRunningTime="2025-07-16 00:01:36.318239501 +0000 UTC m=+40.291999036" Jul 16 00:01:36.323251 systemd-networkd[1457]: cali0d696ce7f40: Link UP Jul 16 00:01:36.328062 systemd-networkd[1457]: cali0d696ce7f40: Gained carrier Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.245 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0 coredns-7c65d6cfc9- kube-system d9be61b1-3e8f-41ca-90d3-f0c97043f509 830 0 2025-07-16 00:01:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dhvwk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0d696ce7f40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.245 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.275 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" HandleID="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Workload="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.275 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" HandleID="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Workload="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d2b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dhvwk", "timestamp":"2025-07-16 00:01:36.275568307 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.275 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.276 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.276 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.282 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.287 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.291 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.293 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.295 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.295 [INFO][4313] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.296 [INFO][4313] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.300 [INFO][4313] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.308 [INFO][4313] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.308 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" host="localhost" Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.309 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:01:36.366447 containerd[1557]: 2025-07-16 00:01:36.309 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" HandleID="k8s-pod-network.b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Workload="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 00:01:36.367171 containerd[1557]: 2025-07-16 00:01:36.312 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d9be61b1-3e8f-41ca-90d3-f0c97043f509", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dhvwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d696ce7f40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:36.367171 containerd[1557]: 2025-07-16 00:01:36.313 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 00:01:36.367171 containerd[1557]: 2025-07-16 00:01:36.313 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d696ce7f40 ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 
00:01:36.367171 containerd[1557]: 2025-07-16 00:01:36.325 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 00:01:36.367171 containerd[1557]: 2025-07-16 00:01:36.330 [INFO][4289] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d9be61b1-3e8f-41ca-90d3-f0c97043f509", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b", Pod:"coredns-7c65d6cfc9-dhvwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d696ce7f40", MAC:"86:b8:c4:f5:e7:21", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:36.367171 containerd[1557]: 2025-07-16 00:01:36.358 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dhvwk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dhvwk-eth0" Jul 16 00:01:36.392785 systemd-networkd[1457]: vxlan.calico: Link UP Jul 16 00:01:36.392795 systemd-networkd[1457]: vxlan.calico: Gained carrier Jul 16 00:01:36.425747 containerd[1557]: time="2025-07-16T00:01:36.425585295Z" level=info msg="connecting to shim b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b" address="unix:///run/containerd/s/4b209569b97567dd2d827b264facd55c38769768a810381c872980a49e9a96f7" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:36.429572 systemd[1]: Created slice kubepods-besteffort-podac4b1a95_cbd2_47d0_9a89_f16a80683da1.slice - libcontainer container kubepods-besteffort-podac4b1a95_cbd2_47d0_9a89_f16a80683da1.slice. Jul 16 00:01:36.470542 systemd[1]: Started cri-containerd-b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b.scope - libcontainer container b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b. 
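The IPAM sequence recorded in the entries above (acquire the host-wide IPAM lock, try the host's affine block, claim the first free address, release the lock) can be sketched as follows. This is a minimal illustrative model only, not Calico's actual implementation; the block and pre-assigned addresses are taken from the log.

```python
import ipaddress
import threading

# Simplified model (assumption: not Calico's real code) of the host-wide
# IPAM ordering visible in the log lines above.
ipam_lock = threading.Lock()          # the "host-wide IPAM lock"
block = ipaddress.ip_network("192.168.88.128/26")
assigned = {ipaddress.ip_address("192.168.88.129"),
            ipaddress.ip_address("192.168.88.130")}

def auto_assign():
    with ipam_lock:                   # "Acquired host-wide IPAM lock."
        for ip in block.hosts():      # "Trying affinity for 192.168.88.128/26"
            if ip not in assigned:
                assigned.add(ip)      # "Successfully claimed IPs"
                return ip             # lock released on exit
    return None                       # block exhausted

print(auto_assign())  # → 192.168.88.131, matching the address coredns received
```

Successive calls hand out .132, .133, and so on, which mirrors the goldmane and whisker allocations later in the log.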
Jul 16 00:01:36.500981 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:36.501680 systemd-networkd[1457]: calic0496916930: Link UP Jul 16 00:01:36.503192 systemd-networkd[1457]: calic0496916930: Gained carrier Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.239 [INFO][4278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0 goldmane-58fd7646b9- calico-system a083ba31-ad69-4526-9754-fc342add8585 832 0 2025-07-16 00:01:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-7k7zr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic0496916930 [] [] }} ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.239 [INFO][4278] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.275 [INFO][4307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" HandleID="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Workload="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.276 [INFO][4307] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" HandleID="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Workload="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c63a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-7k7zr", "timestamp":"2025-07-16 00:01:36.27586552 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.276 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.313 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.313 [INFO][4307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.384 [INFO][4307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.408 [INFO][4307] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.443 [INFO][4307] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.449 [INFO][4307] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.457 [INFO][4307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.457 [INFO][4307] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.460 [INFO][4307] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001 Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.467 [INFO][4307] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.480 [INFO][4307] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.480 [INFO][4307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" host="localhost" Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.480 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:01:36.517044 containerd[1557]: 2025-07-16 00:01:36.480 [INFO][4307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" HandleID="k8s-pod-network.96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Workload="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.517606 containerd[1557]: 2025-07-16 00:01:36.496 [INFO][4278] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a083ba31-ad69-4526-9754-fc342add8585", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-7k7zr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0496916930", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:36.517606 containerd[1557]: 2025-07-16 00:01:36.496 [INFO][4278] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.517606 containerd[1557]: 2025-07-16 00:01:36.496 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0496916930 ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.517606 containerd[1557]: 2025-07-16 00:01:36.503 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.517606 containerd[1557]: 2025-07-16 00:01:36.503 [INFO][4278] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a083ba31-ad69-4526-9754-fc342add8585", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 11, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001", Pod:"goldmane-58fd7646b9-7k7zr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0496916930", MAC:"36:6e:65:97:a2:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:36.517606 containerd[1557]: 2025-07-16 00:01:36.511 [INFO][4278] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" Namespace="calico-system" Pod="goldmane-58fd7646b9-7k7zr" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--7k7zr-eth0" Jul 16 00:01:36.537931 kubelet[2732]: I0716 00:01:36.537868 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5v2s\" (UniqueName: \"kubernetes.io/projected/ac4b1a95-cbd2-47d0-9a89-f16a80683da1-kube-api-access-j5v2s\") pod \"whisker-79586dd98-grp4f\" (UID: \"ac4b1a95-cbd2-47d0-9a89-f16a80683da1\") " pod="calico-system/whisker-79586dd98-grp4f" Jul 16 00:01:36.538410 kubelet[2732]: I0716 00:01:36.538039 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ac4b1a95-cbd2-47d0-9a89-f16a80683da1-whisker-backend-key-pair\") pod \"whisker-79586dd98-grp4f\" (UID: \"ac4b1a95-cbd2-47d0-9a89-f16a80683da1\") " pod="calico-system/whisker-79586dd98-grp4f" Jul 16 00:01:36.538410 kubelet[2732]: I0716 00:01:36.538075 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac4b1a95-cbd2-47d0-9a89-f16a80683da1-whisker-ca-bundle\") pod \"whisker-79586dd98-grp4f\" (UID: \"ac4b1a95-cbd2-47d0-9a89-f16a80683da1\") " pod="calico-system/whisker-79586dd98-grp4f" Jul 16 00:01:36.543705 containerd[1557]: time="2025-07-16T00:01:36.543652595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dhvwk,Uid:d9be61b1-3e8f-41ca-90d3-f0c97043f509,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b\"" Jul 16 00:01:36.557400 containerd[1557]: time="2025-07-16T00:01:36.554547884Z" level=info msg="connecting to shim 96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001" address="unix:///run/containerd/s/b363066b66b7d2ae2e21f86811f8eb163e3ab01cc21d41699098cf3df0b86d02" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:36.558401 containerd[1557]: time="2025-07-16T00:01:36.557964402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\" id:\"132c5f03a30a0fe52e8121259c2d065a8f3ba4812494e2e76fc3149b5eca9596\" pid:4361 exit_status:1 exited_at:{seconds:1752624096 nanos:551706993}" Jul 16 00:01:36.559940 containerd[1557]: time="2025-07-16T00:01:36.558817504Z" level=info msg="CreateContainer within sandbox \"b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:01:36.567835 containerd[1557]: time="2025-07-16T00:01:36.567814188Z" level=info msg="Container 
99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:36.580102 containerd[1557]: time="2025-07-16T00:01:36.580077982Z" level=info msg="CreateContainer within sandbox \"b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811\"" Jul 16 00:01:36.581019 containerd[1557]: time="2025-07-16T00:01:36.581000939Z" level=info msg="StartContainer for \"99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811\"" Jul 16 00:01:36.581543 systemd[1]: Started cri-containerd-96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001.scope - libcontainer container 96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001. Jul 16 00:01:36.582440 containerd[1557]: time="2025-07-16T00:01:36.582418639Z" level=info msg="connecting to shim 99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811" address="unix:///run/containerd/s/4b209569b97567dd2d827b264facd55c38769768a810381c872980a49e9a96f7" protocol=ttrpc version=3 Jul 16 00:01:36.596857 systemd-networkd[1457]: calia26543a3490: Gained IPv6LL Jul 16 00:01:36.597575 systemd-networkd[1457]: cali4d7c451f7a9: Gained IPv6LL Jul 16 00:01:36.604127 systemd[1]: Started cri-containerd-99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811.scope - libcontainer container 99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811. 
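When auditing allocations in a log like this, the `ipam_plugin.go 283` records are the authoritative "assigned" events. A small hypothetical helper (my own utility, not part of any tool shown here) can pull the assigned IPv4 and ContainerID out of such a line:

```python
import re

# Example line copied from the log above (truncated timestamp prefix omitted).
LINE = ('[INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned '
        'addresses IPv4=[192.168.88.131/26] IPv6=[] '
        'ContainerID="b2884d7689da842826421e626738d9503e8404bd1b5df3b78dcdc825cae7f12b"')

# Non-greedy match so IPv6=[] between the fields is skipped over.
PATTERN = re.compile(r'IPv4=\[([^\]]*)\].*?ContainerID="([0-9a-f]+)"')

def parse_assignment(line):
    """Return the assigned IPv4 CIDR and container ID, or None if absent."""
    m = PATTERN.search(line)
    if not m:
        return None
    return {"ipv4": m.group(1), "container_id": m.group(2)}

rec = parse_assignment(LINE)
print(rec["ipv4"])  # → 192.168.88.131/26
```

Feeding each journal line through `parse_assignment` and keeping the non-`None` results yields the per-container address map for the node.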
Jul 16 00:01:36.610896 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:36.658387 containerd[1557]: time="2025-07-16T00:01:36.658317563Z" level=info msg="StartContainer for \"99eebb9a8759f73206d8c0ff7782732543aab8cfa29a668570333eb11462c811\" returns successfully" Jul 16 00:01:36.660558 containerd[1557]: time="2025-07-16T00:01:36.660252608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-7k7zr,Uid:a083ba31-ad69-4526-9754-fc342add8585,Namespace:calico-system,Attempt:0,} returns sandbox id \"96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001\"" Jul 16 00:01:36.739314 containerd[1557]: time="2025-07-16T00:01:36.739189172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79586dd98-grp4f,Uid:ac4b1a95-cbd2-47d0-9a89-f16a80683da1,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:36.846320 systemd-networkd[1457]: cali45826cecff3: Link UP Jul 16 00:01:36.846940 systemd-networkd[1457]: cali45826cecff3: Gained carrier Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.775 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79586dd98--grp4f-eth0 whisker-79586dd98- calico-system ac4b1a95-cbd2-47d0-9a89-f16a80683da1 978 0 2025-07-16 00:01:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79586dd98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79586dd98-grp4f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali45826cecff3 [] [] }} ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 
00:01:36.777 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.810 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" HandleID="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Workload="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.810 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" HandleID="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Workload="localhost-k8s-whisker--79586dd98--grp4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79586dd98-grp4f", "timestamp":"2025-07-16 00:01:36.810002292 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.810 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.810 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.810 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.816 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.821 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.825 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.826 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.828 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.828 [INFO][4567] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.829 [INFO][4567] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7 Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.832 [INFO][4567] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.839 [INFO][4567] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.839 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" host="localhost" Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.839 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:01:36.861501 containerd[1557]: 2025-07-16 00:01:36.839 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" HandleID="k8s-pod-network.b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Workload="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.862131 containerd[1557]: 2025-07-16 00:01:36.843 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79586dd98--grp4f-eth0", GenerateName:"whisker-79586dd98-", Namespace:"calico-system", SelfLink:"", UID:"ac4b1a95-cbd2-47d0-9a89-f16a80683da1", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79586dd98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79586dd98-grp4f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali45826cecff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:36.862131 containerd[1557]: 2025-07-16 00:01:36.843 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.862131 containerd[1557]: 2025-07-16 00:01:36.843 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45826cecff3 ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.862131 containerd[1557]: 2025-07-16 00:01:36.845 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.862131 containerd[1557]: 2025-07-16 00:01:36.846 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" 
WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79586dd98--grp4f-eth0", GenerateName:"whisker-79586dd98-", Namespace:"calico-system", SelfLink:"", UID:"ac4b1a95-cbd2-47d0-9a89-f16a80683da1", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79586dd98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7", Pod:"whisker-79586dd98-grp4f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali45826cecff3", MAC:"b6:b3:0f:f5:98:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:36.862131 containerd[1557]: 2025-07-16 00:01:36.857 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" Namespace="calico-system" Pod="whisker-79586dd98-grp4f" WorkloadEndpoint="localhost-k8s-whisker--79586dd98--grp4f-eth0" Jul 16 00:01:36.885272 containerd[1557]: time="2025-07-16T00:01:36.885222920Z" level=info msg="connecting to shim 
b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7" address="unix:///run/containerd/s/3c141c881fca1229011329b1404eb713536f8834206f4fe6a9a3dba22ec1ec17" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:36.917525 systemd[1]: Started cri-containerd-b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7.scope - libcontainer container b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7. Jul 16 00:01:36.930321 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:36.969271 containerd[1557]: time="2025-07-16T00:01:36.969215568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79586dd98-grp4f,Uid:ac4b1a95-cbd2-47d0-9a89-f16a80683da1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7\"" Jul 16 00:01:37.193058 containerd[1557]: time="2025-07-16T00:01:37.193003458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwvbq,Uid:c4b970f5-c85f-469e-a5f5-523ed3eaf527,Namespace:calico-system,Attempt:0,}" Jul 16 00:01:37.385513 kubelet[2732]: I0716 00:01:37.385202 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dhvwk" podStartSLOduration=37.385180623 podStartE2EDuration="37.385180623s" podCreationTimestamp="2025-07-16 00:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:01:37.38412532 +0000 UTC m=+41.357884855" watchObservedRunningTime="2025-07-16 00:01:37.385180623 +0000 UTC m=+41.358940158" Jul 16 00:01:37.505242 systemd-networkd[1457]: calib9f2cca054d: Link UP Jul 16 00:01:37.506649 systemd-networkd[1457]: calib9f2cca054d: Gained carrier Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.432 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wwvbq-eth0 csi-node-driver- calico-system c4b970f5-c85f-469e-a5f5-523ed3eaf527 709 0 2025-07-16 00:01:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wwvbq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib9f2cca054d [] [] }} ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.432 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.467 [INFO][4674] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" HandleID="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Workload="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.467 [INFO][4674] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" HandleID="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Workload="localhost-k8s-csi--node--driver--wwvbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4af0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wwvbq", "timestamp":"2025-07-16 00:01:37.467370761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.467 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.467 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.467 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.472 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.478 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.483 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.485 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.487 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.487 [INFO][4674] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" host="localhost" Jul 16 00:01:37.528491 
containerd[1557]: 2025-07-16 00:01:37.488 [INFO][4674] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624 Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.491 [INFO][4674] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.498 [INFO][4674] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.498 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" host="localhost" Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.498 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:01:37.528491 containerd[1557]: 2025-07-16 00:01:37.498 [INFO][4674] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" HandleID="k8s-pod-network.481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Workload="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.529070 containerd[1557]: 2025-07-16 00:01:37.502 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wwvbq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4b970f5-c85f-469e-a5f5-523ed3eaf527", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wwvbq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9f2cca054d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:37.529070 containerd[1557]: 2025-07-16 00:01:37.502 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.529070 containerd[1557]: 2025-07-16 00:01:37.502 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9f2cca054d ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.529070 containerd[1557]: 2025-07-16 00:01:37.507 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.529070 containerd[1557]: 2025-07-16 00:01:37.507 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wwvbq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4b970f5-c85f-469e-a5f5-523ed3eaf527", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 12, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624", Pod:"csi-node-driver-wwvbq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9f2cca054d", MAC:"42:a2:6a:e6:e6:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:37.529070 containerd[1557]: 2025-07-16 00:01:37.524 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" Namespace="calico-system" Pod="csi-node-driver-wwvbq" WorkloadEndpoint="localhost-k8s-csi--node--driver--wwvbq-eth0" Jul 16 00:01:37.586117 containerd[1557]: time="2025-07-16T00:01:37.586063413Z" level=info msg="connecting to shim 481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624" address="unix:///run/containerd/s/3df6a222177a0acea4daa8205b80c91e8f5ee903fa7665995eea09f2e75d438f" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:37.610532 systemd[1]: Started cri-containerd-481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624.scope - libcontainer container 
481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624. Jul 16 00:01:37.626587 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:37.646406 containerd[1557]: time="2025-07-16T00:01:37.646342064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wwvbq,Uid:c4b970f5-c85f-469e-a5f5-523ed3eaf527,Namespace:calico-system,Attempt:0,} returns sandbox id \"481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624\"" Jul 16 00:01:37.941541 systemd-networkd[1457]: vxlan.calico: Gained IPv6LL Jul 16 00:01:38.132599 systemd-networkd[1457]: cali45826cecff3: Gained IPv6LL Jul 16 00:01:38.190544 containerd[1557]: time="2025-07-16T00:01:38.190479104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:01:38.191366 containerd[1557]: time="2025-07-16T00:01:38.191319952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 16 00:01:38.192722 containerd[1557]: time="2025-07-16T00:01:38.192619753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-8jddx,Uid:b92c213a-87e5-4f37-acbd-f00be20d467c,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:01:38.192843 containerd[1557]: time="2025-07-16T00:01:38.192801182Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:01:38.195532 kubelet[2732]: I0716 00:01:38.194885 2732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ace4d2-b2f1-4624-9e8a-f594c55a6acb" path="/var/lib/kubelet/pods/a7ace4d2-b2f1-4624-9e8a-f594c55a6acb/volumes" Jul 16 00:01:38.196669 containerd[1557]: time="2025-07-16T00:01:38.196133705Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:01:38.196669 containerd[1557]: time="2025-07-16T00:01:38.196570896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.465073472s" Jul 16 00:01:38.196669 containerd[1557]: time="2025-07-16T00:01:38.196603068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 16 00:01:38.197630 systemd-networkd[1457]: calic0496916930: Gained IPv6LL Jul 16 00:01:38.201259 containerd[1557]: time="2025-07-16T00:01:38.201040277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 16 00:01:38.206325 containerd[1557]: time="2025-07-16T00:01:38.206302023Z" level=info msg="CreateContainer within sandbox \"ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 16 00:01:38.324556 systemd-networkd[1457]: cali0d696ce7f40: Gained IPv6LL Jul 16 00:01:38.349543 systemd-networkd[1457]: calieecf5a000ee: Link UP Jul 16 00:01:38.350443 systemd-networkd[1457]: calieecf5a000ee: Gained carrier Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.258 [INFO][4737] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0 calico-apiserver-768976f5fc- calico-apiserver b92c213a-87e5-4f37-acbd-f00be20d467c 823 0 2025-07-16 00:01:09 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768976f5fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-768976f5fc-8jddx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieecf5a000ee [] [] }} ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.259 [INFO][4737] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.282 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" HandleID="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Workload="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.282 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" HandleID="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Workload="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-768976f5fc-8jddx", "timestamp":"2025-07-16 00:01:38.282423968 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.282 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.282 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.282 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.289 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.294 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.298 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.300 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.302 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.302 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.303 [INFO][4753] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.329 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.343 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.343 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" host="localhost" Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.343 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:01:38.372398 containerd[1557]: 2025-07-16 00:01:38.343 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" HandleID="k8s-pod-network.a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Workload="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.373011 containerd[1557]: 2025-07-16 00:01:38.346 [INFO][4737] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0", GenerateName:"calico-apiserver-768976f5fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b92c213a-87e5-4f37-acbd-f00be20d467c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768976f5fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-768976f5fc-8jddx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieecf5a000ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:38.373011 containerd[1557]: 2025-07-16 00:01:38.346 [INFO][4737] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.373011 containerd[1557]: 2025-07-16 00:01:38.346 [INFO][4737] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieecf5a000ee ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.373011 containerd[1557]: 2025-07-16 00:01:38.350 [INFO][4737] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.373011 containerd[1557]: 2025-07-16 00:01:38.352 [INFO][4737] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0", GenerateName:"calico-apiserver-768976f5fc-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"b92c213a-87e5-4f37-acbd-f00be20d467c", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768976f5fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c", Pod:"calico-apiserver-768976f5fc-8jddx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieecf5a000ee", MAC:"9a:3f:70:3f:ec:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:01:38.373011 containerd[1557]: 2025-07-16 00:01:38.369 [INFO][4737] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-8jddx" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--8jddx-eth0" Jul 16 00:01:38.533349 containerd[1557]: time="2025-07-16T00:01:38.533252430Z" level=info msg="Container 215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:38.540761 containerd[1557]: time="2025-07-16T00:01:38.540717336Z" level=info 
msg="CreateContainer within sandbox \"ddb8e2a139df0b94f7783d16aa94fb54ccd5865130041e91f94d5049f54a02b4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\"" Jul 16 00:01:38.541291 containerd[1557]: time="2025-07-16T00:01:38.541254620Z" level=info msg="StartContainer for \"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\"" Jul 16 00:01:38.542234 containerd[1557]: time="2025-07-16T00:01:38.542191442Z" level=info msg="connecting to shim 215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9" address="unix:///run/containerd/s/c067b48831b9865790d43f717eca7bccdb47c56fb1725dbd0c6289214bafe9a2" protocol=ttrpc version=3 Jul 16 00:01:38.554409 containerd[1557]: time="2025-07-16T00:01:38.554331003Z" level=info msg="connecting to shim a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c" address="unix:///run/containerd/s/29de11827ef51d3de7bedb67555228ebfbcf0c87434968d2d587b7707e18323b" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:38.562679 systemd[1]: Started cri-containerd-215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9.scope - libcontainer container 215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9. Jul 16 00:01:38.581540 systemd[1]: Started cri-containerd-a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c.scope - libcontainer container a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c. 
Jul 16 00:01:38.598096 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 16 00:01:38.636997 containerd[1557]: time="2025-07-16T00:01:38.636804722Z" level=info msg="StartContainer for \"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\" returns successfully" Jul 16 00:01:38.640189 containerd[1557]: time="2025-07-16T00:01:38.640159919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-8jddx,Uid:b92c213a-87e5-4f37-acbd-f00be20d467c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c\"" Jul 16 00:01:38.964612 systemd-networkd[1457]: calib9f2cca054d: Gained IPv6LL Jul 16 00:01:39.330591 kubelet[2732]: I0716 00:01:39.330496 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bdb5c84bc-cnbfc" podStartSLOduration=24.862903274 podStartE2EDuration="27.330472862s" podCreationTimestamp="2025-07-16 00:01:12 +0000 UTC" firstStartedPulling="2025-07-16 00:01:35.729991574 +0000 UTC m=+39.703751109" lastFinishedPulling="2025-07-16 00:01:38.197561172 +0000 UTC m=+42.171320697" observedRunningTime="2025-07-16 00:01:39.329706669 +0000 UTC m=+43.303466204" watchObservedRunningTime="2025-07-16 00:01:39.330472862 +0000 UTC m=+43.304232407" Jul 16 00:01:39.361149 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:38078.service - OpenSSH per-connection server daemon (10.0.0.1:38078). Jul 16 00:01:39.419085 sshd[4867]: Accepted publickey for core from 10.0.0.1 port 38078 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 16 00:01:39.421415 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:39.428159 systemd-logind[1537]: New session 9 of user core. Jul 16 00:01:39.432621 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 16 00:01:39.792474 sshd[4871]: Connection closed by 10.0.0.1 port 38078
Jul 16 00:01:39.792791 sshd-session[4867]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:39.797697 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:38078.service: Deactivated successfully.
Jul 16 00:01:39.799778 systemd[1]: session-9.scope: Deactivated successfully.
Jul 16 00:01:39.800531 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit.
Jul 16 00:01:39.801677 systemd-logind[1537]: Removed session 9.
Jul 16 00:01:40.214532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598734911.mount: Deactivated successfully.
Jul 16 00:01:40.308597 systemd-networkd[1457]: calieecf5a000ee: Gained IPv6LL
Jul 16 00:01:40.322096 kubelet[2732]: I0716 00:01:40.322069 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 16 00:01:41.649587 containerd[1557]: time="2025-07-16T00:01:41.649538347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:41.650394 containerd[1557]: time="2025-07-16T00:01:41.650347775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 16 00:01:41.651591 containerd[1557]: time="2025-07-16T00:01:41.651559724Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:41.653722 containerd[1557]: time="2025-07-16T00:01:41.653647329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:41.654352 containerd[1557]: time="2025-07-16T00:01:41.654306727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.453233016s"
Jul 16 00:01:41.654352 containerd[1557]: time="2025-07-16T00:01:41.654346503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 16 00:01:41.655531 containerd[1557]: time="2025-07-16T00:01:41.655440778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 16 00:01:41.656456 containerd[1557]: time="2025-07-16T00:01:41.656428086Z" level=info msg="CreateContainer within sandbox \"96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 16 00:01:41.664976 containerd[1557]: time="2025-07-16T00:01:41.664934834Z" level=info msg="Container dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:41.672839 containerd[1557]: time="2025-07-16T00:01:41.672778386Z" level=info msg="CreateContainer within sandbox \"96828cd48520d97e958400c488909ef89179225d5e087f8fe3b51d45a23fd001\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\""
Jul 16 00:01:41.673323 containerd[1557]: time="2025-07-16T00:01:41.673281713Z" level=info msg="StartContainer for \"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\""
Jul 16 00:01:41.675038 containerd[1557]: time="2025-07-16T00:01:41.675013243Z" level=info msg="connecting to shim dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662" address="unix:///run/containerd/s/b363066b66b7d2ae2e21f86811f8eb163e3ab01cc21d41699098cf3df0b86d02" protocol=ttrpc version=3
Jul 16 00:01:41.727663 kubelet[2732]: I0716 00:01:41.727620 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 16 00:01:41.729675 systemd[1]: Started cri-containerd-dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662.scope - libcontainer container dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662.
Jul 16 00:01:41.778765 containerd[1557]: time="2025-07-16T00:01:41.778719318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\" id:\"af4fc99d88c9a3a92d77c6b516e05177519e4d0ef9721577ef69629ce0eaf22d\" pid:4927 exited_at:{seconds:1752624101 nanos:778284331}"
Jul 16 00:01:41.839408 containerd[1557]: time="2025-07-16T00:01:41.838477155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\" id:\"cc804659bf605662287848da7004ff3bdc12a0d6b7ed45ddb79683df726a39c4\" pid:4963 exited_at:{seconds:1752624101 nanos:838119507}"
Jul 16 00:01:42.029022 containerd[1557]: time="2025-07-16T00:01:42.028969086Z" level=info msg="StartContainer for \"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\" returns successfully"
Jul 16 00:01:42.415000 containerd[1557]: time="2025-07-16T00:01:42.414841197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\" id:\"36c4554a9776925674d92eccb7c11e45122c8e09b6331f884526938fcb71e7df\" pid:4995 exit_status:1 exited_at:{seconds:1752624102 nanos:414468039}"
Jul 16 00:01:42.643735 kubelet[2732]: I0716 00:01:42.643657 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-7k7zr" podStartSLOduration=26.650157787 podStartE2EDuration="31.643635523s" podCreationTimestamp="2025-07-16 00:01:11 +0000 UTC" firstStartedPulling="2025-07-16 00:01:36.661700396 +0000 UTC m=+40.635459931" lastFinishedPulling="2025-07-16 00:01:41.655178132 +0000 UTC m=+45.628937667" observedRunningTime="2025-07-16 00:01:42.642620681 +0000 UTC m=+46.616380247" watchObservedRunningTime="2025-07-16 00:01:42.643635523 +0000 UTC m=+46.617395058"
Jul 16 00:01:43.411435 containerd[1557]: time="2025-07-16T00:01:43.411359950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\" id:\"5708dc345819cb50acccf205b45bda59855c36677c6f97243f986b992f6a36b8\" pid:5022 exit_status:1 exited_at:{seconds:1752624103 nanos:411023373}"
Jul 16 00:01:43.892148 containerd[1557]: time="2025-07-16T00:01:43.892082903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:43.893210 containerd[1557]: time="2025-07-16T00:01:43.893181936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 16 00:01:43.895891 containerd[1557]: time="2025-07-16T00:01:43.895826088Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:43.904955 containerd[1557]: time="2025-07-16T00:01:43.904821357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:43.905893 containerd[1557]: time="2025-07-16T00:01:43.905854643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.250379048s"
Jul 16 00:01:43.905992 containerd[1557]: time="2025-07-16T00:01:43.905975165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 16 00:01:43.910503 containerd[1557]: time="2025-07-16T00:01:43.910460426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 16 00:01:43.912057 containerd[1557]: time="2025-07-16T00:01:43.912024101Z" level=info msg="CreateContainer within sandbox \"b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 16 00:01:43.921626 containerd[1557]: time="2025-07-16T00:01:43.921575740Z" level=info msg="Container 8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:43.931350 containerd[1557]: time="2025-07-16T00:01:43.931292546Z" level=info msg="CreateContainer within sandbox \"b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5\""
Jul 16 00:01:43.932193 containerd[1557]: time="2025-07-16T00:01:43.932098695Z" level=info msg="StartContainer for \"8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5\""
Jul 16 00:01:43.933675 containerd[1557]: time="2025-07-16T00:01:43.933589841Z" level=info msg="connecting to shim 8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5" address="unix:///run/containerd/s/3c141c881fca1229011329b1404eb713536f8834206f4fe6a9a3dba22ec1ec17" protocol=ttrpc version=3
Jul 16 00:01:43.957623 systemd[1]: Started cri-containerd-8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5.scope - libcontainer container 8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5.
Jul 16 00:01:44.020783 containerd[1557]: time="2025-07-16T00:01:44.020748266Z" level=info msg="StartContainer for \"8ecb5f2b36c69b49fff1cb10a4014d2a1ba01a58ebffdec5d34ea2b0d84ea0f5\" returns successfully"
Jul 16 00:01:44.812453 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:38092.service - OpenSSH per-connection server daemon (10.0.0.1:38092).
Jul 16 00:01:44.876076 sshd[5074]: Accepted publickey for core from 10.0.0.1 port 38092 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:01:44.911758 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:01:44.918343 systemd-logind[1537]: New session 10 of user core.
Jul 16 00:01:44.926542 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 16 00:01:45.062164 sshd[5076]: Connection closed by 10.0.0.1 port 38092
Jul 16 00:01:45.062519 sshd-session[5074]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:45.072320 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:38092.service: Deactivated successfully.
Jul 16 00:01:45.074888 systemd[1]: session-10.scope: Deactivated successfully.
Jul 16 00:01:45.079092 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit.
Jul 16 00:01:45.081169 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:38098.service - OpenSSH per-connection server daemon (10.0.0.1:38098).
Jul 16 00:01:45.083560 systemd-logind[1537]: Removed session 10.
Jul 16 00:01:45.133507 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 38098 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:01:45.135312 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:01:45.140117 systemd-logind[1537]: New session 11 of user core.
Jul 16 00:01:45.147524 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 16 00:01:45.291210 sshd[5092]: Connection closed by 10.0.0.1 port 38098
Jul 16 00:01:45.291970 sshd-session[5090]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:45.305558 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:38098.service: Deactivated successfully.
Jul 16 00:01:45.308245 systemd[1]: session-11.scope: Deactivated successfully.
Jul 16 00:01:45.310229 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit.
Jul 16 00:01:45.314162 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:38104.service - OpenSSH per-connection server daemon (10.0.0.1:38104).
Jul 16 00:01:45.315005 systemd-logind[1537]: Removed session 11.
Jul 16 00:01:45.378843 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 38104 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:01:45.380688 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:01:45.385691 systemd-logind[1537]: New session 12 of user core.
Jul 16 00:01:45.396525 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 16 00:01:45.511591 sshd[5106]: Connection closed by 10.0.0.1 port 38104
Jul 16 00:01:45.511866 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:45.515977 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:38104.service: Deactivated successfully.
Jul 16 00:01:45.518230 systemd[1]: session-12.scope: Deactivated successfully.
Jul 16 00:01:45.519576 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit.
Jul 16 00:01:45.521318 systemd-logind[1537]: Removed session 12.
Jul 16 00:01:46.335702 containerd[1557]: time="2025-07-16T00:01:46.335056258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:46.337235 containerd[1557]: time="2025-07-16T00:01:46.337199068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 16 00:01:46.338628 containerd[1557]: time="2025-07-16T00:01:46.338591277Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:46.340731 containerd[1557]: time="2025-07-16T00:01:46.340692170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:46.341335 containerd[1557]: time="2025-07-16T00:01:46.341300614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.430797467s"
Jul 16 00:01:46.341415 containerd[1557]: time="2025-07-16T00:01:46.341331212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 16 00:01:46.342324 containerd[1557]: time="2025-07-16T00:01:46.342063950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 16 00:01:46.343371 containerd[1557]: time="2025-07-16T00:01:46.343326807Z" level=info msg="CreateContainer within sandbox \"481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 16 00:01:46.376900 containerd[1557]: time="2025-07-16T00:01:46.376854101Z" level=info msg="Container 0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:46.392996 containerd[1557]: time="2025-07-16T00:01:46.392954200Z" level=info msg="CreateContainer within sandbox \"481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d\""
Jul 16 00:01:46.393650 containerd[1557]: time="2025-07-16T00:01:46.393625703Z" level=info msg="StartContainer for \"0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d\""
Jul 16 00:01:46.395150 containerd[1557]: time="2025-07-16T00:01:46.395120194Z" level=info msg="connecting to shim 0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d" address="unix:///run/containerd/s/3df6a222177a0acea4daa8205b80c91e8f5ee903fa7665995eea09f2e75d438f" protocol=ttrpc version=3
Jul 16 00:01:46.418545 systemd[1]: Started cri-containerd-0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d.scope - libcontainer container 0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d.
Jul 16 00:01:46.475979 containerd[1557]: time="2025-07-16T00:01:46.475928017Z" level=info msg="StartContainer for \"0be73a3ada47e432251dd073f8b9390895143f1c5486376afe14513fa161cd6d\" returns successfully"
Jul 16 00:01:49.210074 containerd[1557]: time="2025-07-16T00:01:49.210020578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-nds6v,Uid:4648f8e9-0227-4352-9e38-bf61552be348,Namespace:calico-apiserver,Attempt:0,}"
Jul 16 00:01:49.314271 systemd-networkd[1457]: calic1a9df9437b: Link UP
Jul 16 00:01:49.315310 systemd-networkd[1457]: calic1a9df9437b: Gained carrier
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.248 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0 calico-apiserver-768976f5fc- calico-apiserver 4648f8e9-0227-4352-9e38-bf61552be348 829 0 2025-07-16 00:01:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768976f5fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-768976f5fc-nds6v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1a9df9437b [] [] }} ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.249 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.273 [INFO][5180] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" HandleID="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Workload="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.273 [INFO][5180] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" HandleID="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Workload="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000389ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-768976f5fc-nds6v", "timestamp":"2025-07-16 00:01:49.273470603 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.273 [INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.273 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.274 [INFO][5180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.281 [INFO][5180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.286 [INFO][5180] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.290 [INFO][5180] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.291 [INFO][5180] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.293 [INFO][5180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.294 [INFO][5180] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.295 [INFO][5180] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.301 [INFO][5180] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.307 [INFO][5180] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.307 [INFO][5180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" host="localhost"
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.307 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 16 00:01:49.332236 containerd[1557]: 2025-07-16 00:01:49.307 [INFO][5180] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" HandleID="k8s-pod-network.27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Workload="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.332839 containerd[1557]: 2025-07-16 00:01:49.311 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0", GenerateName:"calico-apiserver-768976f5fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4648f8e9-0227-4352-9e38-bf61552be348", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768976f5fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-768976f5fc-nds6v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1a9df9437b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 16 00:01:49.332839 containerd[1557]: 2025-07-16 00:01:49.311 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.332839 containerd[1557]: 2025-07-16 00:01:49.311 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1a9df9437b ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.332839 containerd[1557]: 2025-07-16 00:01:49.314 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.332839 containerd[1557]: 2025-07-16 00:01:49.315 [INFO][5165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0", GenerateName:"calico-apiserver-768976f5fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4648f8e9-0227-4352-9e38-bf61552be348", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 1, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768976f5fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af", Pod:"calico-apiserver-768976f5fc-nds6v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1a9df9437b", MAC:"02:17:5b:fd:db:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 16 00:01:49.332839 containerd[1557]: 2025-07-16 00:01:49.325 [INFO][5165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" Namespace="calico-apiserver" Pod="calico-apiserver-768976f5fc-nds6v" WorkloadEndpoint="localhost-k8s-calico--apiserver--768976f5fc--nds6v-eth0"
Jul 16 00:01:49.375849 containerd[1557]: time="2025-07-16T00:01:49.375805050Z" level=info msg="connecting to shim 27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af" address="unix:///run/containerd/s/eefc5fa46296a163b1a2dbe3fbe879a4a97e59146df91cd8e8d3efa860130a56" namespace=k8s.io protocol=ttrpc version=3
Jul 16 00:01:49.386520 containerd[1557]: time="2025-07-16T00:01:49.386466917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:49.388172 containerd[1557]: time="2025-07-16T00:01:49.388135692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 16 00:01:49.389273 containerd[1557]: time="2025-07-16T00:01:49.389235975Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:49.391078 containerd[1557]: time="2025-07-16T00:01:49.391035335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:49.391678 containerd[1557]: time="2025-07-16T00:01:49.391634493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.049541048s"
Jul 16 00:01:49.391678 containerd[1557]: time="2025-07-16T00:01:49.391673067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 16 00:01:49.392844 containerd[1557]: time="2025-07-16T00:01:49.392814797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 16 00:01:49.397104 containerd[1557]: time="2025-07-16T00:01:49.397070125Z" level=info msg="CreateContainer within sandbox \"a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 16 00:01:49.407593 systemd[1]: Started cri-containerd-27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af.scope - libcontainer container 27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af.
Jul 16 00:01:49.412114 containerd[1557]: time="2025-07-16T00:01:49.412062632Z" level=info msg="Container 37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:49.419473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444692487.mount: Deactivated successfully.
Jul 16 00:01:49.425165 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 16 00:01:49.427312 containerd[1557]: time="2025-07-16T00:01:49.427244877Z" level=info msg="CreateContainer within sandbox \"a7aacd135cdcd182ef81ce503c17c0d17f783b3e4156a5664b1f2b352447a40c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a\""
Jul 16 00:01:49.429025 containerd[1557]: time="2025-07-16T00:01:49.428127229Z" level=info msg="StartContainer for \"37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a\""
Jul 16 00:01:49.430911 containerd[1557]: time="2025-07-16T00:01:49.430890184Z" level=info msg="connecting to shim 37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a" address="unix:///run/containerd/s/29de11827ef51d3de7bedb67555228ebfbcf0c87434968d2d587b7707e18323b" protocol=ttrpc version=3
Jul 16 00:01:49.459676 systemd[1]: Started cri-containerd-37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a.scope - libcontainer container 37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a.
Jul 16 00:01:49.465110 containerd[1557]: time="2025-07-16T00:01:49.464956618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768976f5fc-nds6v,Uid:4648f8e9-0227-4352-9e38-bf61552be348,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af\""
Jul 16 00:01:49.469282 containerd[1557]: time="2025-07-16T00:01:49.469247042Z" level=info msg="CreateContainer within sandbox \"27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 16 00:01:49.478754 containerd[1557]: time="2025-07-16T00:01:49.478719248Z" level=info msg="Container 472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:49.489217 containerd[1557]: time="2025-07-16T00:01:49.489185287Z" level=info msg="CreateContainer within sandbox \"27e163eb68014982abe10520afd0e6de6e1b5655056fe7df2b8636fe750a19af\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6\""
Jul 16 00:01:49.489922 containerd[1557]: time="2025-07-16T00:01:49.489893030Z" level=info msg="StartContainer for \"472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6\""
Jul 16 00:01:49.491270 containerd[1557]: time="2025-07-16T00:01:49.491244427Z" level=info msg="connecting to shim 472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6" address="unix:///run/containerd/s/eefc5fa46296a163b1a2dbe3fbe879a4a97e59146df91cd8e8d3efa860130a56" protocol=ttrpc version=3
Jul 16 00:01:49.516648 systemd[1]: Started cri-containerd-472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6.scope - libcontainer container 472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6.
Jul 16 00:01:49.522365 containerd[1557]: time="2025-07-16T00:01:49.522310438Z" level=info msg="StartContainer for \"37f78978f8f3dfb9defcad1be133222004dcfa645283b40af165605d21cb023a\" returns successfully"
Jul 16 00:01:49.729183 containerd[1557]: time="2025-07-16T00:01:49.729052295Z" level=info msg="StartContainer for \"472bf53b8954e8a4a26960d589c49247ecfa3b77cddf5e7270a0e2b5dfd4eff6\" returns successfully"
Jul 16 00:01:50.529549 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:46326.service - OpenSSH per-connection server daemon (10.0.0.1:46326).
Jul 16 00:01:50.591744 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 46326 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:01:50.593195 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:01:50.597693 systemd-logind[1537]: New session 13 of user core.
Jul 16 00:01:50.605568 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 16 00:01:50.667954 kubelet[2732]: I0716 00:01:50.667897 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768976f5fc-nds6v" podStartSLOduration=41.667879952 podStartE2EDuration="41.667879952s" podCreationTimestamp="2025-07-16 00:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:01:50.66710967 +0000 UTC m=+54.640869205" watchObservedRunningTime="2025-07-16 00:01:50.667879952 +0000 UTC m=+54.641639487"
Jul 16 00:01:50.932557 systemd-networkd[1457]: calic1a9df9437b: Gained IPv6LL
Jul 16 00:01:51.137927 kubelet[2732]: I0716 00:01:51.136235 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-768976f5fc-8jddx" podStartSLOduration=31.385621355 podStartE2EDuration="42.136197662s" podCreationTimestamp="2025-07-16 00:01:09 +0000 UTC" firstStartedPulling="2025-07-16 00:01:38.641766019 +0000 UTC m=+42.615525555" lastFinishedPulling="2025-07-16 00:01:49.392342327 +0000 UTC m=+53.366101862" observedRunningTime="2025-07-16 00:01:51.131742631 +0000 UTC m=+55.105502156" watchObservedRunningTime="2025-07-16 00:01:51.136197662 +0000 UTC m=+55.109957217"
Jul 16 00:01:51.146843 sshd[5324]: Connection closed by 10.0.0.1 port 46326
Jul 16 00:01:51.147593 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:51.153603 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:46326.service: Deactivated successfully.
Jul 16 00:01:51.156169 systemd[1]: session-13.scope: Deactivated successfully.
Jul 16 00:01:51.157091 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit.
Jul 16 00:01:51.159241 systemd-logind[1537]: Removed session 13.
Jul 16 00:01:51.365973 kubelet[2732]: I0716 00:01:51.365919 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 16 00:01:53.874315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477977252.mount: Deactivated successfully.
Jul 16 00:01:53.939837 containerd[1557]: time="2025-07-16T00:01:53.939760821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:53.940661 containerd[1557]: time="2025-07-16T00:01:53.940559979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 16 00:01:53.941835 containerd[1557]: time="2025-07-16T00:01:53.941798978Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:53.943999 containerd[1557]: time="2025-07-16T00:01:53.943968993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:53.944783 containerd[1557]: time="2025-07-16T00:01:53.944747592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.55189841s"
Jul 16 00:01:53.944819 containerd[1557]: time="2025-07-16T00:01:53.944783541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 16 00:01:53.945917 containerd[1557]: time="2025-07-16T00:01:53.945884438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 16 00:01:53.947442 containerd[1557]: time="2025-07-16T00:01:53.947002118Z" level=info msg="CreateContainer within sandbox \"b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 16 00:01:53.956660 containerd[1557]: time="2025-07-16T00:01:53.956624449Z" level=info msg="Container 9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:53.966307 containerd[1557]: time="2025-07-16T00:01:53.966260667Z" level=info msg="CreateContainer within sandbox \"b01d749a86535287ece044033fbae249af1657fc049216890899e0a59292f0d7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf\""
Jul 16 00:01:53.966782 containerd[1557]: time="2025-07-16T00:01:53.966743588Z" level=info msg="StartContainer for \"9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf\""
Jul 16 00:01:53.967980 containerd[1557]: time="2025-07-16T00:01:53.967949314Z" level=info msg="connecting to shim 9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf" address="unix:///run/containerd/s/3c141c881fca1229011329b1404eb713536f8834206f4fe6a9a3dba22ec1ec17" protocol=ttrpc version=3
Jul 16 00:01:54.002525 systemd[1]: Started cri-containerd-9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf.scope - libcontainer container 9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf.
Jul 16 00:01:54.166074 containerd[1557]: time="2025-07-16T00:01:54.165956431Z" level=info msg="StartContainer for \"9c642c84638810bb037fdb5dfa49a4c41ddd5080f357a35a16a0049f742869cf\" returns successfully"
Jul 16 00:01:55.793899 containerd[1557]: time="2025-07-16T00:01:55.793840181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\" id:\"849f0b77fd513f92d927286749668c4a66e2a3a2f47345aa3a7fefdfbea059c9\" pid:5409 exit_status:1 exited_at:{seconds:1752624115 nanos:793404088}"
Jul 16 00:01:56.074344 containerd[1557]: time="2025-07-16T00:01:56.074197018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:56.075142 containerd[1557]: time="2025-07-16T00:01:56.075086598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 16 00:01:56.076299 containerd[1557]: time="2025-07-16T00:01:56.076262781Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:56.078503 containerd[1557]: time="2025-07-16T00:01:56.078449634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 16 00:01:56.078932 containerd[1557]: time="2025-07-16T00:01:56.078894033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.132973436s"
Jul 16 00:01:56.078932 containerd[1557]: time="2025-07-16T00:01:56.078923599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 16 00:01:56.081296 containerd[1557]: time="2025-07-16T00:01:56.080865829Z" level=info msg="CreateContainer within sandbox \"481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 16 00:01:56.090340 containerd[1557]: time="2025-07-16T00:01:56.090289978Z" level=info msg="Container 23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5: CDI devices from CRI Config.CDIDevices: []"
Jul 16 00:01:56.170862 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:46336.service - OpenSSH per-connection server daemon (10.0.0.1:46336).
Jul 16 00:01:56.182944 containerd[1557]: time="2025-07-16T00:01:56.182889809Z" level=info msg="CreateContainer within sandbox \"481528788bc2d632f1b1d433235c9fd7dfda36708212db10ee708a626c687624\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5\""
Jul 16 00:01:56.183325 containerd[1557]: time="2025-07-16T00:01:56.183300535Z" level=info msg="StartContainer for \"23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5\""
Jul 16 00:01:56.185013 containerd[1557]: time="2025-07-16T00:01:56.184986921Z" level=info msg="connecting to shim 23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5" address="unix:///run/containerd/s/3df6a222177a0acea4daa8205b80c91e8f5ee903fa7665995eea09f2e75d438f" protocol=ttrpc version=3
Jul 16 00:01:56.229790 systemd[1]: Started cri-containerd-23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5.scope - libcontainer container 23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5.
Jul 16 00:01:56.249804 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 46336 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:01:56.251574 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:01:56.258297 systemd-logind[1537]: New session 14 of user core.
Jul 16 00:01:56.266549 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 16 00:01:56.274224 containerd[1557]: time="2025-07-16T00:01:56.274163654Z" level=info msg="StartContainer for \"23d9fb08942fe71331b52b188c70c61c4125c1469552e0777f334e8b58db1cd5\" returns successfully"
Jul 16 00:01:56.397422 kubelet[2732]: I0716 00:01:56.396238 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79586dd98-grp4f" podStartSLOduration=3.42088183 podStartE2EDuration="20.396222051s" podCreationTimestamp="2025-07-16 00:01:36 +0000 UTC" firstStartedPulling="2025-07-16 00:01:36.970318652 +0000 UTC m=+40.944078187" lastFinishedPulling="2025-07-16 00:01:53.945658872 +0000 UTC m=+57.919418408" observedRunningTime="2025-07-16 00:01:54.393651049 +0000 UTC m=+58.367410575" watchObservedRunningTime="2025-07-16 00:01:56.396222051 +0000 UTC m=+60.369981587"
Jul 16 00:01:56.417438 sshd[5457]: Connection closed by 10.0.0.1 port 46336
Jul 16 00:01:56.417734 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:56.421660 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:46336.service: Deactivated successfully.
Jul 16 00:01:56.423837 systemd[1]: session-14.scope: Deactivated successfully.
Jul 16 00:01:56.424544 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit.
Jul 16 00:01:56.425878 systemd-logind[1537]: Removed session 14.
Jul 16 00:01:57.259162 kubelet[2732]: I0716 00:01:57.259121 2732 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 16 00:01:57.259162 kubelet[2732]: I0716 00:01:57.259181 2732 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 16 00:01:59.988887 containerd[1557]: time="2025-07-16T00:01:59.988839973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\" id:\"8977a7d2c226929ff57409d9783339a655a6977f80c04f3a8828e15b3956859e\" pid:5493 exited_at:{seconds:1752624119 nanos:988461967}"
Jul 16 00:02:00.137161 kubelet[2732]: I0716 00:02:00.137057 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wwvbq" podStartSLOduration=29.70531789 podStartE2EDuration="48.137038894s" podCreationTimestamp="2025-07-16 00:01:12 +0000 UTC" firstStartedPulling="2025-07-16 00:01:37.647874825 +0000 UTC m=+41.621634360" lastFinishedPulling="2025-07-16 00:01:56.079595829 +0000 UTC m=+60.053355364" observedRunningTime="2025-07-16 00:01:56.398936401 +0000 UTC m=+60.372695936" watchObservedRunningTime="2025-07-16 00:02:00.137038894 +0000 UTC m=+64.110798429"
Jul 16 00:02:01.439911 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:60850.service - OpenSSH per-connection server daemon (10.0.0.1:60850).
Jul 16 00:02:01.675295 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 60850 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:01.676888 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:01.683333 systemd-logind[1537]: New session 15 of user core.
Jul 16 00:02:01.691689 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 16 00:02:01.837215 sshd[5511]: Connection closed by 10.0.0.1 port 60850
Jul 16 00:02:01.837602 sshd-session[5509]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:01.845101 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:60850.service: Deactivated successfully.
Jul 16 00:02:01.847339 systemd[1]: session-15.scope: Deactivated successfully.
Jul 16 00:02:01.848232 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit.
Jul 16 00:02:01.849712 systemd-logind[1537]: Removed session 15.
Jul 16 00:02:06.849507 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:60876.service - OpenSSH per-connection server daemon (10.0.0.1:60876).
Jul 16 00:02:06.901208 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 60876 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:06.902749 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:06.907420 systemd-logind[1537]: New session 16 of user core.
Jul 16 00:02:06.919511 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 16 00:02:07.105621 sshd[5526]: Connection closed by 10.0.0.1 port 60876
Jul 16 00:02:07.105845 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:07.111425 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:60876.service: Deactivated successfully.
Jul 16 00:02:07.113305 systemd[1]: session-16.scope: Deactivated successfully.
Jul 16 00:02:07.114096 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit.
Jul 16 00:02:07.115156 systemd-logind[1537]: Removed session 16.
Jul 16 00:02:11.775909 containerd[1557]: time="2025-07-16T00:02:11.775812176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\" id:\"5f4a86728d4ef6664bb363a40a1a327c2f06fccc2335983016a67ef005755a7b\" pid:5555 exited_at:{seconds:1752624131 nanos:775469626}"
Jul 16 00:02:12.121297 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:60138.service - OpenSSH per-connection server daemon (10.0.0.1:60138).
Jul 16 00:02:12.318458 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 60138 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:12.319749 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:12.323938 systemd-logind[1537]: New session 17 of user core.
Jul 16 00:02:12.330604 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 16 00:02:12.467513 sshd[5568]: Connection closed by 10.0.0.1 port 60138
Jul 16 00:02:12.468250 sshd-session[5566]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:12.474474 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:60138.service: Deactivated successfully.
Jul 16 00:02:12.476522 systemd[1]: session-17.scope: Deactivated successfully.
Jul 16 00:02:12.477438 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit.
Jul 16 00:02:12.478773 systemd-logind[1537]: Removed session 17.
Jul 16 00:02:15.185742 kubelet[2732]: I0716 00:02:15.185690 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 16 00:02:17.484316 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:60152.service - OpenSSH per-connection server daemon (10.0.0.1:60152).
Jul 16 00:02:18.095668 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 60152 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:18.097130 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:18.101559 systemd-logind[1537]: New session 18 of user core.
Jul 16 00:02:18.111507 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 16 00:02:18.367923 sshd[5591]: Connection closed by 10.0.0.1 port 60152
Jul 16 00:02:18.368894 sshd-session[5589]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:18.379987 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:60152.service: Deactivated successfully.
Jul 16 00:02:18.382482 systemd[1]: session-18.scope: Deactivated successfully.
Jul 16 00:02:18.383935 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit.
Jul 16 00:02:18.387726 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:40708.service - OpenSSH per-connection server daemon (10.0.0.1:40708).
Jul 16 00:02:18.388659 systemd-logind[1537]: Removed session 18.
Jul 16 00:02:18.436354 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 40708 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:18.438083 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:18.443437 systemd-logind[1537]: New session 19 of user core.
Jul 16 00:02:18.449597 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 16 00:02:18.651622 sshd[5607]: Connection closed by 10.0.0.1 port 40708
Jul 16 00:02:18.651943 sshd-session[5605]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:18.661585 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:40708.service: Deactivated successfully.
Jul 16 00:02:18.663737 systemd[1]: session-19.scope: Deactivated successfully.
Jul 16 00:02:18.664629 systemd-logind[1537]: Session 19 logged out. Waiting for processes to exit.
Jul 16 00:02:18.668214 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:40716.service - OpenSSH per-connection server daemon (10.0.0.1:40716).
Jul 16 00:02:18.669346 systemd-logind[1537]: Removed session 19.
Jul 16 00:02:18.721462 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 40716 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:18.723337 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:18.728530 systemd-logind[1537]: New session 20 of user core.
Jul 16 00:02:18.741566 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 16 00:02:21.102589 sshd[5621]: Connection closed by 10.0.0.1 port 40716
Jul 16 00:02:21.102918 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:21.116979 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:40716.service: Deactivated successfully.
Jul 16 00:02:21.119047 systemd[1]: session-20.scope: Deactivated successfully.
Jul 16 00:02:21.119272 systemd[1]: session-20.scope: Consumed 662ms CPU time, 73.4M memory peak.
Jul 16 00:02:21.120282 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit.
Jul 16 00:02:21.124085 systemd-logind[1537]: Removed session 20.
Jul 16 00:02:21.125922 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:40736.service - OpenSSH per-connection server daemon (10.0.0.1:40736).
Jul 16 00:02:21.179670 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 40736 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:21.181317 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:21.186034 systemd-logind[1537]: New session 21 of user core.
Jul 16 00:02:21.194502 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 16 00:02:21.554989 sshd[5644]: Connection closed by 10.0.0.1 port 40736
Jul 16 00:02:21.555787 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:21.564670 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:40736.service: Deactivated successfully.
Jul 16 00:02:21.566663 systemd[1]: session-21.scope: Deactivated successfully.
Jul 16 00:02:21.567564 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit.
Jul 16 00:02:21.570904 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:40752.service - OpenSSH per-connection server daemon (10.0.0.1:40752).
Jul 16 00:02:21.571747 systemd-logind[1537]: Removed session 21.
Jul 16 00:02:21.622067 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 40752 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:21.623716 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:21.628654 systemd-logind[1537]: New session 22 of user core.
Jul 16 00:02:21.638513 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 16 00:02:21.750508 sshd[5657]: Connection closed by 10.0.0.1 port 40752
Jul 16 00:02:21.750811 sshd-session[5655]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:21.754791 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:40752.service: Deactivated successfully.
Jul 16 00:02:21.757195 systemd[1]: session-22.scope: Deactivated successfully.
Jul 16 00:02:21.757938 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit.
Jul 16 00:02:21.759255 systemd-logind[1537]: Removed session 22.
Jul 16 00:02:25.795590 containerd[1557]: time="2025-07-16T00:02:25.795548048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d05e500a337506d40662fb81450fdda374a2a425da48ecaa2f7996817c06a04\" id:\"e73b61175bb0946ddf552a0509d110f25ddaf6fc878a886f8f35d931ee86f69b\" pid:5680 exited_at:{seconds:1752624145 nanos:795226416}"
Jul 16 00:02:26.764535 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:40838.service - OpenSSH per-connection server daemon (10.0.0.1:40838).
Jul 16 00:02:26.815082 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 40838 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:26.816429 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:26.820744 systemd-logind[1537]: New session 23 of user core.
Jul 16 00:02:26.830521 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 16 00:02:26.939970 sshd[5699]: Connection closed by 10.0.0.1 port 40838
Jul 16 00:02:26.940269 sshd-session[5697]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:26.943988 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:40838.service: Deactivated successfully.
Jul 16 00:02:26.945884 systemd[1]: session-23.scope: Deactivated successfully.
Jul 16 00:02:26.946797 systemd-logind[1537]: Session 23 logged out. Waiting for processes to exit.
Jul 16 00:02:26.948110 systemd-logind[1537]: Removed session 23.
Jul 16 00:02:27.258660 containerd[1557]: time="2025-07-16T00:02:27.258417892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\" id:\"7c64c7ec34d285785d04d4a683f8da9188907bf6d099bd4e5f2a156feb1b6a89\" pid:5724 exited_at:{seconds:1752624147 nanos:258220587}"
Jul 16 00:02:29.977341 containerd[1557]: time="2025-07-16T00:02:29.977272858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\" id:\"99d7d1b24cb8d31abf69eed59e9bf9668c0f125c9b32a450ece50b739081a913\" pid:5748 exited_at:{seconds:1752624149 nanos:976928903}"
Jul 16 00:02:31.952565 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:38616.service - OpenSSH per-connection server daemon (10.0.0.1:38616).
Jul 16 00:02:32.003873 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 38616 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:32.005740 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:32.010788 systemd-logind[1537]: New session 24 of user core.
Jul 16 00:02:32.020558 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 16 00:02:32.141746 sshd[5765]: Connection closed by 10.0.0.1 port 38616
Jul 16 00:02:32.142087 sshd-session[5763]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:32.149804 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:38616.service: Deactivated successfully.
Jul 16 00:02:32.154030 systemd[1]: session-24.scope: Deactivated successfully.
Jul 16 00:02:32.157550 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit.
Jul 16 00:02:32.161501 systemd-logind[1537]: Removed session 24.
Jul 16 00:02:37.161097 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:38630.service - OpenSSH per-connection server daemon (10.0.0.1:38630).
Jul 16 00:02:37.218643 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 38630 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:37.220125 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:37.224614 systemd-logind[1537]: New session 25 of user core.
Jul 16 00:02:37.230562 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 16 00:02:37.366074 sshd[5780]: Connection closed by 10.0.0.1 port 38630
Jul 16 00:02:37.366411 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:37.370951 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:38630.service: Deactivated successfully.
Jul 16 00:02:37.373117 systemd[1]: session-25.scope: Deactivated successfully.
Jul 16 00:02:37.374065 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit.
Jul 16 00:02:37.375597 systemd-logind[1537]: Removed session 25.
Jul 16 00:02:40.713327 containerd[1557]: time="2025-07-16T00:02:40.713278663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc1125e4522327eeb79574f2f42fb76dbf947619bd51ec9240312c96199c8662\" id:\"f35da26379ffb8fd251b374f827bc058244f57d7e5da112fe447cfc46700d1de\" pid:5804 exited_at:{seconds:1752624160 nanos:712894671}"
Jul 16 00:02:41.769445 containerd[1557]: time="2025-07-16T00:02:41.769358717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215a2c0f6af2734098e89164477caf5eef8a47c1f2bddfc96f176671fc516cf9\" id:\"41675797ee954a6f1fe99ce4f7005afe0c1e6e39bfb4864720da65e6323447f9\" pid:5827 exited_at:{seconds:1752624161 nanos:768791716}"
Jul 16 00:02:42.382689 systemd[1]: Started sshd@25-10.0.0.151:22-10.0.0.1:53244.service - OpenSSH per-connection server daemon (10.0.0.1:53244).
Jul 16 00:02:42.433144 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 53244 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 16 00:02:42.435138 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:02:42.439789 systemd-logind[1537]: New session 26 of user core.
Jul 16 00:02:42.447506 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 16 00:02:42.598915 sshd[5840]: Connection closed by 10.0.0.1 port 53244
Jul 16 00:02:42.599245 sshd-session[5838]: pam_unix(sshd:session): session closed for user core
Jul 16 00:02:42.603444 systemd[1]: sshd@25-10.0.0.151:22-10.0.0.1:53244.service: Deactivated successfully.
Jul 16 00:02:42.605696 systemd[1]: session-26.scope: Deactivated successfully.
Jul 16 00:02:42.606564 systemd-logind[1537]: Session 26 logged out. Waiting for processes to exit.
Jul 16 00:02:42.607880 systemd-logind[1537]: Removed session 26.