Jul 12 00:14:13.161910 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 22:06:57 -00 2025
Jul 12 00:14:13.161935 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:14:13.161944 kernel: BIOS-provided physical RAM map:
Jul 12 00:14:13.161951 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 12 00:14:13.161957 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 12 00:14:13.161964 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 12 00:14:13.161972 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 12 00:14:13.161980 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 12 00:14:13.161991 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 12 00:14:13.161998 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 12 00:14:13.162004 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 12 00:14:13.162011 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 12 00:14:13.162017 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 12 00:14:13.162024 kernel: NX (Execute Disable) protection: active
Jul 12 00:14:13.162035 kernel: APIC: Static calls initialized
Jul 12 00:14:13.162042 kernel: SMBIOS 2.8 present.
Jul 12 00:14:13.162052 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 12 00:14:13.162059 kernel: DMI: Memory slots populated: 1/1
Jul 12 00:14:13.162066 kernel: Hypervisor detected: KVM
Jul 12 00:14:13.162074 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 12 00:14:13.162081 kernel: kvm-clock: using sched offset of 3923362828 cycles
Jul 12 00:14:13.162088 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 12 00:14:13.162096 kernel: tsc: Detected 2794.746 MHz processor
Jul 12 00:14:13.162103 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 12 00:14:13.162113 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 12 00:14:13.162121 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 12 00:14:13.162128 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 12 00:14:13.162135 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 12 00:14:13.162143 kernel: Using GB pages for direct mapping
Jul 12 00:14:13.162150 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:14:13.162158 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 12 00:14:13.162165 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162175 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162182 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162190 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 12 00:14:13.162197 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162204 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162212 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162219 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:14:13.162236 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 12 00:14:13.162250 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 12 00:14:13.162257 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 12 00:14:13.162265 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 12 00:14:13.162272 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 12 00:14:13.162280 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 12 00:14:13.162287 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 12 00:14:13.162297 kernel: No NUMA configuration found
Jul 12 00:14:13.162304 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 12 00:14:13.162312 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 12 00:14:13.162320 kernel: Zone ranges:
Jul 12 00:14:13.162327 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 12 00:14:13.162335 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 12 00:14:13.162342 kernel: Normal empty
Jul 12 00:14:13.162350 kernel: Device empty
Jul 12 00:14:13.162357 kernel: Movable zone start for each node
Jul 12 00:14:13.162364 kernel: Early memory node ranges
Jul 12 00:14:13.162374 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 12 00:14:13.162382 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 12 00:14:13.162389 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 12 00:14:13.162397 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 12 00:14:13.162404 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 12 00:14:13.162412 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 12 00:14:13.162419 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 12 00:14:13.162430 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 12 00:14:13.162437 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 12 00:14:13.162447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 12 00:14:13.162455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 12 00:14:13.162464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 12 00:14:13.162472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 12 00:14:13.162480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 12 00:14:13.162487 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 12 00:14:13.162495 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 12 00:14:13.162502 kernel: TSC deadline timer available
Jul 12 00:14:13.162510 kernel: CPU topo: Max. logical packages: 1
Jul 12 00:14:13.162520 kernel: CPU topo: Max. logical dies: 1
Jul 12 00:14:13.162527 kernel: CPU topo: Max. dies per package: 1
Jul 12 00:14:13.162535 kernel: CPU topo: Max. threads per core: 1
Jul 12 00:14:13.162542 kernel: CPU topo: Num. cores per package: 4
Jul 12 00:14:13.162550 kernel: CPU topo: Num. threads per package: 4
Jul 12 00:14:13.162557 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 12 00:14:13.162565 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 12 00:14:13.162572 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 12 00:14:13.162580 kernel: kvm-guest: setup PV sched yield
Jul 12 00:14:13.162600 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 12 00:14:13.162610 kernel: Booting paravirtualized kernel on KVM
Jul 12 00:14:13.162617 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 12 00:14:13.162625 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 12 00:14:13.162633 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 12 00:14:13.162640 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 12 00:14:13.162648 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 12 00:14:13.162655 kernel: kvm-guest: PV spinlocks enabled
Jul 12 00:14:13.162663 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 12 00:14:13.162671 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:14:13.162682 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:14:13.162689 kernel: random: crng init done
Jul 12 00:14:13.162697 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:14:13.162704 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:14:13.162712 kernel: Fallback order for Node 0: 0
Jul 12 00:14:13.162720 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 12 00:14:13.162727 kernel: Policy zone: DMA32
Jul 12 00:14:13.162735 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:14:13.162745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:14:13.162752 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 12 00:14:13.162760 kernel: ftrace: allocated 157 pages with 5 groups
Jul 12 00:14:13.162767 kernel: Dynamic Preempt: voluntary
Jul 12 00:14:13.162775 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:14:13.162783 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:14:13.162791 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:14:13.162798 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:14:13.162808 kernel: Rude variant of Tasks RCU enabled.
Jul 12 00:14:13.162830 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:14:13.162848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:14:13.162856 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:14:13.162864 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:14:13.162872 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:14:13.162880 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:14:13.162887 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 12 00:14:13.162895 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:14:13.162913 kernel: Console: colour VGA+ 80x25
Jul 12 00:14:13.162921 kernel: printk: legacy console [ttyS0] enabled
Jul 12 00:14:13.162929 kernel: ACPI: Core revision 20240827
Jul 12 00:14:13.162937 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 12 00:14:13.162948 kernel: APIC: Switch to symmetric I/O mode setup
Jul 12 00:14:13.162956 kernel: x2apic enabled
Jul 12 00:14:13.162967 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 12 00:14:13.162975 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 12 00:14:13.162983 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 12 00:14:13.162993 kernel: kvm-guest: setup PV IPIs
Jul 12 00:14:13.163001 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 12 00:14:13.163009 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 00:14:13.163017 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 12 00:14:13.163025 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 12 00:14:13.163032 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 12 00:14:13.163040 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 12 00:14:13.163048 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 12 00:14:13.163056 kernel: Spectre V2 : Mitigation: Retpolines
Jul 12 00:14:13.163066 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 12 00:14:13.163074 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 12 00:14:13.163082 kernel: RETBleed: Mitigation: untrained return thunk
Jul 12 00:14:13.163090 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 12 00:14:13.163097 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 12 00:14:13.163106 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 12 00:14:13.163114 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 12 00:14:13.163122 kernel: x86/bugs: return thunk changed
Jul 12 00:14:13.163141 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 12 00:14:13.163157 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 12 00:14:13.163165 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 12 00:14:13.163173 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 12 00:14:13.163181 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 12 00:14:13.163189 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 12 00:14:13.163197 kernel: Freeing SMP alternatives memory: 32K
Jul 12 00:14:13.163208 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:14:13.163216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 12 00:14:13.163226 kernel: landlock: Up and running.
Jul 12 00:14:13.163234 kernel: SELinux: Initializing.
Jul 12 00:14:13.163242 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:14:13.163253 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:14:13.163261 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 12 00:14:13.163269 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 12 00:14:13.163277 kernel: ... version: 0
Jul 12 00:14:13.163284 kernel: ... bit width: 48
Jul 12 00:14:13.163292 kernel: ... generic registers: 6
Jul 12 00:14:13.163302 kernel: ... value mask: 0000ffffffffffff
Jul 12 00:14:13.163310 kernel: ... max period: 00007fffffffffff
Jul 12 00:14:13.163318 kernel: ... fixed-purpose events: 0
Jul 12 00:14:13.163325 kernel: ... event mask: 000000000000003f
Jul 12 00:14:13.163333 kernel: signal: max sigframe size: 1776
Jul 12 00:14:13.163341 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:14:13.163349 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:14:13.163357 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 12 00:14:13.163365 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:14:13.163375 kernel: smpboot: x86: Booting SMP configuration:
Jul 12 00:14:13.163383 kernel: .... node #0, CPUs: #1 #2 #3
Jul 12 00:14:13.163390 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:14:13.163398 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 12 00:14:13.163406 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 136904K reserved, 0K cma-reserved)
Jul 12 00:14:13.163414 kernel: devtmpfs: initialized
Jul 12 00:14:13.163422 kernel: x86/mm: Memory block size: 128MB
Jul 12 00:14:13.163430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:14:13.163438 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:14:13.163448 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:14:13.163456 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:14:13.163464 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:14:13.163472 kernel: audit: type=2000 audit(1752279249.636:1): state=initialized audit_enabled=0 res=1
Jul 12 00:14:13.163480 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:14:13.163487 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 12 00:14:13.163495 kernel: cpuidle: using governor menu
Jul 12 00:14:13.163503 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:14:13.163511 kernel: dca service started, version 1.12.1
Jul 12 00:14:13.163521 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 12 00:14:13.163528 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 12 00:14:13.163536 kernel: PCI: Using configuration type 1 for base access
Jul 12 00:14:13.163544 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 12 00:14:13.163552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:14:13.163560 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:14:13.163568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:14:13.163576 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:14:13.163592 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:14:13.163602 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:14:13.163610 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:14:13.163618 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:14:13.163625 kernel: ACPI: Interpreter enabled
Jul 12 00:14:13.163634 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 12 00:14:13.163642 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 12 00:14:13.163650 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 12 00:14:13.163657 kernel: PCI: Using E820 reservations for host bridge windows
Jul 12 00:14:13.163665 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 12 00:14:13.163675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:14:13.163869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:14:13.163994 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 12 00:14:13.164109 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 12 00:14:13.164120 kernel: PCI host bridge to bus 0000:00
Jul 12 00:14:13.164238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 12 00:14:13.164344 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 12 00:14:13.164458 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 12 00:14:13.164564 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 12 00:14:13.164684 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 12 00:14:13.164790 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 12 00:14:13.164916 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:14:13.165053 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 12 00:14:13.165186 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 12 00:14:13.165304 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 12 00:14:13.165419 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 12 00:14:13.165537 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 12 00:14:13.165670 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 12 00:14:13.165864 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 12 00:14:13.165988 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 12 00:14:13.166109 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 12 00:14:13.166224 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 12 00:14:13.166349 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 12 00:14:13.166467 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 12 00:14:13.166593 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 12 00:14:13.166712 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 12 00:14:13.166876 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 12 00:14:13.167006 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 12 00:14:13.167124 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 12 00:14:13.167241 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 12 00:14:13.167358 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 12 00:14:13.167482 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 12 00:14:13.167699 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 12 00:14:13.167859 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 12 00:14:13.167982 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 12 00:14:13.168099 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 12 00:14:13.168224 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 12 00:14:13.168342 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 12 00:14:13.168353 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 12 00:14:13.168361 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 12 00:14:13.168373 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 12 00:14:13.168381 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 12 00:14:13.168389 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 12 00:14:13.168397 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 12 00:14:13.168405 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 12 00:14:13.168413 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 12 00:14:13.168422 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 12 00:14:13.168430 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 12 00:14:13.168438 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 12 00:14:13.168448 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 12 00:14:13.168456 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 12 00:14:13.168464 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 12 00:14:13.168472 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 12 00:14:13.168480 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 12 00:14:13.168488 kernel: iommu: Default domain type: Translated
Jul 12 00:14:13.168496 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 12 00:14:13.168504 kernel: PCI: Using ACPI for IRQ routing
Jul 12 00:14:13.168511 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 12 00:14:13.168519 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 12 00:14:13.168529 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 12 00:14:13.168658 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 12 00:14:13.168774 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 12 00:14:13.168909 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 12 00:14:13.168921 kernel: vgaarb: loaded
Jul 12 00:14:13.168930 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 12 00:14:13.168938 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 12 00:14:13.168946 kernel: clocksource: Switched to clocksource kvm-clock
Jul 12 00:14:13.168957 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:14:13.168966 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:14:13.168974 kernel: pnp: PnP ACPI init
Jul 12 00:14:13.169100 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 12 00:14:13.169112 kernel: pnp: PnP ACPI: found 6 devices
Jul 12 00:14:13.169120 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 12 00:14:13.169128 kernel: NET: Registered PF_INET protocol family
Jul 12 00:14:13.169137 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:14:13.169148 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:14:13.169156 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:14:13.169164 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:14:13.169172 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:14:13.169180 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:14:13.169188 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:14:13.169196 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:14:13.169205 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:14:13.169213 kernel: NET: Registered PF_XDP protocol family
Jul 12 00:14:13.169322 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 12 00:14:13.169428 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 12 00:14:13.169534 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 12 00:14:13.169656 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 12 00:14:13.169763 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 12 00:14:13.169886 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 12 00:14:13.169898 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:14:13.169906 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 00:14:13.169918 kernel: Initialise system trusted keyrings
Jul 12 00:14:13.169926 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:14:13.169934 kernel: Key type asymmetric registered
Jul 12 00:14:13.169942 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:14:13.169950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:14:13.169958 kernel: io scheduler mq-deadline registered
Jul 12 00:14:13.169966 kernel: io scheduler kyber registered
Jul 12 00:14:13.169973 kernel: io scheduler bfq registered
Jul 12 00:14:13.169981 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 12 00:14:13.169992 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 12 00:14:13.170000 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 12 00:14:13.170008 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 12 00:14:13.170016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:14:13.170024 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 12 00:14:13.170033 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 12 00:14:13.170041 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 12 00:14:13.170048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 12 00:14:13.170056 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 12 00:14:13.170211 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 12 00:14:13.170335 kernel: rtc_cmos 00:04: registered as rtc0
Jul 12 00:14:13.170454 kernel: rtc_cmos 00:04: setting system clock to 2025-07-12T00:14:12 UTC (1752279252)
Jul 12 00:14:13.170594 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 12 00:14:13.170607 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 12 00:14:13.170615 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:14:13.170624 kernel: Segment Routing with IPv6
Jul 12 00:14:13.170634 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:14:13.170646 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:14:13.170655 kernel: Key type dns_resolver registered
Jul 12 00:14:13.170663 kernel: IPI shorthand broadcast: enabled
Jul 12 00:14:13.170671 kernel: sched_clock: Marking stable (3375002640, 239580703)->(3901018810, -286435467)
Jul 12 00:14:13.170679 kernel: hpet: Lost 2 RTC interrupts
Jul 12 00:14:13.170687 kernel: registered taskstats version 1
Jul 12 00:14:13.170695 kernel: Loading compiled-in X.509 certificates
Jul 12 00:14:13.170703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f8f9174ae27e6261b0ae25e5f0210210a376c8b8'
Jul 12 00:14:13.170711 kernel: Demotion targets for Node 0: null
Jul 12 00:14:13.170721 kernel: Key type .fscrypt registered
Jul 12 00:14:13.170729 kernel: Key type fscrypt-provisioning registered
Jul 12 00:14:13.170737 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:14:13.170745 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:14:13.170753 kernel: ima: No architecture policies found
Jul 12 00:14:13.170762 kernel: clk: Disabling unused clocks
Jul 12 00:14:13.170772 kernel: Warning: unable to open an initial console.
Jul 12 00:14:13.170781 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 12 00:14:13.170789 kernel: Write protecting the kernel read-only data: 24576k
Jul 12 00:14:13.170801 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 12 00:14:13.170809 kernel: Run /init as init process
Jul 12 00:14:13.170830 kernel: with arguments:
Jul 12 00:14:13.170839 kernel: /init
Jul 12 00:14:13.170849 kernel: with environment:
Jul 12 00:14:13.170857 kernel: HOME=/
Jul 12 00:14:13.170865 kernel: TERM=linux
Jul 12 00:14:13.170872 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:14:13.170883 systemd[1]: Successfully made /usr/ read-only.
Jul 12 00:14:13.170907 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:14:13.170918 systemd[1]: Detected virtualization kvm.
Jul 12 00:14:13.170927 systemd[1]: Detected architecture x86-64.
Jul 12 00:14:13.170935 systemd[1]: Running in initrd.
Jul 12 00:14:13.170946 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:14:13.170958 systemd[1]: Hostname set to .
Jul 12 00:14:13.170966 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:14:13.170977 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:14:13.170986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:14:13.170995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:14:13.171004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:14:13.171013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:14:13.171022 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:14:13.171036 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:14:13.171047 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:14:13.171056 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:14:13.171067 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:14:13.171076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:14:13.171084 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:14:13.171093 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:14:13.171104 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:14:13.171113 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:14:13.171122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:14:13.171133 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:14:13.171142 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:14:13.171151 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 12 00:14:13.171160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:14:13.171169 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:14:13.171182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:14:13.171192 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:14:13.171201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:14:13.171210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:14:13.171219 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:14:13.171230 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 12 00:14:13.171243 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:14:13.171252 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:14:13.171261 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:14:13.171272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:14:13.171281 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:14:13.171293 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:14:13.171301 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:14:13.171331 systemd-journald[220]: Collecting audit messages is disabled.
Jul 12 00:14:13.171354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:14:13.171364 systemd-journald[220]: Journal started
Jul 12 00:14:13.171384 systemd-journald[220]: Runtime Journal (/run/log/journal/7af8dbda38e744a38c9f3b4efce89099) is 6M, max 48.6M, 42.5M free.
Jul 12 00:14:13.160085 systemd-modules-load[221]: Inserted module 'overlay'
Jul 12 00:14:13.314757 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:14:13.314790 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:14:13.314838 kernel: Bridge firewalling registered
Jul 12 00:14:13.219468 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 12 00:14:13.318177 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:14:13.320674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:14:13.359606 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:14:13.365803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:14:13.369251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:14:13.399279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:14:13.404664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:14:13.415156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:14:13.416365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:14:13.419860 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:14:13.420262 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 12 00:14:13.422212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:14:13.466193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:14:13.482679 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:14:13.521381 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:14:13.566309 systemd-resolved[262]: Positive Trust Anchors:
Jul 12 00:14:13.566323 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:14:13.566359 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:14:13.569139 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jul 12 00:14:13.570335 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:14:13.573260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:14:13.608859 kernel: SCSI subsystem initialized
Jul 12 00:14:13.618867 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:14:13.629852 kernel: iscsi: registered transport (tcp)
Jul 12 00:14:13.651981 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:14:13.652024 kernel: QLogic iSCSI HBA Driver
Jul 12 00:14:13.674659 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:14:13.716972 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:14:13.768705 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:14:13.830517 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:14:13.833183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:14:13.895868 kernel: raid6: avx2x4 gen() 28729 MB/s
Jul 12 00:14:13.917860 kernel: raid6: avx2x2 gen() 30219 MB/s
Jul 12 00:14:13.934897 kernel: raid6: avx2x1 gen() 23115 MB/s
Jul 12 00:14:13.934945 kernel: raid6: using algorithm avx2x2 gen() 30219 MB/s
Jul 12 00:14:13.953129 kernel: raid6: .... xor() 17956 MB/s, rmw enabled
Jul 12 00:14:13.953202 kernel: raid6: using avx2x2 recovery algorithm
Jul 12 00:14:13.975872 kernel: xor: automatically using best checksumming function avx
Jul 12 00:14:14.200869 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:14:14.209531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:14:14.213965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:14:14.249928 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 12 00:14:14.264498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:14:14.267977 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:14:14.302052 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Jul 12 00:14:14.340021 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:14:14.343547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:14:14.425777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:14:14.436908 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:14:14.515847 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:14:14.521851 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 12 00:14:14.522640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:14:14.522769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:14:14.526672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:14:14.531853 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:14:14.532644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:14:14.539200 kernel: libata version 3.00 loaded.
Jul 12 00:14:14.535911 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:14:14.545857 kernel: AES CTR mode by8 optimization enabled
Jul 12 00:14:14.549925 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 12 00:14:14.551838 kernel: ahci 0000:00:1f.2: version 3.0
Jul 12 00:14:14.552020 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 12 00:14:14.555186 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 12 00:14:14.555363 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 12 00:14:14.555521 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:14:14.555539 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 12 00:14:14.556932 kernel: GPT:9289727 != 19775487
Jul 12 00:14:14.559904 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:14:14.559980 kernel: GPT:9289727 != 19775487
Jul 12 00:14:14.559995 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:14:14.560008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:14:14.627188 kernel: scsi host0: ahci
Jul 12 00:14:14.627553 kernel: scsi host1: ahci
Jul 12 00:14:14.649875 kernel: scsi host2: ahci
Jul 12 00:14:14.650253 kernel: scsi host3: ahci
Jul 12 00:14:14.650858 kernel: scsi host4: ahci
Jul 12 00:14:14.651905 kernel: scsi host5: ahci
Jul 12 00:14:14.652079 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 12 00:14:14.652092 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 12 00:14:14.652103 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 12 00:14:14.652124 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 12 00:14:14.652134 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 12 00:14:14.652144 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 12 00:14:14.694387 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:14:14.735086 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:14:14.735842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:14:14.752121 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:14:14.752445 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:14:14.762350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:14:14.763805 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:14:14.971886 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 12 00:14:14.971980 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 12 00:14:14.972907 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 12 00:14:14.974062 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 12 00:14:14.974085 kernel: ata3.00: applying bridge limits
Jul 12 00:14:14.974880 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 12 00:14:14.975857 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 12 00:14:14.976859 kernel: ata3.00: configured for UDMA/100
Jul 12 00:14:14.976887 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 12 00:14:14.977858 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 12 00:14:15.030889 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 12 00:14:15.031235 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:14:15.033421 disk-uuid[633]: Primary Header is updated.
Jul 12 00:14:15.033421 disk-uuid[633]: Secondary Entries is updated.
Jul 12 00:14:15.033421 disk-uuid[633]: Secondary Header is updated.
Jul 12 00:14:15.037398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:14:15.040865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:14:15.042843 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 12 00:14:15.435285 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:14:15.438249 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:14:15.441081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:14:15.443643 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:14:15.446999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:14:15.478123 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:14:16.045757 disk-uuid[634]: The operation has completed successfully.
Jul 12 00:14:16.047393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:14:16.081029 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:14:16.081166 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:14:16.127922 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:14:16.158029 sh[663]: Success
Jul 12 00:14:16.177967 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:14:16.178045 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:14:16.179050 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 12 00:14:16.188856 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 12 00:14:16.229888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:14:16.232385 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:14:16.250971 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:14:16.259026 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 12 00:14:16.259065 kernel: BTRFS: device fsid bb55a55d-83fd-4659-93e1-1a7697cb01ff devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (675)
Jul 12 00:14:16.259845 kernel: BTRFS info (device dm-0): first mount of filesystem bb55a55d-83fd-4659-93e1-1a7697cb01ff
Jul 12 00:14:16.261434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:14:16.261457 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 12 00:14:16.267644 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:14:16.268701 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 00:14:16.269780 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:14:16.271804 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:14:16.275272 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:14:16.307864 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708)
Jul 12 00:14:16.307945 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:14:16.309457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:14:16.309491 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:14:16.318869 kernel: BTRFS info (device vda6): last unmount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:14:16.320293 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:14:16.324515 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:14:16.418120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:14:16.423523 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:14:16.429309 ignition[755]: Ignition 2.21.0
Jul 12 00:14:16.429326 ignition[755]: Stage: fetch-offline
Jul 12 00:14:16.429370 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:14:16.429382 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:14:16.429515 ignition[755]: parsed url from cmdline: ""
Jul 12 00:14:16.429520 ignition[755]: no config URL provided
Jul 12 00:14:16.429526 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:14:16.429537 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:14:16.429564 ignition[755]: op(1): [started] loading QEMU firmware config module
Jul 12 00:14:16.429570 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:14:16.440902 ignition[755]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:14:16.481564 ignition[755]: parsing config with SHA512: f410b49f91c839548d61b21da85e74bec9a3c953cc5ab5c9cd4a2235eed9b103094ba95829437adca71782ce79a4bd08fc3fde3f576daf8d5459acf43a1ed8d7
Jul 12 00:14:16.489102 unknown[755]: fetched base config from "system"
Jul 12 00:14:16.489120 unknown[755]: fetched user config from "qemu"
Jul 12 00:14:16.489595 ignition[755]: fetch-offline: fetch-offline passed
Jul 12 00:14:16.489676 ignition[755]: Ignition finished successfully
Jul 12 00:14:16.493576 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:14:16.497778 systemd-networkd[851]: lo: Link UP
Jul 12 00:14:16.497789 systemd-networkd[851]: lo: Gained carrier
Jul 12 00:14:16.499694 systemd-networkd[851]: Enumeration completed
Jul 12 00:14:16.500193 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:14:16.500199 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:14:16.500925 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:14:16.501557 systemd-networkd[851]: eth0: Link UP
Jul 12 00:14:16.501561 systemd-networkd[851]: eth0: Gained carrier
Jul 12 00:14:16.501571 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:14:16.502625 systemd[1]: Reached target network.target - Network.
Jul 12 00:14:16.503234 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:14:16.504381 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:14:16.534886 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:14:16.550766 ignition[857]: Ignition 2.21.0
Jul 12 00:14:16.550782 ignition[857]: Stage: kargs
Jul 12 00:14:16.550934 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:14:16.550945 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:14:16.552396 ignition[857]: kargs: kargs passed
Jul 12 00:14:16.552454 ignition[857]: Ignition finished successfully
Jul 12 00:14:16.558037 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:14:16.560518 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:14:16.603257 ignition[865]: Ignition 2.21.0
Jul 12 00:14:16.603272 ignition[865]: Stage: disks
Jul 12 00:14:16.603418 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:14:16.603428 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:14:16.606848 ignition[865]: disks: disks passed
Jul 12 00:14:16.606954 ignition[865]: Ignition finished successfully
Jul 12 00:14:16.610896 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:14:16.611375 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:14:16.613243 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:14:16.615666 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:14:16.618320 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:14:16.620656 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:14:16.624475 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:14:16.662187 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 12 00:14:16.746606 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:14:16.752934 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:14:16.883877 kernel: EXT4-fs (vda9): mounted filesystem 0ad89691-b65b-416c-92a9-d1ab167398bb r/w with ordered data mode. Quota mode: none.
Jul 12 00:14:16.884809 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:14:16.885804 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:14:16.892899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:14:16.897609 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:14:16.898488 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:14:16.898542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:14:16.898570 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:14:16.919406 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:14:16.922205 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:14:16.926014 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883)
Jul 12 00:14:16.927860 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:14:16.927890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:14:16.929297 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:14:16.934871 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:14:16.986008 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:14:16.992798 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:14:16.999126 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:14:17.004439 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:14:17.134489 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:14:17.140769 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:14:17.144344 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:14:17.177529 kernel: BTRFS info (device vda6): last unmount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:14:17.200130 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:14:17.218180 ignition[997]: INFO : Ignition 2.21.0
Jul 12 00:14:17.218180 ignition[997]: INFO : Stage: mount
Jul 12 00:14:17.220660 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:14:17.220660 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:14:17.220660 ignition[997]: INFO : mount: mount passed
Jul 12 00:14:17.220660 ignition[997]: INFO : Ignition finished successfully
Jul 12 00:14:17.222773 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:14:17.226279 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:14:17.258463 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:14:17.262380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:14:17.287035 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010)
Jul 12 00:14:17.287094 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:14:17.287109 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:14:17.288139 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:14:17.293780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:14:17.325256 ignition[1027]: INFO : Ignition 2.21.0
Jul 12 00:14:17.325256 ignition[1027]: INFO : Stage: files
Jul 12 00:14:17.327340 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:14:17.327340 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:14:17.330408 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:14:17.331847 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:14:17.331847 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:14:17.336724 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:14:17.338338 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:14:17.340312 unknown[1027]: wrote ssh authorized keys file for user: core
Jul 12 00:14:17.341942 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:14:17.343977 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 12 00:14:17.346290 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 12 00:14:17.409664 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:14:17.934600 systemd-networkd[851]: eth0: Gained IPv6LL
Jul 12 00:14:17.984114 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 12 00:14:17.984114 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:14:17.988730 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:14:17.990743 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:14:17.992959 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:14:17.994975 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:14:17.997110 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:14:17.999133 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:14:18.001296 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:14:18.008698 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:14:18.011236 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:14:18.013353 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:14:18.016096 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:14:18.016096 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:14:18.016096 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 12 00:14:18.593706 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 00:14:19.150214 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:14:19.150214 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 00:14:19.154599 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:14:19.161398 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:14:19.161398 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 00:14:19.161398 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 12 00:14:19.166187 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:14:19.169536 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:14:19.169536 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 12 00:14:19.169536 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:14:19.285989 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:14:19.294786 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:14:19.296775 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:14:19.298269 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:14:19.299975 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:14:19.301925 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:14:19.304042 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:14:19.305980 ignition[1027]: INFO : files: files passed
Jul 12 00:14:19.306814 ignition[1027]: INFO : Ignition finished successfully
Jul 12 00:14:19.311854 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:14:19.314765 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:14:19.315954 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:14:19.331507 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:14:19.331680 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:14:19.335538 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:14:19.340203 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:14:19.342170 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:14:19.343833 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:14:19.346568 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:14:19.349454 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:14:19.353037 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:14:19.443792 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:14:19.443945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:14:19.464753 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:14:19.466974 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:14:19.469251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:14:19.472050 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:14:19.516617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:14:19.518841 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:14:19.543773 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:14:19.552226 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:14:19.552892 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:14:19.553594 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:14:19.553757 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:14:19.558448 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:14:19.559164 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:14:19.559533 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:14:19.559881 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:14:19.560388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:14:19.560709 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 00:14:19.561228 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:14:19.561537 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:14:19.561899 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:14:19.562391 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:14:19.562731 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:14:19.563047 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:14:19.563193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:14:19.581482 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:14:19.581880 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:14:19.582294 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:14:19.587603 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:14:19.588105 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:14:19.588234 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:14:19.590539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:14:19.590647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:14:19.593384 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:14:19.595303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:14:19.599893 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:14:19.602674 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:14:19.603152 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:14:19.603499 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:14:19.603612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:14:19.606443 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:14:19.606536 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:14:19.608258 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:14:19.608377 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:14:19.609934 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:14:19.610055 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:14:19.611456 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:14:19.614039 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:14:19.614153 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:14:19.615208 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:14:19.618683 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:14:19.618796 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:14:19.619257 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:14:19.619355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:14:19.629838 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:14:19.629969 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:14:19.912227 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:14:19.920097 ignition[1082]: INFO : Ignition 2.21.0
Jul 12 00:14:19.920097 ignition[1082]: INFO : Stage: umount
Jul 12 00:14:19.922051 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:14:19.922051 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:14:19.922051 ignition[1082]: INFO : umount: umount passed
Jul 12 00:14:19.922051 ignition[1082]: INFO : Ignition finished successfully
Jul 12 00:14:19.929243 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:14:19.929403 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:14:19.930371 systemd[1]: Stopped target network.target - Network.
Jul 12 00:14:19.934122 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:14:19.934234 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:14:19.935249 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:14:19.935316 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:14:19.935605 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:14:19.935698 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:14:19.936283 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:14:19.936339 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:14:19.936977 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:14:19.945212 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:14:19.958633 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:14:19.958866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:14:19.963686 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 12 00:14:19.964294 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:14:19.964496 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:14:19.968410 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 12 00:14:19.969275 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 12 00:14:20.032069 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:14:20.032165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:14:20.036626 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:14:20.037079 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:14:20.037159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:14:20.037651 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:14:20.037695 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:14:20.043902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:14:20.043953 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:14:20.045162 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:14:20.045210 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:14:20.048324 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:14:20.051629 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 00:14:20.051693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:14:20.068941 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:14:20.069092 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:14:20.077607 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:14:20.077837 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:14:20.078585 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:14:20.078641 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:14:20.083008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:14:20.083048 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:14:20.085173 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:14:20.085234 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:14:20.086058 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:14:20.086119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:14:20.086736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:14:20.086797 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:14:20.096058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:14:20.096605 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 12 00:14:20.096666 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:14:20.100655 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:14:20.100703 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:14:20.105906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:14:20.105960 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:14:20.110509 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 12 00:14:20.110576 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 12 00:14:20.110624 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:14:20.128131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:14:20.128258 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:14:20.212885 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:14:20.213018 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:14:20.215291 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:14:20.215635 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:14:20.215689 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:14:20.220462 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:14:20.245303 systemd[1]: Switching root.
Jul 12 00:14:20.279158 systemd-journald[220]: Journal stopped
Jul 12 00:14:23.099436 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:14:23.099537 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:14:23.099555 kernel: SELinux: policy capability open_perms=1
Jul 12 00:14:23.099571 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:14:23.099591 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:14:23.099605 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:14:23.099619 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:14:23.099633 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:14:23.099655 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:14:23.099670 kernel: SELinux: policy capability userspace_initial_context=0
Jul 12 00:14:23.099684 kernel: audit: type=1403 audit(1752279261.487:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:14:23.099705 systemd[1]: Successfully loaded SELinux policy in 104.229ms.
Jul 12 00:14:23.099734 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.928ms.
Jul 12 00:14:23.099751 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:14:23.099768 systemd[1]: Detected virtualization kvm.
Jul 12 00:14:23.099783 systemd[1]: Detected architecture x86-64.
Jul 12 00:14:23.099806 systemd[1]: Detected first boot.
Jul 12 00:14:23.099852 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:14:23.099869 zram_generator::config[1127]: No configuration found.
Jul 12 00:14:23.099897 kernel: Guest personality initialized and is inactive
Jul 12 00:14:23.099927 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 12 00:14:23.099955 kernel: Initialized host personality
Jul 12 00:14:23.099970 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 00:14:23.099985 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:14:23.100002 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 00:14:23.100026 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:14:23.100041 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:14:23.100057 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:14:23.100073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:14:23.100089 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:14:23.100108 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:14:23.100123 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:14:23.100138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:14:23.100154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:14:23.100174 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:14:23.100190 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:14:23.100206 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:14:23.100221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:14:23.100237 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:14:23.100252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:14:23.100269 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:14:23.100289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:14:23.100317 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 12 00:14:23.100334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:14:23.100349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:14:23.100365 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:14:23.100381 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:14:23.100397 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:14:23.100413 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:14:23.100428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:14:23.100447 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:14:23.100465 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:14:23.100486 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:14:23.100502 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:14:23.100518 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:14:23.100535 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 00:14:23.100554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:14:23.100570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:14:23.100586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:14:23.100602 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:14:23.100621 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:14:23.100637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:14:23.100652 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:14:23.100667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:14:23.100683 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:14:23.102858 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:14:23.102904 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:14:23.102925 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:14:23.102941 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:14:23.102957 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:14:23.102972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:14:23.102988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:14:23.103003 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:14:23.103018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:14:23.103033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:14:23.103051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:14:23.103068 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:14:23.103068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:14:23.103084 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:14:23.103100 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:14:23.103116 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:14:23.103132 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:14:23.103150 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:14:23.103167 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:14:23.103187 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:14:23.103203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:14:23.103219 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:14:23.103233 kernel: fuse: init (API version 7.41)
Jul 12 00:14:23.103311 systemd-journald[1191]: Collecting audit messages is disabled.
Jul 12 00:14:23.103346 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:14:23.103363 systemd-journald[1191]: Journal started
Jul 12 00:14:23.103398 systemd-journald[1191]: Runtime Journal (/run/log/journal/7af8dbda38e744a38c9f3b4efce89099) is 6M, max 48.6M, 42.5M free.
Jul 12 00:14:22.405189 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:14:23.107460 kernel: loop: module loaded
Jul 12 00:14:23.107519 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 00:14:22.428668 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 00:14:22.429220 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:14:23.117536 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:14:23.117581 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:14:23.117596 systemd[1]: Stopped verity-setup.service.
Jul 12 00:14:23.123471 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:14:23.128846 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:14:23.130781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:14:23.132212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:14:23.134165 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:14:23.137029 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:14:23.138394 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:14:23.141065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:14:23.142657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:14:23.144604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:14:23.145000 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:14:23.146788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:14:23.147211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:14:23.149025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:14:23.149384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:14:23.151239 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:14:23.151547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:14:23.153217 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:14:23.153524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:14:23.155266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:14:23.157115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:14:23.159005 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:14:23.160924 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 00:14:23.213339 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:14:23.218848 kernel: ACPI: bus type drm_connector registered
Jul 12 00:14:23.219045 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:14:23.219286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:14:23.229997 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:14:23.286067 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:14:23.288283 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:14:23.289358 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:14:23.289384 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:14:23.290724 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 00:14:23.300991 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:14:23.302705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:14:23.305249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:14:23.308188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:14:23.310915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:14:23.312365 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:14:23.313795 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:14:23.316975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:14:23.326044 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:14:23.363484 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:14:23.365359 systemd-journald[1191]: Time spent on flushing to /var/log/journal/7af8dbda38e744a38c9f3b4efce89099 is 22.616ms for 978 entries.
Jul 12 00:14:23.365359 systemd-journald[1191]: System Journal (/var/log/journal/7af8dbda38e744a38c9f3b4efce89099) is 8M, max 195.6M, 187.6M free.
Jul 12 00:14:24.199339 systemd-journald[1191]: Received client request to flush runtime journal.
Jul 12 00:14:24.199380 kernel: loop0: detected capacity change from 0 to 229808
Jul 12 00:14:24.199394 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:14:24.199407 kernel: loop1: detected capacity change from 0 to 146240
Jul 12 00:14:24.199420 kernel: loop2: detected capacity change from 0 to 113872
Jul 12 00:14:23.366245 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:14:23.503967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:14:24.250045 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:14:24.260439 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:14:24.262459 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:14:24.268315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:14:24.365861 kernel: loop3: detected capacity change from 0 to 229808
Jul 12 00:14:24.370502 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 00:14:24.376196 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:14:24.526858 kernel: loop4: detected capacity change from 0 to 146240
Jul 12 00:14:24.655847 kernel: loop5: detected capacity change from 0 to 113872
Jul 12 00:14:24.670747 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 00:14:24.671440 (sd-merge)[1262]: Merged extensions into '/usr'.
Jul 12 00:14:24.696615 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:14:24.696841 systemd[1]: Reloading...
Jul 12 00:14:24.796889 zram_generator::config[1295]: No configuration found.
Jul 12 00:14:24.920974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:14:25.014335 systemd[1]: Reloading finished in 316 ms.
Jul 12 00:14:25.048345 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:14:25.078721 ldconfig[1227]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:14:25.078188 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:14:25.104723 systemd[1]: Reload requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:14:25.104743 systemd[1]: Reloading...
Jul 12 00:14:25.169851 zram_generator::config[1357]: No configuration found.
Jul 12 00:14:25.276115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:14:25.359027 systemd[1]: Reloading finished in 253 ms.
Jul 12 00:14:25.385086 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:14:25.442096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:14:25.472366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:14:25.474448 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:14:25.480032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:14:25.480266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:14:25.490164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:14:25.497562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:14:25.500357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:14:25.501850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:14:25.501979 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:25.502118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:14:25.508925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:14:25.509130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:14:25.509346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:14:25.509450 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:25.509576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:14:25.514336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:14:25.514657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:14:25.542205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:14:25.542455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:14:25.544483 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:14:25.544750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 12 00:14:25.580626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:14:25.580914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:14:25.582497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:14:25.613557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:14:25.613601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 00:14:25.613671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:14:25.613731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:14:25.613791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 12 00:14:25.629157 systemd[1]: Finished ensure-sysext.service. Jul 12 00:14:25.663497 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:14:25.663732 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:14:25.794766 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 12 00:14:25.794837 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 12 00:14:25.795157 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 12 00:14:25.795586 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:14:25.796868 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:14:25.796941 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jul 12 00:14:25.796957 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jul 12 00:14:25.797206 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Jul 12 00:14:25.797311 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Jul 12 00:14:25.801647 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:14:25.801662 systemd-tmpfiles[1396]: Skipping /boot Jul 12 00:14:25.806025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:14:25.815343 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:14:25.815355 systemd-tmpfiles[1396]: Skipping /boot Jul 12 00:14:26.007733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:14:26.018978 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:14:26.052693 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:14:26.062985 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:14:26.074406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:14:26.109860 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:14:26.112318 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:14:26.115857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jul 12 00:14:26.192733 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:14:26.200653 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:14:26.254223 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:14:26.259587 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:14:26.260581 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 12 00:14:26.303756 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:14:26.309274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:14:26.312423 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:14:26.373247 systemd-udevd[1445]: Using default interface naming scheme 'v255'. Jul 12 00:14:26.403504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:14:26.439584 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:14:26.450170 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:14:26.500665 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:14:26.527070 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:14:26.545628 augenrules[1486]: No rules Jul 12 00:14:26.547859 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:14:26.548223 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:14:26.619371 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jul 12 00:14:26.630287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:14:26.638847 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:14:26.787928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:14:26.789649 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 12 00:14:26.853854 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:14:26.914872 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 12 00:14:26.919848 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 12 00:14:26.957297 kernel: ACPI: button: Power Button [PWRF] Jul 12 00:14:26.989948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:14:26.990748 systemd-networkd[1476]: lo: Link UP Jul 12 00:14:26.990753 systemd-networkd[1476]: lo: Gained carrier Jul 12 00:14:26.994869 systemd-networkd[1476]: Enumeration completed Jul 12 00:14:26.995023 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:14:26.997858 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:14:26.997870 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:14:26.998627 systemd-networkd[1476]: eth0: Link UP Jul 12 00:14:26.998811 systemd-networkd[1476]: eth0: Gained carrier Jul 12 00:14:26.998843 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:14:27.001188 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 12 00:14:27.033906 kernel: kvm_amd: TSC scaling supported Jul 12 00:14:27.033958 kernel: kvm_amd: Nested Virtualization enabled Jul 12 00:14:27.033972 kernel: kvm_amd: Nested Paging enabled Jul 12 00:14:27.033984 kernel: kvm_amd: LBR virtualization supported Jul 12 00:14:27.057694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:14:27.059181 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 12 00:14:27.059231 kernel: kvm_amd: Virtual GIF supported Jul 12 00:14:27.062965 systemd-networkd[1476]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:14:27.071418 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:14:27.986692 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:14:27.986772 systemd-timesyncd[1432]: Initial clock synchronization to Sat 2025-07-12 00:14:27.986574 UTC. Jul 12 00:14:28.036003 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:14:28.049927 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 12 00:14:28.090237 systemd-resolved[1422]: Positive Trust Anchors: Jul 12 00:14:28.090253 systemd-resolved[1422]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:14:28.090285 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:14:28.093795 systemd-resolved[1422]: Defaulting to hostname 'linux'. Jul 12 00:14:28.095806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:14:28.140717 systemd[1]: Reached target network.target - Network. Jul 12 00:14:28.141010 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:14:28.205297 kernel: EDAC MC: Ver: 3.0.0 Jul 12 00:14:28.218687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:14:28.279623 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:14:28.280904 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:14:28.282273 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:14:28.283651 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 12 00:14:28.285127 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:14:28.286437 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:14:28.343655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jul 12 00:14:28.345041 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:14:28.345094 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:14:28.346092 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:14:28.348739 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:14:28.351535 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:14:28.412300 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 12 00:14:28.413788 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 12 00:14:28.415100 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 12 00:14:28.418779 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:14:28.420210 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 12 00:14:28.422013 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:14:28.423875 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:14:28.424929 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:14:28.426022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:14:28.426049 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:14:28.427033 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:14:28.429292 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:14:28.431430 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:14:28.435045 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 12 00:14:28.467102 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:14:28.468769 jq[1532]: false Jul 12 00:14:28.468211 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:14:28.469542 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 12 00:14:28.472256 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:14:28.474927 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:14:28.477081 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:14:28.481218 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:14:28.484578 oslogin_cache_refresh[1534]: Refreshing passwd entry cache Jul 12 00:14:28.491912 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache Jul 12 00:14:28.497131 extend-filesystems[1533]: Found /dev/vda6 Jul 12 00:14:28.498654 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting Jul 12 00:14:28.498640 oslogin_cache_refresh[1534]: Failure getting users, quitting Jul 12 00:14:28.499000 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 12 00:14:28.499000 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache Jul 12 00:14:28.498662 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 12 00:14:28.498706 oslogin_cache_refresh[1534]: Refreshing group entry cache Jul 12 00:14:28.502037 extend-filesystems[1533]: Found /dev/vda9 Jul 12 00:14:28.503654 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 12 00:14:28.507382 extend-filesystems[1533]: Checking size of /dev/vda9 Jul 12 00:14:28.507233 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:14:28.504583 oslogin_cache_refresh[1534]: Failure getting groups, quitting Jul 12 00:14:28.531899 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting Jul 12 00:14:28.531899 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 12 00:14:28.504599 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 12 00:14:28.535440 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:14:28.537209 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:14:28.545185 extend-filesystems[1533]: Resized partition /dev/vda9 Jul 12 00:14:28.559737 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:14:28.565474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:14:28.567378 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:14:28.567664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:14:28.568002 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 12 00:14:28.571870 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 12 00:14:28.591258 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:14:28.591630 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:14:28.593783 jq[1559]: true Jul 12 00:14:28.594347 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 12 00:14:28.594656 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:14:28.617229 extend-filesystems[1558]: resize2fs 1.47.2 (1-Jan-2025) Jul 12 00:14:28.627463 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:14:28.640790 jq[1564]: true Jul 12 00:14:28.647994 update_engine[1555]: I20250712 00:14:28.647657 1555 main.cc:92] Flatcar Update Engine starting Jul 12 00:14:28.650312 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button) Jul 12 00:14:28.650357 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 12 00:14:28.650684 systemd-logind[1543]: New seat seat0. Jul 12 00:14:28.652103 tar[1563]: linux-amd64/LICENSE Jul 12 00:14:28.652364 tar[1563]: linux-amd64/helm Jul 12 00:14:28.657082 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:14:28.718021 dbus-daemon[1530]: [system] SELinux support is enabled Jul 12 00:14:28.718586 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:14:28.723084 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:14:28.724438 update_engine[1555]: I20250712 00:14:28.724231 1555 update_check_scheduler.cc:74] Next update check in 9m18s Jul 12 00:14:28.723116 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:14:28.724650 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:14:28.724732 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 12 00:14:28.727378 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 12 00:14:28.727570 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:14:28.731050 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:14:28.733236 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:14:28.855483 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:14:28.885951 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:14:28.908126 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:14:28.939503 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:14:28.939503 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:14:28.939503 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:14:28.942818 extend-filesystems[1533]: Resized filesystem in /dev/vda9 Jul 12 00:14:28.941652 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:14:28.941960 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:14:28.947798 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:14:28.950427 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:14:28.954537 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:14:28.990219 systemd-networkd[1476]: eth0: Gained IPv6LL Jul 12 00:14:28.994093 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:14:28.997956 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:14:29.002124 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jul 12 00:14:29.008332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:29.011629 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:14:29.058028 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:14:29.062676 containerd[1565]: time="2025-07-12T00:14:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 12 00:14:29.064441 containerd[1565]: time="2025-07-12T00:14:29.064404326Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 12 00:14:29.066861 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:14:29.067229 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:14:29.071955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 12 00:14:29.080090 containerd[1565]: time="2025-07-12T00:14:29.080039299Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.679µs" Jul 12 00:14:29.080202 containerd[1565]: time="2025-07-12T00:14:29.080085906Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 12 00:14:29.080202 containerd[1565]: time="2025-07-12T00:14:29.080145277Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 12 00:14:29.080387 containerd[1565]: time="2025-07-12T00:14:29.080358598Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 12 00:14:29.080415 containerd[1565]: time="2025-07-12T00:14:29.080388654Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 12 00:14:29.080434 containerd[1565]: time="2025-07-12T00:14:29.080419873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 00:14:29.080522 containerd[1565]: time="2025-07-12T00:14:29.080495925Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 00:14:29.080522 containerd[1565]: time="2025-07-12T00:14:29.080517375Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 00:14:29.080683 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:14:29.081062 containerd[1565]: time="2025-07-12T00:14:29.080869406Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081062 
containerd[1565]: time="2025-07-12T00:14:29.080885927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081062 containerd[1565]: time="2025-07-12T00:14:29.080898130Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081062 containerd[1565]: time="2025-07-12T00:14:29.080908429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081062 containerd[1565]: time="2025-07-12T00:14:29.081035508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081632 containerd[1565]: time="2025-07-12T00:14:29.081407786Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081632 containerd[1565]: time="2025-07-12T00:14:29.081449284Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 00:14:29.081632 containerd[1565]: time="2025-07-12T00:14:29.081462388Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 12 00:14:29.081632 containerd[1565]: time="2025-07-12T00:14:29.081508976Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 00:14:29.081916 containerd[1565]: time="2025-07-12T00:14:29.081868260Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 00:14:29.082508 containerd[1565]: time="2025-07-12T00:14:29.081968909Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:14:29.092562 containerd[1565]: 
time="2025-07-12T00:14:29.092477544Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 00:14:29.092562 containerd[1565]: time="2025-07-12T00:14:29.092570418Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092595034Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092629128Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092649216Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092665115Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092691134Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092708687Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092721892Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 00:14:29.092732 containerd[1565]: time="2025-07-12T00:14:29.092734225Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 00:14:29.092952 containerd[1565]: time="2025-07-12T00:14:29.092746007Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 00:14:29.092952 containerd[1565]: 
time="2025-07-12T00:14:29.092763690Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 00:14:29.093031 containerd[1565]: time="2025-07-12T00:14:29.092965158Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 00:14:29.093031 containerd[1565]: time="2025-07-12T00:14:29.093011165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 00:14:29.093083 containerd[1565]: time="2025-07-12T00:14:29.093032795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 00:14:29.093083 containerd[1565]: time="2025-07-12T00:14:29.093048244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 00:14:29.093083 containerd[1565]: time="2025-07-12T00:14:29.093061820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 00:14:29.093083 containerd[1565]: time="2025-07-12T00:14:29.093075515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 00:14:29.093187 containerd[1565]: time="2025-07-12T00:14:29.093090063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 00:14:29.093187 containerd[1565]: time="2025-07-12T00:14:29.093103147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 00:14:29.093187 containerd[1565]: time="2025-07-12T00:14:29.093117093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 00:14:29.093187 containerd[1565]: time="2025-07-12T00:14:29.093132432Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 00:14:29.093187 containerd[1565]: time="2025-07-12T00:14:29.093147190Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 00:14:29.093324 containerd[1565]: time="2025-07-12T00:14:29.093294466Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 00:14:29.093353 containerd[1565]: time="2025-07-12T00:14:29.093327438Z" level=info msg="Start snapshots syncer" Jul 12 00:14:29.093383 containerd[1565]: time="2025-07-12T00:14:29.093354930Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 12 00:14:29.093917 containerd[1565]: time="2025-07-12T00:14:29.093642940Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\"
:true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 00:14:29.093917 containerd[1565]: time="2025-07-12T00:14:29.093714484Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 00:14:29.094731 containerd[1565]: time="2025-07-12T00:14:29.094699151Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 00:14:29.094899 containerd[1565]: time="2025-07-12T00:14:29.094868980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 00:14:29.094939 containerd[1565]: time="2025-07-12T00:14:29.094900119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 00:14:29.094939 containerd[1565]: time="2025-07-12T00:14:29.094911500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 00:14:29.094939 containerd[1565]: time="2025-07-12T00:14:29.094922721Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 00:14:29.094939 containerd[1565]: time="2025-07-12T00:14:29.094933822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 00:14:29.095097 containerd[1565]: time="2025-07-12T00:14:29.094944772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 00:14:29.095097 containerd[1565]: time="2025-07-12T00:14:29.094956003Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Jul 12 00:14:29.095097 containerd[1565]: time="2025-07-12T00:14:29.094994876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 00:14:29.095097 containerd[1565]: time="2025-07-12T00:14:29.095005226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 00:14:29.095097 containerd[1565]: time="2025-07-12T00:14:29.095017358Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 00:14:29.095717 containerd[1565]: time="2025-07-12T00:14:29.095683498Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 00:14:29.095791 containerd[1565]: time="2025-07-12T00:14:29.095761164Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 00:14:29.095791 containerd[1565]: time="2025-07-12T00:14:29.095783977Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 00:14:29.095854 containerd[1565]: time="2025-07-12T00:14:29.095798554Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 00:14:29.095854 containerd[1565]: time="2025-07-12T00:14:29.095827388Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 00:14:29.095854 containerd[1565]: time="2025-07-12T00:14:29.095839801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 00:14:29.095929 containerd[1565]: time="2025-07-12T00:14:29.095857094Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 00:14:29.095929 containerd[1565]: 
time="2025-07-12T00:14:29.095879927Z" level=info msg="runtime interface created" Jul 12 00:14:29.095929 containerd[1565]: time="2025-07-12T00:14:29.095886960Z" level=info msg="created NRI interface" Jul 12 00:14:29.095929 containerd[1565]: time="2025-07-12T00:14:29.095900686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 00:14:29.095929 containerd[1565]: time="2025-07-12T00:14:29.095913900Z" level=info msg="Connect containerd service" Jul 12 00:14:29.096076 containerd[1565]: time="2025-07-12T00:14:29.095938166Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:14:29.097320 containerd[1565]: time="2025-07-12T00:14:29.097274263Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:14:29.115701 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:14:29.120258 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:14:29.124211 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:58172.service - OpenSSH per-connection server daemon (10.0.0.1:58172). Jul 12 00:14:29.142125 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:14:29.143016 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:14:29.147632 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:14:29.171434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:14:29.176323 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:14:29.181234 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 12 00:14:29.182731 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 12 00:14:29.219289 containerd[1565]: time="2025-07-12T00:14:29.219162083Z" level=info msg="Start subscribing containerd event" Jul 12 00:14:29.219289 containerd[1565]: time="2025-07-12T00:14:29.219238717Z" level=info msg="Start recovering state" Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219350787Z" level=info msg="Start event monitor" Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219367208Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219375153Z" level=info msg="Start streaming server" Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219390381Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219398356Z" level=info msg="runtime interface starting up..." Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219405419Z" level=info msg="starting plugins..." Jul 12 00:14:29.219478 containerd[1565]: time="2025-07-12T00:14:29.219424325Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 00:14:29.219671 containerd[1565]: time="2025-07-12T00:14:29.219396052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:14:29.219671 containerd[1565]: time="2025-07-12T00:14:29.219587862Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:14:29.219772 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 12 00:14:29.220810 containerd[1565]: time="2025-07-12T00:14:29.220512677Z" level=info msg="containerd successfully booted in 0.159049s" Jul 12 00:14:29.230971 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 58172 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:29.232399 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:29.239347 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:14:29.241601 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:14:29.289223 systemd-logind[1543]: New session 1 of user core. Jul 12 00:14:29.588195 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:14:29.594793 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:14:29.616916 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:14:29.620385 systemd-logind[1543]: New session c1 of user core. Jul 12 00:14:29.675838 tar[1563]: linux-amd64/README.md Jul 12 00:14:29.698906 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:14:29.822458 systemd[1662]: Queued start job for default target default.target. Jul 12 00:14:29.842628 systemd[1662]: Created slice app.slice - User Application Slice. Jul 12 00:14:29.842662 systemd[1662]: Reached target paths.target - Paths. Jul 12 00:14:29.842707 systemd[1662]: Reached target timers.target - Timers. Jul 12 00:14:29.844434 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:14:29.857677 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:14:29.857851 systemd[1662]: Reached target sockets.target - Sockets. Jul 12 00:14:29.857904 systemd[1662]: Reached target basic.target - Basic System. 
Jul 12 00:14:29.857955 systemd[1662]: Reached target default.target - Main User Target. Jul 12 00:14:29.858023 systemd[1662]: Startup finished in 225ms. Jul 12 00:14:29.858188 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:14:29.868406 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:14:29.939123 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:58186.service - OpenSSH per-connection server daemon (10.0.0.1:58186). Jul 12 00:14:30.004372 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 58186 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:30.009043 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:30.015839 systemd-logind[1543]: New session 2 of user core. Jul 12 00:14:30.034341 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:14:30.090383 sshd[1678]: Connection closed by 10.0.0.1 port 58186 Jul 12 00:14:30.090792 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:30.105124 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:58186.service: Deactivated successfully. Jul 12 00:14:30.107160 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:14:30.108147 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:14:30.111599 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:58200.service - OpenSSH per-connection server daemon (10.0.0.1:58200). Jul 12 00:14:30.180600 systemd-logind[1543]: Removed session 2. Jul 12 00:14:30.222785 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 58200 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:30.224733 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:30.230321 systemd-logind[1543]: New session 3 of user core. Jul 12 00:14:30.241356 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 12 00:14:30.302570 sshd[1687]: Connection closed by 10.0.0.1 port 58200 Jul 12 00:14:30.302906 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:30.306586 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:58200.service: Deactivated successfully. Jul 12 00:14:30.308676 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:14:30.310378 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:14:30.312034 systemd-logind[1543]: Removed session 3. Jul 12 00:14:30.456556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:30.458694 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:14:30.460255 systemd[1]: Startup finished in 3.594s (kernel) + 8.708s (initrd) + 8.106s (userspace) = 20.409s. Jul 12 00:14:30.493405 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:14:31.501642 kubelet[1697]: E0712 00:14:31.501392 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:14:31.506179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:14:31.506407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:14:31.506836 systemd[1]: kubelet.service: Consumed 2.074s CPU time, 269.1M memory peak. Jul 12 00:14:40.319522 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:33140.service - OpenSSH per-connection server daemon (10.0.0.1:33140). 
Jul 12 00:14:40.378790 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 33140 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:40.380612 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:40.385755 systemd-logind[1543]: New session 4 of user core. Jul 12 00:14:40.397286 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:14:40.453140 sshd[1713]: Connection closed by 10.0.0.1 port 33140 Jul 12 00:14:40.453477 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:40.466839 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:33140.service: Deactivated successfully. Jul 12 00:14:40.468779 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:14:40.469716 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:14:40.472941 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:33154.service - OpenSSH per-connection server daemon (10.0.0.1:33154). Jul 12 00:14:40.473797 systemd-logind[1543]: Removed session 4. Jul 12 00:14:40.528164 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:40.529935 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:40.535464 systemd-logind[1543]: New session 5 of user core. Jul 12 00:14:40.545334 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:14:40.597428 sshd[1721]: Connection closed by 10.0.0.1 port 33154 Jul 12 00:14:40.597866 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:40.608790 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:33154.service: Deactivated successfully. Jul 12 00:14:40.610712 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:14:40.611483 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. 
Jul 12 00:14:40.614709 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:33170.service - OpenSSH per-connection server daemon (10.0.0.1:33170). Jul 12 00:14:40.615464 systemd-logind[1543]: Removed session 5. Jul 12 00:14:40.674262 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 33170 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:40.675824 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:40.680242 systemd-logind[1543]: New session 6 of user core. Jul 12 00:14:40.693104 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:14:40.750455 sshd[1730]: Connection closed by 10.0.0.1 port 33170 Jul 12 00:14:40.750903 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:40.764544 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:33170.service: Deactivated successfully. Jul 12 00:14:40.766483 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:14:40.767351 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:14:40.770115 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:33176.service - OpenSSH per-connection server daemon (10.0.0.1:33176). Jul 12 00:14:40.770851 systemd-logind[1543]: Removed session 6. Jul 12 00:14:40.825058 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 33176 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:40.826667 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:40.832301 systemd-logind[1543]: New session 7 of user core. Jul 12 00:14:40.846388 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 12 00:14:40.911038 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:14:40.911455 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:40.934619 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 12 00:14:40.936644 sshd[1738]: Connection closed by 10.0.0.1 port 33176 Jul 12 00:14:40.937245 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:40.959230 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:33176.service: Deactivated successfully. Jul 12 00:14:40.960873 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:14:40.961896 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:14:40.965326 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:33178.service - OpenSSH per-connection server daemon (10.0.0.1:33178). Jul 12 00:14:40.966009 systemd-logind[1543]: Removed session 7. Jul 12 00:14:41.017237 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 33178 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:41.018760 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:41.023746 systemd-logind[1543]: New session 8 of user core. Jul 12 00:14:41.032275 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 12 00:14:41.086201 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:14:41.086501 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:41.466194 sudo[1749]: pam_unix(sudo:session): session closed for user root Jul 12 00:14:41.472792 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 12 00:14:41.473126 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:41.483737 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 00:14:41.540329 augenrules[1771]: No rules Jul 12 00:14:41.542205 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:14:41.542475 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 00:14:41.543671 sudo[1748]: pam_unix(sudo:session): session closed for user root Jul 12 00:14:41.543931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:14:41.545324 sshd[1747]: Connection closed by 10.0.0.1 port 33178 Jul 12 00:14:41.545608 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:41.546366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:41.559674 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:33178.service: Deactivated successfully. Jul 12 00:14:41.562745 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:14:41.563653 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:14:41.568248 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:33196.service - OpenSSH per-connection server daemon (10.0.0.1:33196). Jul 12 00:14:41.569019 systemd-logind[1543]: Removed session 8. 
Jul 12 00:14:41.613065 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 33196 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:14:41.615224 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:41.623051 systemd-logind[1543]: New session 9 of user core. Jul 12 00:14:41.633335 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:14:41.692791 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:14:41.693238 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:14:41.762008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:41.776538 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:14:41.828705 kubelet[1802]: E0712 00:14:41.828609 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:14:41.836886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:14:41.837137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:14:41.837627 systemd[1]: kubelet.service: Consumed 254ms CPU time, 110.5M memory peak. Jul 12 00:14:42.270420 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 12 00:14:42.287712 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:14:42.803062 dockerd[1820]: time="2025-07-12T00:14:42.801459051Z" level=info msg="Starting up" Jul 12 00:14:42.812960 dockerd[1820]: time="2025-07-12T00:14:42.812383386Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 12 00:14:43.398628 dockerd[1820]: time="2025-07-12T00:14:43.398527014Z" level=info msg="Loading containers: start." Jul 12 00:14:43.423840 kernel: Initializing XFRM netlink socket Jul 12 00:14:44.106169 systemd-networkd[1476]: docker0: Link UP Jul 12 00:14:44.113168 dockerd[1820]: time="2025-07-12T00:14:44.113101617Z" level=info msg="Loading containers: done." Jul 12 00:14:44.130026 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3278670223-merged.mount: Deactivated successfully. Jul 12 00:14:44.132858 dockerd[1820]: time="2025-07-12T00:14:44.132794762Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:14:44.133001 dockerd[1820]: time="2025-07-12T00:14:44.132942429Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 12 00:14:44.133171 dockerd[1820]: time="2025-07-12T00:14:44.133140491Z" level=info msg="Initializing buildkit" Jul 12 00:14:44.172621 dockerd[1820]: time="2025-07-12T00:14:44.172535128Z" level=info msg="Completed buildkit initialization" Jul 12 00:14:44.177603 dockerd[1820]: time="2025-07-12T00:14:44.177528416Z" level=info msg="Daemon has completed initialization" Jul 12 00:14:44.177732 dockerd[1820]: time="2025-07-12T00:14:44.177663299Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:14:44.177939 systemd[1]: Started 
docker.service - Docker Application Container Engine. Jul 12 00:14:44.819460 containerd[1565]: time="2025-07-12T00:14:44.819404561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 12 00:14:46.503031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281578550.mount: Deactivated successfully. Jul 12 00:14:48.366962 containerd[1565]: time="2025-07-12T00:14:48.366868968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:48.413676 containerd[1565]: time="2025-07-12T00:14:48.413593356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 12 00:14:48.497878 containerd[1565]: time="2025-07-12T00:14:48.497798190Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:48.547748 containerd[1565]: time="2025-07-12T00:14:48.547680193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:48.548744 containerd[1565]: time="2025-07-12T00:14:48.548706758Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.729249678s" Jul 12 00:14:48.548808 containerd[1565]: time="2025-07-12T00:14:48.548743077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 12 
00:14:48.549438 containerd[1565]: time="2025-07-12T00:14:48.549386413Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 12 00:14:52.087600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:14:52.089498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:14:52.364273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:14:52.440501 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:14:52.911002 kubelet[2092]: E0712 00:14:52.910923 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:14:52.915164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:14:52.915406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:14:52.915808 systemd[1]: kubelet.service: Consumed 758ms CPU time, 110.7M memory peak. 
Jul 12 00:14:53.668299 containerd[1565]: time="2025-07-12T00:14:53.668225251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:53.668997 containerd[1565]: time="2025-07-12T00:14:53.668932067Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 12 00:14:53.670673 containerd[1565]: time="2025-07-12T00:14:53.670614303Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:53.673021 containerd[1565]: time="2025-07-12T00:14:53.672949504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:53.673913 containerd[1565]: time="2025-07-12T00:14:53.673883245Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 5.124462438s" Jul 12 00:14:53.673944 containerd[1565]: time="2025-07-12T00:14:53.673918041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 12 00:14:53.674367 containerd[1565]: time="2025-07-12T00:14:53.674345583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 12 00:14:57.008241 containerd[1565]: time="2025-07-12T00:14:57.008144337Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:57.104631 containerd[1565]: time="2025-07-12T00:14:57.104519394Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 12 00:14:57.265289 containerd[1565]: time="2025-07-12T00:14:57.265104853Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:57.412596 containerd[1565]: time="2025-07-12T00:14:57.412511629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:57.413656 containerd[1565]: time="2025-07-12T00:14:57.413579713Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 3.739201459s" Jul 12 00:14:57.413656 containerd[1565]: time="2025-07-12T00:14:57.413635387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 12 00:14:57.414158 containerd[1565]: time="2025-07-12T00:14:57.414123263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 12 00:14:58.794633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3351933025.mount: Deactivated successfully. 
Jul 12 00:14:59.113300 containerd[1565]: time="2025-07-12T00:14:59.113142855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:59.113957 containerd[1565]: time="2025-07-12T00:14:59.113892341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 12 00:14:59.115267 containerd[1565]: time="2025-07-12T00:14:59.115214431Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:59.117247 containerd[1565]: time="2025-07-12T00:14:59.117210466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:59.118028 containerd[1565]: time="2025-07-12T00:14:59.117990760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.703823846s" Jul 12 00:14:59.118028 containerd[1565]: time="2025-07-12T00:14:59.118022870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 12 00:14:59.118524 containerd[1565]: time="2025-07-12T00:14:59.118496248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 12 00:14:59.786377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866940919.mount: Deactivated successfully. 
Jul 12 00:15:01.327997 containerd[1565]: time="2025-07-12T00:15:01.327915662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:01.328819 containerd[1565]: time="2025-07-12T00:15:01.328779669Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 12 00:15:01.329967 containerd[1565]: time="2025-07-12T00:15:01.329916201Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:01.332912 containerd[1565]: time="2025-07-12T00:15:01.332865660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:01.334235 containerd[1565]: time="2025-07-12T00:15:01.334191218Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.215666034s" Jul 12 00:15:01.334235 containerd[1565]: time="2025-07-12T00:15:01.334228590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 12 00:15:01.334897 containerd[1565]: time="2025-07-12T00:15:01.334859927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:15:02.061531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992238841.mount: Deactivated successfully. 
Jul 12 00:15:02.068298 containerd[1565]: time="2025-07-12T00:15:02.068226348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:15:02.068992 containerd[1565]: time="2025-07-12T00:15:02.068910074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 12 00:15:02.070193 containerd[1565]: time="2025-07-12T00:15:02.070136987Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:15:02.072863 containerd[1565]: time="2025-07-12T00:15:02.072818561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:15:02.073695 containerd[1565]: time="2025-07-12T00:15:02.073644130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 738.748595ms" Jul 12 00:15:02.073725 containerd[1565]: time="2025-07-12T00:15:02.073691673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 12 00:15:02.074297 containerd[1565]: time="2025-07-12T00:15:02.074211584Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 12 00:15:02.604545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239491749.mount: 
Deactivated successfully. Jul 12 00:15:03.166262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:15:03.170248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:15:03.934348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:15:03.939317 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:15:04.084732 kubelet[2195]: E0712 00:15:04.084642 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:15:04.089674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:15:04.089898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:15:04.090342 systemd[1]: kubelet.service: Consumed 306ms CPU time, 112.1M memory peak. 
Jul 12 00:15:10.425634 containerd[1565]: time="2025-07-12T00:15:10.425532352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:10.474490 containerd[1565]: time="2025-07-12T00:15:10.474415678Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 12 00:15:10.509900 containerd[1565]: time="2025-07-12T00:15:10.509819496Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:10.543833 containerd[1565]: time="2025-07-12T00:15:10.543748062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:10.544915 containerd[1565]: time="2025-07-12T00:15:10.544865591Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.470621164s" Jul 12 00:15:10.544915 containerd[1565]: time="2025-07-12T00:15:10.544910417Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 12 00:15:13.742103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:15:13.742314 systemd[1]: kubelet.service: Consumed 306ms CPU time, 112.1M memory peak. Jul 12 00:15:13.744574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:15:13.769910 systemd[1]: Reload requested from client PID 2277 ('systemctl') (unit session-9.scope)... 
Jul 12 00:15:13.769929 systemd[1]: Reloading... Jul 12 00:15:13.858203 zram_generator::config[2316]: No configuration found. Jul 12 00:15:14.081881 update_engine[1555]: I20250712 00:15:14.081694 1555 update_attempter.cc:509] Updating boot flags... Jul 12 00:15:14.231634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:15:14.356963 systemd[1]: Reloading finished in 586 ms. Jul 12 00:15:14.419959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:15:14.434410 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:15:14.465025 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:15:14.498655 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:15:14.498943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:15:14.499040 systemd[1]: kubelet.service: Consumed 276ms CPU time, 106.1M memory peak. Jul 12 00:15:14.500909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:15:14.703164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:15:14.708769 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:15:14.747858 kubelet[2390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:15:14.747858 kubelet[2390]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 12 00:15:14.747858 kubelet[2390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:15:14.748333 kubelet[2390]: I0712 00:15:14.747882 2390 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:15:15.157903 kubelet[2390]: I0712 00:15:15.157844 2390 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:15:15.157903 kubelet[2390]: I0712 00:15:15.157872 2390 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:15:15.158111 kubelet[2390]: I0712 00:15:15.158092 2390 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:15:15.187089 kubelet[2390]: E0712 00:15:15.187034 2390 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 00:15:15.188306 kubelet[2390]: I0712 00:15:15.188285 2390 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:15:15.195677 kubelet[2390]: I0712 00:15:15.195644 2390 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:15:15.201951 kubelet[2390]: I0712 00:15:15.201925 2390 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:15:15.202221 kubelet[2390]: I0712 00:15:15.202189 2390 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:15:15.202367 kubelet[2390]: I0712 00:15:15.202215 2390 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:15:15.202491 kubelet[2390]: I0712 00:15:15.202372 2390 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:15:15.202491 
kubelet[2390]: I0712 00:15:15.202380 2390 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:15:15.203446 kubelet[2390]: I0712 00:15:15.203425 2390 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:15:15.207256 kubelet[2390]: I0712 00:15:15.207231 2390 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:15:15.207256 kubelet[2390]: I0712 00:15:15.207251 2390 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:15:15.207327 kubelet[2390]: I0712 00:15:15.207274 2390 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:15:15.207327 kubelet[2390]: I0712 00:15:15.207290 2390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:15:15.211540 kubelet[2390]: I0712 00:15:15.211471 2390 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 12 00:15:15.211792 kubelet[2390]: E0712 00:15:15.211747 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:15:15.212093 kubelet[2390]: I0712 00:15:15.212063 2390 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:15:15.212163 kubelet[2390]: E0712 00:15:15.212136 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:15:15.212817 kubelet[2390]: W0712 
00:15:15.212784 2390 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:15:15.215844 kubelet[2390]: I0712 00:15:15.215814 2390 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:15:15.215899 kubelet[2390]: I0712 00:15:15.215866 2390 server.go:1289] "Started kubelet" Jul 12 00:15:15.218046 kubelet[2390]: I0712 00:15:15.217948 2390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:15:15.218992 kubelet[2390]: I0712 00:15:15.218309 2390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:15:15.218992 kubelet[2390]: I0712 00:15:15.218365 2390 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:15:15.218992 kubelet[2390]: I0712 00:15:15.218423 2390 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:15:15.219537 kubelet[2390]: I0712 00:15:15.219491 2390 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:15:15.221124 kubelet[2390]: I0712 00:15:15.220926 2390 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:15:15.221124 kubelet[2390]: I0712 00:15:15.221037 2390 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:15:15.225569 kubelet[2390]: I0712 00:15:15.225518 2390 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:15:15.225681 kubelet[2390]: I0712 00:15:15.225661 2390 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:15:15.227028 kubelet[2390]: E0712 00:15:15.226956 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:15:15.228045 kubelet[2390]: I0712 00:15:15.228019 2390 factory.go:223] Registration of the systemd container factory 
successfully Jul 12 00:15:15.229214 kubelet[2390]: I0712 00:15:15.228118 2390 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:15:15.229370 kubelet[2390]: E0712 00:15:15.227487 2390 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158c0d2b4653f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:15:15.215836479 +0000 UTC m=+0.502375578,LastTimestamp:2025-07-12 00:15:15.215836479 +0000 UTC m=+0.502375578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:15:15.229513 kubelet[2390]: E0712 00:15:15.229492 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:15:15.229935 kubelet[2390]: E0712 00:15:15.229911 2390 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:15:15.230038 kubelet[2390]: E0712 00:15:15.229915 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms" Jul 12 00:15:15.230797 kubelet[2390]: I0712 00:15:15.230779 2390 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:15:15.248502 kubelet[2390]: I0712 00:15:15.248470 2390 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:15:15.248502 kubelet[2390]: I0712 00:15:15.248486 2390 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:15:15.248502 kubelet[2390]: I0712 00:15:15.248501 2390 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:15:15.251137 kubelet[2390]: I0712 00:15:15.251047 2390 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 00:15:15.252756 kubelet[2390]: I0712 00:15:15.252716 2390 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:15:15.252817 kubelet[2390]: I0712 00:15:15.252761 2390 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:15:15.252817 kubelet[2390]: I0712 00:15:15.252779 2390 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:15:15.252817 kubelet[2390]: I0712 00:15:15.252787 2390 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:15:15.253165 kubelet[2390]: E0712 00:15:15.253140 2390 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:15:15.253679 kubelet[2390]: E0712 00:15:15.253619 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:15:15.327745 kubelet[2390]: E0712 00:15:15.327697 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:15:15.354069 kubelet[2390]: E0712 00:15:15.354010 2390 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:15:15.428711 kubelet[2390]: E0712 00:15:15.428589 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:15:15.431219 kubelet[2390]: E0712 00:15:15.431163 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" Jul 12 00:15:15.529794 kubelet[2390]: E0712 00:15:15.529724 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:15:15.555048 kubelet[2390]: E0712 00:15:15.554956 2390 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:15:15.555048 kubelet[2390]: I0712 00:15:15.555034 2390 
policy_none.go:49] "None policy: Start" Jul 12 00:15:15.555048 kubelet[2390]: I0712 00:15:15.555066 2390 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:15:15.555252 kubelet[2390]: I0712 00:15:15.555082 2390 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:15:15.563577 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:15:15.575856 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:15:15.579191 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:15:15.603488 kubelet[2390]: E0712 00:15:15.603377 2390 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:15:15.603765 kubelet[2390]: I0712 00:15:15.603652 2390 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:15:15.603765 kubelet[2390]: I0712 00:15:15.603662 2390 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:15:15.604115 kubelet[2390]: I0712 00:15:15.604091 2390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:15:15.604830 kubelet[2390]: E0712 00:15:15.604801 2390 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:15:15.604874 kubelet[2390]: E0712 00:15:15.604853 2390 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:15:15.705476 kubelet[2390]: I0712 00:15:15.705347 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:15:15.705844 kubelet[2390]: E0712 00:15:15.705796 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jul 12 00:15:15.832642 kubelet[2390]: E0712 00:15:15.832584 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" Jul 12 00:15:15.907999 kubelet[2390]: I0712 00:15:15.907950 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:15:15.908478 kubelet[2390]: E0712 00:15:15.908430 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jul 12 00:15:15.971878 systemd[1]: Created slice kubepods-burstable-pod19c817aa0a0528ca72320b4d1a5015ff.slice - libcontainer container kubepods-burstable-pod19c817aa0a0528ca72320b4d1a5015ff.slice. Jul 12 00:15:15.982820 kubelet[2390]: E0712 00:15:15.982775 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:15:15.986206 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 12 00:15:15.988079 kubelet[2390]: E0712 00:15:15.988056 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:15:15.989789 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 12 00:15:15.991843 kubelet[2390]: E0712 00:15:15.991817 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:15:16.031316 kubelet[2390]: I0712 00:15:16.031279 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:16.031316 kubelet[2390]: I0712 00:15:16.031304 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c817aa0a0528ca72320b4d1a5015ff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c817aa0a0528ca72320b4d1a5015ff\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:16.031316 kubelet[2390]: I0712 00:15:16.031319 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:16.031445 kubelet[2390]: I0712 00:15:16.031335 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:16.031445 kubelet[2390]: I0712 00:15:16.031351 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:16.031445 kubelet[2390]: I0712 00:15:16.031364 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:16.031445 kubelet[2390]: I0712 00:15:16.031376 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c817aa0a0528ca72320b4d1a5015ff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c817aa0a0528ca72320b4d1a5015ff\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:16.031588 kubelet[2390]: I0712 00:15:16.031465 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c817aa0a0528ca72320b4d1a5015ff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19c817aa0a0528ca72320b4d1a5015ff\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:16.031588 kubelet[2390]: I0712 00:15:16.031511 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:16.284347 kubelet[2390]: E0712 00:15:16.284178 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:16.285112 containerd[1565]: time="2025-07-12T00:15:16.285061571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19c817aa0a0528ca72320b4d1a5015ff,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:16.289307 kubelet[2390]: E0712 00:15:16.289256 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:16.289709 containerd[1565]: time="2025-07-12T00:15:16.289681271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:16.293118 kubelet[2390]: E0712 00:15:16.293080 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:16.293541 containerd[1565]: time="2025-07-12T00:15:16.293495432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:16.309616 kubelet[2390]: E0712 00:15:16.309515 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:15:16.310137 kubelet[2390]: I0712 00:15:16.310117 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:15:16.310452 kubelet[2390]: E0712 00:15:16.310431 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" Jul 12 00:15:16.321009 containerd[1565]: time="2025-07-12T00:15:16.320888545Z" level=info msg="connecting to shim f505d596edc1873bba665b659db82ff020dd60e3dd9ad4ad7616952cd22fbd57" address="unix:///run/containerd/s/5147db27e7b33c2d3129688843b6becd429ec0707d3f358075354b3dc6c55b15" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:16.324380 containerd[1565]: time="2025-07-12T00:15:16.324340239Z" level=info msg="connecting to shim 46cc174fedc526486268668b172cafa2e731b53229516c0b453892345400af73" address="unix:///run/containerd/s/6b572d1614a62647f78711a2ccdbfc74785ab297b20d19700aa58c61425f5c21" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:16.343091 containerd[1565]: time="2025-07-12T00:15:16.342957862Z" level=info msg="connecting to shim 8f5a25366b4174401d6bba41f9a2f2cc99e692d100684ef07e6eed81d97d1f40" address="unix:///run/containerd/s/8cc6b49d294a23cb9fcd45bcf60e312a812d9e1c2ff5a8664972aa6719bcf659" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:15:16.354238 systemd[1]: Started cri-containerd-f505d596edc1873bba665b659db82ff020dd60e3dd9ad4ad7616952cd22fbd57.scope - libcontainer container f505d596edc1873bba665b659db82ff020dd60e3dd9ad4ad7616952cd22fbd57. Jul 12 00:15:16.363801 systemd[1]: Started cri-containerd-46cc174fedc526486268668b172cafa2e731b53229516c0b453892345400af73.scope - libcontainer container 46cc174fedc526486268668b172cafa2e731b53229516c0b453892345400af73. 
Jul 12 00:15:16.372285 systemd[1]: Started cri-containerd-8f5a25366b4174401d6bba41f9a2f2cc99e692d100684ef07e6eed81d97d1f40.scope - libcontainer container 8f5a25366b4174401d6bba41f9a2f2cc99e692d100684ef07e6eed81d97d1f40. Jul 12 00:15:16.418645 containerd[1565]: time="2025-07-12T00:15:16.418584842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19c817aa0a0528ca72320b4d1a5015ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f505d596edc1873bba665b659db82ff020dd60e3dd9ad4ad7616952cd22fbd57\"" Jul 12 00:15:16.420251 kubelet[2390]: E0712 00:15:16.420083 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:16.421041 containerd[1565]: time="2025-07-12T00:15:16.420946991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"46cc174fedc526486268668b172cafa2e731b53229516c0b453892345400af73\"" Jul 12 00:15:16.421469 kubelet[2390]: E0712 00:15:16.421346 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:16.426721 containerd[1565]: time="2025-07-12T00:15:16.426675363Z" level=info msg="CreateContainer within sandbox \"46cc174fedc526486268668b172cafa2e731b53229516c0b453892345400af73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:15:16.427086 containerd[1565]: time="2025-07-12T00:15:16.427037500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f5a25366b4174401d6bba41f9a2f2cc99e692d100684ef07e6eed81d97d1f40\"" Jul 12 00:15:16.427645 kubelet[2390]: E0712 
00:15:16.427631 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:16.428277 containerd[1565]: time="2025-07-12T00:15:16.428224631Z" level=info msg="CreateContainer within sandbox \"f505d596edc1873bba665b659db82ff020dd60e3dd9ad4ad7616952cd22fbd57\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:15:16.432649 containerd[1565]: time="2025-07-12T00:15:16.432620025Z" level=info msg="CreateContainer within sandbox \"8f5a25366b4174401d6bba41f9a2f2cc99e692d100684ef07e6eed81d97d1f40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:15:16.439770 containerd[1565]: time="2025-07-12T00:15:16.439742381Z" level=info msg="Container 3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:16.442456 containerd[1565]: time="2025-07-12T00:15:16.442427632Z" level=info msg="Container fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:16.447432 kubelet[2390]: E0712 00:15:16.447401 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:15:16.449522 containerd[1565]: time="2025-07-12T00:15:16.449471017Z" level=info msg="CreateContainer within sandbox \"f505d596edc1873bba665b659db82ff020dd60e3dd9ad4ad7616952cd22fbd57\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0\"" Jul 12 00:15:16.450477 containerd[1565]: time="2025-07-12T00:15:16.450452719Z" level=info msg="StartContainer for 
\"3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0\"" Jul 12 00:15:16.451531 containerd[1565]: time="2025-07-12T00:15:16.451498421Z" level=info msg="connecting to shim 3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0" address="unix:///run/containerd/s/5147db27e7b33c2d3129688843b6becd429ec0707d3f358075354b3dc6c55b15" protocol=ttrpc version=3 Jul 12 00:15:16.454065 containerd[1565]: time="2025-07-12T00:15:16.454016657Z" level=info msg="Container 03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:16.457779 containerd[1565]: time="2025-07-12T00:15:16.457729456Z" level=info msg="CreateContainer within sandbox \"46cc174fedc526486268668b172cafa2e731b53229516c0b453892345400af73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7\"" Jul 12 00:15:16.458405 containerd[1565]: time="2025-07-12T00:15:16.458380201Z" level=info msg="StartContainer for \"fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7\"" Jul 12 00:15:16.459289 containerd[1565]: time="2025-07-12T00:15:16.459268003Z" level=info msg="connecting to shim fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7" address="unix:///run/containerd/s/6b572d1614a62647f78711a2ccdbfc74785ab297b20d19700aa58c61425f5c21" protocol=ttrpc version=3 Jul 12 00:15:16.460825 containerd[1565]: time="2025-07-12T00:15:16.460786663Z" level=info msg="CreateContainer within sandbox \"8f5a25366b4174401d6bba41f9a2f2cc99e692d100684ef07e6eed81d97d1f40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff\"" Jul 12 00:15:16.461245 containerd[1565]: time="2025-07-12T00:15:16.461212732Z" level=info msg="StartContainer for \"03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff\"" Jul 12 00:15:16.462323 containerd[1565]: 
time="2025-07-12T00:15:16.462303179Z" level=info msg="connecting to shim 03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff" address="unix:///run/containerd/s/8cc6b49d294a23cb9fcd45bcf60e312a812d9e1c2ff5a8664972aa6719bcf659" protocol=ttrpc version=3 Jul 12 00:15:16.472166 systemd[1]: Started cri-containerd-3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0.scope - libcontainer container 3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0. Jul 12 00:15:16.475749 systemd[1]: Started cri-containerd-fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7.scope - libcontainer container fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7. Jul 12 00:15:16.489108 systemd[1]: Started cri-containerd-03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff.scope - libcontainer container 03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff. Jul 12 00:15:16.540946 containerd[1565]: time="2025-07-12T00:15:16.539768586Z" level=info msg="StartContainer for \"fede978dcf693d23ca3e1a1ead8e9d6cadfb35b2c6d2266e6fc36832bda612d7\" returns successfully" Jul 12 00:15:16.550543 containerd[1565]: time="2025-07-12T00:15:16.550500766Z" level=info msg="StartContainer for \"3338f49d861a5fbb416ac363d87a5015eda2fbb4e91f973a895d1a67372c5fe0\" returns successfully" Jul 12 00:15:16.558918 containerd[1565]: time="2025-07-12T00:15:16.558874333Z" level=info msg="StartContainer for \"03ec65a23c486696135008f4be0e9905be56f282bd0ca95d0a18c2159da9ccff\" returns successfully" Jul 12 00:15:16.573959 kubelet[2390]: E0712 00:15:16.573893 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.79:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:15:17.114331 kubelet[2390]: I0712 00:15:17.114300 2390 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:15:17.261494 kubelet[2390]: E0712 00:15:17.261459 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:15:17.261646 kubelet[2390]: E0712 00:15:17.261590 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:17.265738 kubelet[2390]: E0712 00:15:17.265711 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:15:17.265834 kubelet[2390]: E0712 00:15:17.265819 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:17.266235 kubelet[2390]: E0712 00:15:17.266214 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:15:17.266319 kubelet[2390]: E0712 00:15:17.266303 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:17.685998 kubelet[2390]: E0712 00:15:17.685908 2390 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:15:17.774078 kubelet[2390]: I0712 00:15:17.774024 2390 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:15:17.827638 kubelet[2390]: I0712 00:15:17.827578 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:17.832541 kubelet[2390]: E0712 
00:15:17.832515 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:17.832541 kubelet[2390]: I0712 00:15:17.832536 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:17.834034 kubelet[2390]: E0712 00:15:17.834014 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:17.834034 kubelet[2390]: I0712 00:15:17.834029 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:17.835107 kubelet[2390]: E0712 00:15:17.835070 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:18.209996 kubelet[2390]: I0712 00:15:18.209949 2390 apiserver.go:52] "Watching apiserver" Jul 12 00:15:18.225896 kubelet[2390]: I0712 00:15:18.225863 2390 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:15:18.266644 kubelet[2390]: I0712 00:15:18.266603 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:18.266814 kubelet[2390]: I0712 00:15:18.266693 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:18.268401 kubelet[2390]: E0712 00:15:18.268370 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:18.268470 
kubelet[2390]: E0712 00:15:18.268371 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:18.268550 kubelet[2390]: E0712 00:15:18.268531 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:18.268593 kubelet[2390]: E0712 00:15:18.268534 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:19.657506 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-9.scope)... Jul 12 00:15:19.657521 systemd[1]: Reloading... Jul 12 00:15:19.749036 zram_generator::config[2720]: No configuration found. Jul 12 00:15:19.839115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:15:19.969766 systemd[1]: Reloading finished in 311 ms. Jul 12 00:15:20.006264 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:15:20.027519 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:15:20.027860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:15:20.027913 systemd[1]: kubelet.service: Consumed 992ms CPU time, 131.3M memory peak. Jul 12 00:15:20.029875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:15:20.263369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:15:20.270313 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:15:20.325946 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:15:20.325946 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:15:20.325946 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:15:20.326420 kubelet[2762]: I0712 00:15:20.326021 2762 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:15:20.335174 kubelet[2762]: I0712 00:15:20.335118 2762 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:15:20.335174 kubelet[2762]: I0712 00:15:20.335146 2762 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:15:20.335383 kubelet[2762]: I0712 00:15:20.335375 2762 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:15:20.336723 kubelet[2762]: I0712 00:15:20.336684 2762 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 00:15:20.338822 kubelet[2762]: I0712 00:15:20.338758 2762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:15:20.342641 kubelet[2762]: I0712 00:15:20.342616 2762 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Jul 12 00:15:20.348424 kubelet[2762]: I0712 00:15:20.348384 2762 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:15:20.348688 kubelet[2762]: I0712 00:15:20.348649 2762 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:15:20.348841 kubelet[2762]: I0712 00:15:20.348675 2762 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Jul 12 00:15:20.348945 kubelet[2762]: I0712 00:15:20.348844 2762 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:15:20.348945 kubelet[2762]: I0712 00:15:20.348852 2762 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:15:20.348945 kubelet[2762]: I0712 00:15:20.348893 2762 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:15:20.349109 kubelet[2762]: I0712 00:15:20.349077 2762 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:15:20.349109 kubelet[2762]: I0712 00:15:20.349102 2762 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:15:20.349186 kubelet[2762]: I0712 00:15:20.349130 2762 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:15:20.349186 kubelet[2762]: I0712 00:15:20.349150 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:15:20.350500 kubelet[2762]: I0712 00:15:20.350454 2762 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 12 00:15:20.354127 kubelet[2762]: I0712 00:15:20.354093 2762 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:15:20.358536 kubelet[2762]: I0712 00:15:20.358505 2762 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:15:20.358689 kubelet[2762]: I0712 00:15:20.358559 2762 server.go:1289] "Started kubelet" Jul 12 00:15:20.359815 kubelet[2762]: I0712 00:15:20.359747 2762 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:15:20.361221 kubelet[2762]: I0712 00:15:20.361138 2762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:15:20.361439 kubelet[2762]: I0712 00:15:20.361415 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:15:20.361590 kubelet[2762]: I0712 
00:15:20.361570 2762 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:15:20.362990 kubelet[2762]: I0712 00:15:20.361828 2762 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:15:20.362990 kubelet[2762]: I0712 00:15:20.362736 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:15:20.364656 kubelet[2762]: E0712 00:15:20.364631 2762 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:15:20.364850 kubelet[2762]: I0712 00:15:20.364825 2762 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:15:20.364995 kubelet[2762]: I0712 00:15:20.364953 2762 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:15:20.365190 kubelet[2762]: I0712 00:15:20.365144 2762 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:15:20.365990 kubelet[2762]: I0712 00:15:20.365580 2762 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:15:20.365990 kubelet[2762]: I0712 00:15:20.365665 2762 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:15:20.368041 kubelet[2762]: I0712 00:15:20.367637 2762 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:15:20.371574 kubelet[2762]: I0712 00:15:20.371531 2762 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:15:20.384525 kubelet[2762]: I0712 00:15:20.384418 2762 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 12 00:15:20.384525 kubelet[2762]: I0712 00:15:20.384469 2762 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:15:20.384525 kubelet[2762]: I0712 00:15:20.384503 2762 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:15:20.384525 kubelet[2762]: I0712 00:15:20.384515 2762 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:15:20.384744 kubelet[2762]: E0712 00:15:20.384592 2762 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:15:20.407396 kubelet[2762]: I0712 00:15:20.407361 2762 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:15:20.407396 kubelet[2762]: I0712 00:15:20.407385 2762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:15:20.407396 kubelet[2762]: I0712 00:15:20.407403 2762 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:15:20.407628 kubelet[2762]: I0712 00:15:20.407541 2762 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:15:20.407628 kubelet[2762]: I0712 00:15:20.407555 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:15:20.407628 kubelet[2762]: I0712 00:15:20.407573 2762 policy_none.go:49] "None policy: Start" Jul 12 00:15:20.407628 kubelet[2762]: I0712 00:15:20.407585 2762 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:15:20.407628 kubelet[2762]: I0712 00:15:20.407596 2762 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:15:20.407800 kubelet[2762]: I0712 00:15:20.407694 2762 state_mem.go:75] "Updated machine memory state" Jul 12 00:15:20.412082 kubelet[2762]: E0712 00:15:20.412062 2762 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:15:20.412402 kubelet[2762]: I0712 
00:15:20.412360 2762 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:15:20.412460 kubelet[2762]: I0712 00:15:20.412381 2762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:15:20.412930 kubelet[2762]: I0712 00:15:20.412559 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:15:20.415049 kubelet[2762]: E0712 00:15:20.415030 2762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:15:20.486173 kubelet[2762]: I0712 00:15:20.486113 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:20.486173 kubelet[2762]: I0712 00:15:20.486157 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:20.486375 kubelet[2762]: I0712 00:15:20.486119 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:20.522500 kubelet[2762]: I0712 00:15:20.522288 2762 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:15:20.531936 kubelet[2762]: I0712 00:15:20.531884 2762 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 00:15:20.532133 kubelet[2762]: I0712 00:15:20.532011 2762 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:15:20.567381 kubelet[2762]: I0712 00:15:20.567323 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:20.567381 kubelet[2762]: 
I0712 00:15:20.567363 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c817aa0a0528ca72320b4d1a5015ff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c817aa0a0528ca72320b4d1a5015ff\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:20.567381 kubelet[2762]: I0712 00:15:20.567390 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c817aa0a0528ca72320b4d1a5015ff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19c817aa0a0528ca72320b4d1a5015ff\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:20.567653 kubelet[2762]: I0712 00:15:20.567413 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:20.567653 kubelet[2762]: I0712 00:15:20.567434 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:20.567653 kubelet[2762]: I0712 00:15:20.567452 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:15:20.567653 kubelet[2762]: I0712 00:15:20.567470 
2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c817aa0a0528ca72320b4d1a5015ff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c817aa0a0528ca72320b4d1a5015ff\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:15:20.567653 kubelet[2762]: I0712 00:15:20.567496 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:20.567804 kubelet[2762]: I0712 00:15:20.567526 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:15:20.795544 kubelet[2762]: E0712 00:15:20.795266 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:20.795544 kubelet[2762]: E0712 00:15:20.795276 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:20.795544 kubelet[2762]: E0712 00:15:20.795277 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:21.351002 kubelet[2762]: I0712 00:15:21.350627 2762 apiserver.go:52] "Watching apiserver" Jul 12 00:15:21.366044 kubelet[2762]: I0712 00:15:21.366005 
2762 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 12 00:15:21.399211 kubelet[2762]: I0712 00:15:21.399038 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:15:21.399792 kubelet[2762]: E0712 00:15:21.399393 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:21.399792 kubelet[2762]: I0712 00:15:21.399510 2762 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:15:21.407541 kubelet[2762]: E0712 00:15:21.407409 2762 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:15:21.409694 kubelet[2762]: E0712 00:15:21.409429 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:21.409694 kubelet[2762]: E0712 00:15:21.409478 2762 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:15:21.409694 kubelet[2762]: E0712 00:15:21.409634 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:21.431823 kubelet[2762]: I0712 00:15:21.431726 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.431708889 podStartE2EDuration="1.431708889s" podCreationTimestamp="2025-07-12 00:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:21.423343924 +0000 UTC m=+1.146324546" watchObservedRunningTime="2025-07-12 00:15:21.431708889 +0000 UTC m=+1.154689491"
Jul 12 00:15:21.432066 kubelet[2762]: I0712 00:15:21.431844 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.43184132 podStartE2EDuration="1.43184132s" podCreationTimestamp="2025-07-12 00:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:21.431612416 +0000 UTC m=+1.154593028" watchObservedRunningTime="2025-07-12 00:15:21.43184132 +0000 UTC m=+1.154821922"
Jul 12 00:15:21.444595 kubelet[2762]: I0712 00:15:21.444524 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.44450451 podStartE2EDuration="1.44450451s" podCreationTimestamp="2025-07-12 00:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:21.444352824 +0000 UTC m=+1.167333426" watchObservedRunningTime="2025-07-12 00:15:21.44450451 +0000 UTC m=+1.167485112"
Jul 12 00:15:22.400878 kubelet[2762]: E0712 00:15:22.400835 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:22.401532 kubelet[2762]: E0712 00:15:22.400944 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:22.401532 kubelet[2762]: E0712 00:15:22.401208 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:23.402516 kubelet[2762]: E0712 00:15:23.402475 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:23.403064 kubelet[2762]: E0712 00:15:23.402484 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:26.796322 kubelet[2762]: I0712 00:15:26.796255 2762 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:15:26.796930 kubelet[2762]: I0712 00:15:26.796834 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:15:26.797021 containerd[1565]: time="2025-07-12T00:15:26.796629572Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:15:26.916195 kubelet[2762]: E0712 00:15:26.916141 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:27.411337 kubelet[2762]: E0712 00:15:27.410109 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:27.735068 systemd[1]: Created slice kubepods-besteffort-poda13c3fab_c649_492b_896e_b007d401d506.slice - libcontainer container kubepods-besteffort-poda13c3fab_c649_492b_896e_b007d401d506.slice.
Jul 12 00:15:27.813660 kubelet[2762]: I0712 00:15:27.813600 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a13c3fab-c649-492b-896e-b007d401d506-kube-proxy\") pod \"kube-proxy-95vtp\" (UID: \"a13c3fab-c649-492b-896e-b007d401d506\") " pod="kube-system/kube-proxy-95vtp"
Jul 12 00:15:27.813660 kubelet[2762]: I0712 00:15:27.813655 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a13c3fab-c649-492b-896e-b007d401d506-xtables-lock\") pod \"kube-proxy-95vtp\" (UID: \"a13c3fab-c649-492b-896e-b007d401d506\") " pod="kube-system/kube-proxy-95vtp"
Jul 12 00:15:27.813660 kubelet[2762]: I0712 00:15:27.813679 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a13c3fab-c649-492b-896e-b007d401d506-lib-modules\") pod \"kube-proxy-95vtp\" (UID: \"a13c3fab-c649-492b-896e-b007d401d506\") " pod="kube-system/kube-proxy-95vtp"
Jul 12 00:15:27.814341 kubelet[2762]: I0712 00:15:27.813704 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bc9\" (UniqueName: \"kubernetes.io/projected/a13c3fab-c649-492b-896e-b007d401d506-kube-api-access-27bc9\") pod \"kube-proxy-95vtp\" (UID: \"a13c3fab-c649-492b-896e-b007d401d506\") " pod="kube-system/kube-proxy-95vtp"
Jul 12 00:15:27.977985 systemd[1]: Created slice kubepods-besteffort-pod84c3c33c_46b8_4882_96a1_21049879c1a0.slice - libcontainer container kubepods-besteffort-pod84c3c33c_46b8_4882_96a1_21049879c1a0.slice.
Jul 12 00:15:28.015156 kubelet[2762]: I0712 00:15:28.014950 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84c3c33c-46b8-4882-96a1-21049879c1a0-var-lib-calico\") pod \"tigera-operator-747864d56d-8rzwl\" (UID: \"84c3c33c-46b8-4882-96a1-21049879c1a0\") " pod="tigera-operator/tigera-operator-747864d56d-8rzwl"
Jul 12 00:15:28.015156 kubelet[2762]: I0712 00:15:28.015029 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h24g\" (UniqueName: \"kubernetes.io/projected/84c3c33c-46b8-4882-96a1-21049879c1a0-kube-api-access-7h24g\") pod \"tigera-operator-747864d56d-8rzwl\" (UID: \"84c3c33c-46b8-4882-96a1-21049879c1a0\") " pod="tigera-operator/tigera-operator-747864d56d-8rzwl"
Jul 12 00:15:28.050137 kubelet[2762]: E0712 00:15:28.050088 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:28.050922 containerd[1565]: time="2025-07-12T00:15:28.050868236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95vtp,Uid:a13c3fab-c649-492b-896e-b007d401d506,Namespace:kube-system,Attempt:0,}"
Jul 12 00:15:28.080246 containerd[1565]: time="2025-07-12T00:15:28.080183785Z" level=info msg="connecting to shim 286fe502fcc38e56bbcea1b934a94e4b7021676ab54abf0dad892a7f7f8db9ce" address="unix:///run/containerd/s/debb47e9d6e47a33c6235cd2f271bffc14f4b2dfe7a175e1848d8dce102db368" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:15:28.127269 systemd[1]: Started cri-containerd-286fe502fcc38e56bbcea1b934a94e4b7021676ab54abf0dad892a7f7f8db9ce.scope - libcontainer container 286fe502fcc38e56bbcea1b934a94e4b7021676ab54abf0dad892a7f7f8db9ce.
Jul 12 00:15:28.165087 containerd[1565]: time="2025-07-12T00:15:28.165029307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95vtp,Uid:a13c3fab-c649-492b-896e-b007d401d506,Namespace:kube-system,Attempt:0,} returns sandbox id \"286fe502fcc38e56bbcea1b934a94e4b7021676ab54abf0dad892a7f7f8db9ce\""
Jul 12 00:15:28.165966 kubelet[2762]: E0712 00:15:28.165926 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:28.171911 containerd[1565]: time="2025-07-12T00:15:28.171870395Z" level=info msg="CreateContainer within sandbox \"286fe502fcc38e56bbcea1b934a94e4b7021676ab54abf0dad892a7f7f8db9ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:15:28.184870 containerd[1565]: time="2025-07-12T00:15:28.183753238Z" level=info msg="Container 575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:15:28.194089 containerd[1565]: time="2025-07-12T00:15:28.194017439Z" level=info msg="CreateContainer within sandbox \"286fe502fcc38e56bbcea1b934a94e4b7021676ab54abf0dad892a7f7f8db9ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292\""
Jul 12 00:15:28.195852 containerd[1565]: time="2025-07-12T00:15:28.195793889Z" level=info msg="StartContainer for \"575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292\""
Jul 12 00:15:28.198574 containerd[1565]: time="2025-07-12T00:15:28.198518986Z" level=info msg="connecting to shim 575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292" address="unix:///run/containerd/s/debb47e9d6e47a33c6235cd2f271bffc14f4b2dfe7a175e1848d8dce102db368" protocol=ttrpc version=3
Jul 12 00:15:28.228237 systemd[1]: Started cri-containerd-575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292.scope - libcontainer container 575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292.
Jul 12 00:15:28.282708 containerd[1565]: time="2025-07-12T00:15:28.282567386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8rzwl,Uid:84c3c33c-46b8-4882-96a1-21049879c1a0,Namespace:tigera-operator,Attempt:0,}"
Jul 12 00:15:28.283474 containerd[1565]: time="2025-07-12T00:15:28.283403351Z" level=info msg="StartContainer for \"575b6bfab8ca44cfc5d5ff82c9ccd9e560a906f57a3c5171ea460a85e193e292\" returns successfully"
Jul 12 00:15:28.311887 containerd[1565]: time="2025-07-12T00:15:28.311821168Z" level=info msg="connecting to shim 743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae" address="unix:///run/containerd/s/7a56e2e9438f6a00ab396dad6d71e1ad8a67ba1bb02a0f2e6dfd5ec4a01791be" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:15:28.345163 systemd[1]: Started cri-containerd-743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae.scope - libcontainer container 743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae.
Jul 12 00:15:28.401758 containerd[1565]: time="2025-07-12T00:15:28.401705861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8rzwl,Uid:84c3c33c-46b8-4882-96a1-21049879c1a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae\""
Jul 12 00:15:28.405442 containerd[1565]: time="2025-07-12T00:15:28.405392631Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 12 00:15:28.414221 kubelet[2762]: E0712 00:15:28.414182 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:30.025056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969292739.mount: Deactivated successfully.
Jul 12 00:15:30.913370 containerd[1565]: time="2025-07-12T00:15:30.913314272Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:15:30.914087 containerd[1565]: time="2025-07-12T00:15:30.914044187Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 12 00:15:30.915196 containerd[1565]: time="2025-07-12T00:15:30.915175108Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:15:30.918031 containerd[1565]: time="2025-07-12T00:15:30.917988950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:15:30.918467 containerd[1565]: time="2025-07-12T00:15:30.918444899Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.512994038s"
Jul 12 00:15:30.918514 containerd[1565]: time="2025-07-12T00:15:30.918470056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 12 00:15:30.924825 containerd[1565]: time="2025-07-12T00:15:30.924777672Z" level=info msg="CreateContainer within sandbox \"743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 12 00:15:30.931723 containerd[1565]: time="2025-07-12T00:15:30.931671130Z" level=info msg="Container a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:15:30.935360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201342140.mount: Deactivated successfully.
Jul 12 00:15:30.939135 containerd[1565]: time="2025-07-12T00:15:30.939079278Z" level=info msg="CreateContainer within sandbox \"743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\""
Jul 12 00:15:30.939627 containerd[1565]: time="2025-07-12T00:15:30.939594329Z" level=info msg="StartContainer for \"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\""
Jul 12 00:15:30.941134 containerd[1565]: time="2025-07-12T00:15:30.940496078Z" level=info msg="connecting to shim a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59" address="unix:///run/containerd/s/7a56e2e9438f6a00ab396dad6d71e1ad8a67ba1bb02a0f2e6dfd5ec4a01791be" protocol=ttrpc version=3
Jul 12 00:15:31.003164 systemd[1]: Started cri-containerd-a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59.scope - libcontainer container a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59.
Jul 12 00:15:31.037131 containerd[1565]: time="2025-07-12T00:15:31.037071538Z" level=info msg="StartContainer for \"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\" returns successfully"
Jul 12 00:15:31.432308 kubelet[2762]: I0712 00:15:31.432231 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8rzwl" podStartSLOduration=1.916927348 podStartE2EDuration="4.432211383s" podCreationTimestamp="2025-07-12 00:15:27 +0000 UTC" firstStartedPulling="2025-07-12 00:15:28.40384295 +0000 UTC m=+8.126823552" lastFinishedPulling="2025-07-12 00:15:30.919126985 +0000 UTC m=+10.642107587" observedRunningTime="2025-07-12 00:15:31.431390357 +0000 UTC m=+11.154370959" watchObservedRunningTime="2025-07-12 00:15:31.432211383 +0000 UTC m=+11.155191985"
Jul 12 00:15:31.432855 kubelet[2762]: I0712 00:15:31.432505 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-95vtp" podStartSLOduration=4.432496961 podStartE2EDuration="4.432496961s" podCreationTimestamp="2025-07-12 00:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:28.426503833 +0000 UTC m=+8.149484435" watchObservedRunningTime="2025-07-12 00:15:31.432496961 +0000 UTC m=+11.155477563"
Jul 12 00:15:31.907034 kubelet[2762]: E0712 00:15:31.906939 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:31.921184 kubelet[2762]: E0712 00:15:31.921137 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:32.423713 kubelet[2762]: E0712 00:15:32.423653 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:33.206481 systemd[1]: cri-containerd-a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59.scope: Deactivated successfully.
Jul 12 00:15:33.210223 containerd[1565]: time="2025-07-12T00:15:33.210184903Z" level=info msg="received exit event container_id:\"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\" id:\"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\" pid:3095 exit_status:1 exited_at:{seconds:1752279333 nanos:209777016}"
Jul 12 00:15:33.211225 containerd[1565]: time="2025-07-12T00:15:33.211199834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\" id:\"a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59\" pid:3095 exit_status:1 exited_at:{seconds:1752279333 nanos:209777016}"
Jul 12 00:15:33.259274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59-rootfs.mount: Deactivated successfully.
Jul 12 00:15:34.427816 kubelet[2762]: I0712 00:15:34.427782 2762 scope.go:117] "RemoveContainer" containerID="a8102484c42b95c360cc5003b19947a0e09e77111671d6802d737a42005eff59"
Jul 12 00:15:34.429771 containerd[1565]: time="2025-07-12T00:15:34.429717763Z" level=info msg="CreateContainer within sandbox \"743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 12 00:15:34.441714 containerd[1565]: time="2025-07-12T00:15:34.441482929Z" level=info msg="Container a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:15:34.448646 containerd[1565]: time="2025-07-12T00:15:34.448596348Z" level=info msg="CreateContainer within sandbox \"743ec20e7c6119dc1e77d675f324d96204308e805467563f54b20d039f44afae\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783\""
Jul 12 00:15:34.449194 containerd[1565]: time="2025-07-12T00:15:34.449169257Z" level=info msg="StartContainer for \"a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783\""
Jul 12 00:15:34.450251 containerd[1565]: time="2025-07-12T00:15:34.450205929Z" level=info msg="connecting to shim a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783" address="unix:///run/containerd/s/7a56e2e9438f6a00ab396dad6d71e1ad8a67ba1bb02a0f2e6dfd5ec4a01791be" protocol=ttrpc version=3
Jul 12 00:15:34.479245 systemd[1]: Started cri-containerd-a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783.scope - libcontainer container a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783.
Jul 12 00:15:34.509306 containerd[1565]: time="2025-07-12T00:15:34.509255932Z" level=info msg="StartContainer for \"a427ae05d886cd0f441cc1a88abf5eb3c14741cd29bcd182651cd380d48eb783\" returns successfully"
Jul 12 00:15:36.439744 sudo[1786]: pam_unix(sudo:session): session closed for user root
Jul 12 00:15:36.441391 sshd[1785]: Connection closed by 10.0.0.1 port 33196
Jul 12 00:15:36.442302 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:36.447209 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:33196.service: Deactivated successfully.
Jul 12 00:15:36.449584 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:15:36.449816 systemd[1]: session-9.scope: Consumed 5.661s CPU time, 225.8M memory peak.
Jul 12 00:15:36.454007 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:15:36.455869 systemd-logind[1543]: Removed session 9.
Jul 12 00:15:40.990906 systemd[1]: Created slice kubepods-besteffort-pod3ae315da_266f_418c_b95d_2a5bbaed1b76.slice - libcontainer container kubepods-besteffort-pod3ae315da_266f_418c_b95d_2a5bbaed1b76.slice.
Jul 12 00:15:40.993672 kubelet[2762]: I0712 00:15:40.993623 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ae315da-266f-418c-b95d-2a5bbaed1b76-tigera-ca-bundle\") pod \"calico-typha-5966d47666-shd7j\" (UID: \"3ae315da-266f-418c-b95d-2a5bbaed1b76\") " pod="calico-system/calico-typha-5966d47666-shd7j"
Jul 12 00:15:40.993672 kubelet[2762]: I0712 00:15:40.993672 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ae315da-266f-418c-b95d-2a5bbaed1b76-typha-certs\") pod \"calico-typha-5966d47666-shd7j\" (UID: \"3ae315da-266f-418c-b95d-2a5bbaed1b76\") " pod="calico-system/calico-typha-5966d47666-shd7j"
Jul 12 00:15:40.995131 kubelet[2762]: I0712 00:15:40.993695 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cpbp\" (UniqueName: \"kubernetes.io/projected/3ae315da-266f-418c-b95d-2a5bbaed1b76-kube-api-access-8cpbp\") pod \"calico-typha-5966d47666-shd7j\" (UID: \"3ae315da-266f-418c-b95d-2a5bbaed1b76\") " pod="calico-system/calico-typha-5966d47666-shd7j"
Jul 12 00:15:41.297283 kubelet[2762]: E0712 00:15:41.297160 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:41.297747 containerd[1565]: time="2025-07-12T00:15:41.297711093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5966d47666-shd7j,Uid:3ae315da-266f-418c-b95d-2a5bbaed1b76,Namespace:calico-system,Attempt:0,}"
Jul 12 00:15:41.338590 containerd[1565]: time="2025-07-12T00:15:41.338543130Z" level=info msg="connecting to shim 652e7a3c885c4df29f4724b6efcd4fd01de62185e0f9c38dd7fbc91939a8fda1" address="unix:///run/containerd/s/03d2aa25bcc40b99cbfb460058d642c44e6df232ae537437b3685e1914f0f2fc" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:15:41.364649 systemd[1]: Created slice kubepods-besteffort-pod2305aed0_6974_466b_b381_55126c8d8b47.slice - libcontainer container kubepods-besteffort-pod2305aed0_6974_466b_b381_55126c8d8b47.slice.
Jul 12 00:15:41.374212 systemd[1]: Started cri-containerd-652e7a3c885c4df29f4724b6efcd4fd01de62185e0f9c38dd7fbc91939a8fda1.scope - libcontainer container 652e7a3c885c4df29f4724b6efcd4fd01de62185e0f9c38dd7fbc91939a8fda1.
Jul 12 00:15:41.396762 kubelet[2762]: I0712 00:15:41.396723 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-var-run-calico\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397004 kubelet[2762]: I0712 00:15:41.396923 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-cni-net-dir\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397004 kubelet[2762]: I0712 00:15:41.396943 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-lib-modules\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397134 kubelet[2762]: I0712 00:15:41.396958 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-cni-log-dir\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397233 kubelet[2762]: I0712 00:15:41.397185 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2305aed0-6974-466b-b381-55126c8d8b47-node-certs\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397233 kubelet[2762]: I0712 00:15:41.397206 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-xtables-lock\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397334 kubelet[2762]: I0712 00:15:41.397313 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-flexvol-driver-host\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397475 kubelet[2762]: I0712 00:15:41.397422 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-var-lib-calico\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397475 kubelet[2762]: I0712 00:15:41.397440 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6p6\" (UniqueName: \"kubernetes.io/projected/2305aed0-6974-466b-b381-55126c8d8b47-kube-api-access-cl6p6\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397475 kubelet[2762]: I0712 00:15:41.397456 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-cni-bin-dir\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397616 kubelet[2762]: I0712 00:15:41.397597 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2305aed0-6974-466b-b381-55126c8d8b47-tigera-ca-bundle\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.397724 kubelet[2762]: I0712 00:15:41.397710 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2305aed0-6974-466b-b381-55126c8d8b47-policysync\") pod \"calico-node-p78mq\" (UID: \"2305aed0-6974-466b-b381-55126c8d8b47\") " pod="calico-system/calico-node-p78mq"
Jul 12 00:15:41.420347 containerd[1565]: time="2025-07-12T00:15:41.420289810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5966d47666-shd7j,Uid:3ae315da-266f-418c-b95d-2a5bbaed1b76,Namespace:calico-system,Attempt:0,} returns sandbox id \"652e7a3c885c4df29f4724b6efcd4fd01de62185e0f9c38dd7fbc91939a8fda1\""
Jul 12 00:15:41.421312 kubelet[2762]: E0712 00:15:41.421280 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:15:41.422453 containerd[1565]: time="2025-07-12T00:15:41.422427459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 12 00:15:41.501656 kubelet[2762]: E0712 00:15:41.501601 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.501656 kubelet[2762]: W0712 00:15:41.501624 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.502541 kubelet[2762]: E0712 00:15:41.502521 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.504744 kubelet[2762]: E0712 00:15:41.504598 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.504744 kubelet[2762]: W0712 00:15:41.504614 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.504744 kubelet[2762]: E0712 00:15:41.504628 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.508313 kubelet[2762]: E0712 00:15:41.508295 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.508313 kubelet[2762]: W0712 00:15:41.508310 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.508389 kubelet[2762]: E0712 00:15:41.508324 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.638341 kubelet[2762]: E0712 00:15:41.638206 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2"
Jul 12 00:15:41.670092 containerd[1565]: time="2025-07-12T00:15:41.670023658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p78mq,Uid:2305aed0-6974-466b-b381-55126c8d8b47,Namespace:calico-system,Attempt:0,}"
Jul 12 00:15:41.688540 kubelet[2762]: E0712 00:15:41.688500 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.688540 kubelet[2762]: W0712 00:15:41.688528 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.688540 kubelet[2762]: E0712 00:15:41.688552 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.688762 kubelet[2762]: E0712 00:15:41.688751 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.688762 kubelet[2762]: W0712 00:15:41.688760 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.688835 kubelet[2762]: E0712 00:15:41.688771 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.688982 kubelet[2762]: E0712 00:15:41.688961 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.689045 kubelet[2762]: W0712 00:15:41.689004 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.689045 kubelet[2762]: E0712 00:15:41.689016 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.689355 kubelet[2762]: E0712 00:15:41.689339 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.689355 kubelet[2762]: W0712 00:15:41.689351 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.689440 kubelet[2762]: E0712 00:15:41.689384 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.689628 kubelet[2762]: E0712 00:15:41.689600 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.689628 kubelet[2762]: W0712 00:15:41.689624 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.689702 kubelet[2762]: E0712 00:15:41.689634 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.689849 kubelet[2762]: E0712 00:15:41.689832 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.689849 kubelet[2762]: W0712 00:15:41.689843 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.689927 kubelet[2762]: E0712 00:15:41.689853 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.690116 kubelet[2762]: E0712 00:15:41.690100 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.690116 kubelet[2762]: W0712 00:15:41.690111 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.690220 kubelet[2762]: E0712 00:15:41.690121 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.690318 kubelet[2762]: E0712 00:15:41.690304 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.690318 kubelet[2762]: W0712 00:15:41.690314 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.690435 kubelet[2762]: E0712 00:15:41.690324 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.690529 kubelet[2762]: E0712 00:15:41.690515 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.690529 kubelet[2762]: W0712 00:15:41.690524 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.690658 kubelet[2762]: E0712 00:15:41.690536 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:15:41.690694 containerd[1565]: time="2025-07-12T00:15:41.690640680Z" level=info msg="connecting to shim 7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae" address="unix:///run/containerd/s/5e44c1bcfb1fe6567835aa469aa3b7edbc12aac5a0f6d53171ebb9369f794e59" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:15:41.690996 kubelet[2762]: E0712 00:15:41.690890 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:15:41.690996 kubelet[2762]: W0712 00:15:41.690905 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:15:41.690996 kubelet[2762]: E0712 00:15:41.690917 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.691290 kubelet[2762]: E0712 00:15:41.691274 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.691336 kubelet[2762]: W0712 00:15:41.691286 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.691336 kubelet[2762]: E0712 00:15:41.691318 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.691498 kubelet[2762]: E0712 00:15:41.691486 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.691498 kubelet[2762]: W0712 00:15:41.691496 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.691557 kubelet[2762]: E0712 00:15:41.691505 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.691679 kubelet[2762]: E0712 00:15:41.691666 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.691679 kubelet[2762]: W0712 00:15:41.691676 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.691728 kubelet[2762]: E0712 00:15:41.691687 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.691853 kubelet[2762]: E0712 00:15:41.691840 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.691853 kubelet[2762]: W0712 00:15:41.691850 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.691905 kubelet[2762]: E0712 00:15:41.691859 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.692074 kubelet[2762]: E0712 00:15:41.692053 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.692074 kubelet[2762]: W0712 00:15:41.692071 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.692121 kubelet[2762]: E0712 00:15:41.692080 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.692272 kubelet[2762]: E0712 00:15:41.692260 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.692304 kubelet[2762]: W0712 00:15:41.692272 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.692304 kubelet[2762]: E0712 00:15:41.692281 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.692481 kubelet[2762]: E0712 00:15:41.692468 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.692481 kubelet[2762]: W0712 00:15:41.692479 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.692531 kubelet[2762]: E0712 00:15:41.692487 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.692671 kubelet[2762]: E0712 00:15:41.692658 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.692671 kubelet[2762]: W0712 00:15:41.692669 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.692726 kubelet[2762]: E0712 00:15:41.692678 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.692873 kubelet[2762]: E0712 00:15:41.692856 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.692873 kubelet[2762]: W0712 00:15:41.692869 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.692946 kubelet[2762]: E0712 00:15:41.692878 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.693122 kubelet[2762]: E0712 00:15:41.693105 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.693122 kubelet[2762]: W0712 00:15:41.693117 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.693199 kubelet[2762]: E0712 00:15:41.693128 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.700562 kubelet[2762]: E0712 00:15:41.700539 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.700562 kubelet[2762]: W0712 00:15:41.700555 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.700562 kubelet[2762]: E0712 00:15:41.700569 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.700672 kubelet[2762]: I0712 00:15:41.700594 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2-socket-dir\") pod \"csi-node-driver-tcrj6\" (UID: \"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2\") " pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:41.700964 kubelet[2762]: E0712 00:15:41.700942 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.700964 kubelet[2762]: W0712 00:15:41.700956 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.701053 kubelet[2762]: E0712 00:15:41.700965 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.701088 kubelet[2762]: I0712 00:15:41.701077 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2-kubelet-dir\") pod \"csi-node-driver-tcrj6\" (UID: \"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2\") " pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:41.701503 kubelet[2762]: E0712 00:15:41.701487 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.701503 kubelet[2762]: W0712 00:15:41.701500 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.701567 kubelet[2762]: E0712 00:15:41.701511 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.701879 kubelet[2762]: E0712 00:15:41.701864 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.701879 kubelet[2762]: W0712 00:15:41.701876 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.701925 kubelet[2762]: E0712 00:15:41.701886 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.702259 kubelet[2762]: E0712 00:15:41.702242 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.702259 kubelet[2762]: W0712 00:15:41.702256 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.702321 kubelet[2762]: E0712 00:15:41.702267 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.702342 kubelet[2762]: I0712 00:15:41.702327 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2-registration-dir\") pod \"csi-node-driver-tcrj6\" (UID: \"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2\") " pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:41.702607 kubelet[2762]: E0712 00:15:41.702592 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.702607 kubelet[2762]: W0712 00:15:41.702605 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.702663 kubelet[2762]: E0712 00:15:41.702617 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.702752 kubelet[2762]: I0712 00:15:41.702733 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lktb\" (UniqueName: \"kubernetes.io/projected/e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2-kube-api-access-4lktb\") pod \"csi-node-driver-tcrj6\" (UID: \"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2\") " pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:41.702964 kubelet[2762]: E0712 00:15:41.702950 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.702964 kubelet[2762]: W0712 00:15:41.702961 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.703041 kubelet[2762]: E0712 00:15:41.702984 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.703322 kubelet[2762]: E0712 00:15:41.703308 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.703322 kubelet[2762]: W0712 00:15:41.703319 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.703375 kubelet[2762]: E0712 00:15:41.703330 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.703555 kubelet[2762]: E0712 00:15:41.703541 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.703555 kubelet[2762]: W0712 00:15:41.703553 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.703669 kubelet[2762]: E0712 00:15:41.703564 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.703669 kubelet[2762]: I0712 00:15:41.703616 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2-varrun\") pod \"csi-node-driver-tcrj6\" (UID: \"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2\") " pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:41.703780 kubelet[2762]: E0712 00:15:41.703766 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.703780 kubelet[2762]: W0712 00:15:41.703777 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.703832 kubelet[2762]: E0712 00:15:41.703787 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.703949 kubelet[2762]: E0712 00:15:41.703938 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.703949 kubelet[2762]: W0712 00:15:41.703946 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.704013 kubelet[2762]: E0712 00:15:41.703953 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.704169 kubelet[2762]: E0712 00:15:41.704155 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.704169 kubelet[2762]: W0712 00:15:41.704166 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.704223 kubelet[2762]: E0712 00:15:41.704174 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.704354 kubelet[2762]: E0712 00:15:41.704341 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.704354 kubelet[2762]: W0712 00:15:41.704350 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.704405 kubelet[2762]: E0712 00:15:41.704358 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.704574 kubelet[2762]: E0712 00:15:41.704561 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.704574 kubelet[2762]: W0712 00:15:41.704571 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.704624 kubelet[2762]: E0712 00:15:41.704580 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.704774 kubelet[2762]: E0712 00:15:41.704761 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.704774 kubelet[2762]: W0712 00:15:41.704770 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.704837 kubelet[2762]: E0712 00:15:41.704777 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.715535 systemd[1]: Started cri-containerd-7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae.scope - libcontainer container 7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae. Jul 12 00:15:41.754541 containerd[1565]: time="2025-07-12T00:15:41.754490923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p78mq,Uid:2305aed0-6974-466b-b381-55126c8d8b47,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\"" Jul 12 00:15:41.806126 kubelet[2762]: E0712 00:15:41.806080 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.806126 kubelet[2762]: W0712 00:15:41.806108 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.806126 kubelet[2762]: E0712 00:15:41.806132 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.806375 kubelet[2762]: E0712 00:15:41.806360 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.806375 kubelet[2762]: W0712 00:15:41.806369 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.806429 kubelet[2762]: E0712 00:15:41.806377 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.806715 kubelet[2762]: E0712 00:15:41.806687 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.806750 kubelet[2762]: W0712 00:15:41.806713 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.806750 kubelet[2762]: E0712 00:15:41.806741 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.807053 kubelet[2762]: E0712 00:15:41.807036 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.807053 kubelet[2762]: W0712 00:15:41.807047 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.807129 kubelet[2762]: E0712 00:15:41.807066 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.807291 kubelet[2762]: E0712 00:15:41.807268 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.807291 kubelet[2762]: W0712 00:15:41.807280 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.807291 kubelet[2762]: E0712 00:15:41.807289 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.807557 kubelet[2762]: E0712 00:15:41.807538 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.807557 kubelet[2762]: W0712 00:15:41.807553 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.807605 kubelet[2762]: E0712 00:15:41.807562 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.807752 kubelet[2762]: E0712 00:15:41.807736 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.807752 kubelet[2762]: W0712 00:15:41.807745 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.807752 kubelet[2762]: E0712 00:15:41.807753 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.807952 kubelet[2762]: E0712 00:15:41.807936 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.807952 kubelet[2762]: W0712 00:15:41.807945 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.808020 kubelet[2762]: E0712 00:15:41.807953 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.808184 kubelet[2762]: E0712 00:15:41.808166 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.808184 kubelet[2762]: W0712 00:15:41.808176 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.808184 kubelet[2762]: E0712 00:15:41.808183 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.808369 kubelet[2762]: E0712 00:15:41.808353 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.808369 kubelet[2762]: W0712 00:15:41.808362 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.808422 kubelet[2762]: E0712 00:15:41.808370 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.808604 kubelet[2762]: E0712 00:15:41.808585 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.808604 kubelet[2762]: W0712 00:15:41.808596 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.808657 kubelet[2762]: E0712 00:15:41.808605 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.808792 kubelet[2762]: E0712 00:15:41.808776 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.808792 kubelet[2762]: W0712 00:15:41.808785 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.808839 kubelet[2762]: E0712 00:15:41.808794 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.809049 kubelet[2762]: E0712 00:15:41.809032 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.809049 kubelet[2762]: W0712 00:15:41.809042 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.809106 kubelet[2762]: E0712 00:15:41.809051 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.809274 kubelet[2762]: E0712 00:15:41.809256 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.809274 kubelet[2762]: W0712 00:15:41.809266 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.809274 kubelet[2762]: E0712 00:15:41.809274 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.809476 kubelet[2762]: E0712 00:15:41.809458 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.809476 kubelet[2762]: W0712 00:15:41.809471 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.809524 kubelet[2762]: E0712 00:15:41.809479 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.809678 kubelet[2762]: E0712 00:15:41.809661 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.809678 kubelet[2762]: W0712 00:15:41.809672 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.809733 kubelet[2762]: E0712 00:15:41.809681 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.809869 kubelet[2762]: E0712 00:15:41.809852 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.809869 kubelet[2762]: W0712 00:15:41.809865 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.809915 kubelet[2762]: E0712 00:15:41.809874 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.810078 kubelet[2762]: E0712 00:15:41.810062 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.810078 kubelet[2762]: W0712 00:15:41.810072 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.810130 kubelet[2762]: E0712 00:15:41.810081 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.810277 kubelet[2762]: E0712 00:15:41.810260 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.810277 kubelet[2762]: W0712 00:15:41.810270 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.810324 kubelet[2762]: E0712 00:15:41.810278 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.810495 kubelet[2762]: E0712 00:15:41.810475 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.810495 kubelet[2762]: W0712 00:15:41.810488 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.810552 kubelet[2762]: E0712 00:15:41.810500 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.810768 kubelet[2762]: E0712 00:15:41.810748 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.810768 kubelet[2762]: W0712 00:15:41.810762 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.810854 kubelet[2762]: E0712 00:15:41.810774 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.811012 kubelet[2762]: E0712 00:15:41.810989 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.811012 kubelet[2762]: W0712 00:15:41.811001 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.811012 kubelet[2762]: E0712 00:15:41.811009 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.811247 kubelet[2762]: E0712 00:15:41.811229 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.811247 kubelet[2762]: W0712 00:15:41.811239 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.811247 kubelet[2762]: E0712 00:15:41.811248 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.811682 kubelet[2762]: E0712 00:15:41.811660 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.811682 kubelet[2762]: W0712 00:15:41.811674 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.811813 kubelet[2762]: E0712 00:15:41.811685 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:41.811983 kubelet[2762]: E0712 00:15:41.811945 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.811983 kubelet[2762]: W0712 00:15:41.811957 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.812081 kubelet[2762]: E0712 00:15:41.811968 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:41.819354 kubelet[2762]: E0712 00:15:41.819331 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:41.819354 kubelet[2762]: W0712 00:15:41.819345 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:41.819354 kubelet[2762]: E0712 00:15:41.819358 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:42.855678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727492970.mount: Deactivated successfully. Jul 12 00:15:43.385529 kubelet[2762]: E0712 00:15:43.385472 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2" Jul 12 00:15:43.678392 containerd[1565]: time="2025-07-12T00:15:43.678331024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:43.679153 containerd[1565]: time="2025-07-12T00:15:43.679103426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 12 00:15:43.680343 containerd[1565]: time="2025-07-12T00:15:43.680310525Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:43.682268 containerd[1565]: time="2025-07-12T00:15:43.682232437Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:43.682946 containerd[1565]: time="2025-07-12T00:15:43.682916693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.260460048s" Jul 12 00:15:43.682994 containerd[1565]: time="2025-07-12T00:15:43.682947080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 12 00:15:43.686639 containerd[1565]: time="2025-07-12T00:15:43.684427993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:15:43.699088 containerd[1565]: time="2025-07-12T00:15:43.699043455Z" level=info msg="CreateContainer within sandbox \"652e7a3c885c4df29f4724b6efcd4fd01de62185e0f9c38dd7fbc91939a8fda1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:15:43.710304 containerd[1565]: time="2025-07-12T00:15:43.709532079Z" level=info msg="Container 35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:43.717951 containerd[1565]: time="2025-07-12T00:15:43.717917921Z" level=info msg="CreateContainer within sandbox \"652e7a3c885c4df29f4724b6efcd4fd01de62185e0f9c38dd7fbc91939a8fda1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d\"" Jul 12 00:15:43.718473 containerd[1565]: time="2025-07-12T00:15:43.718443509Z" level=info msg="StartContainer for 
\"35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d\"" Jul 12 00:15:43.719534 containerd[1565]: time="2025-07-12T00:15:43.719497370Z" level=info msg="connecting to shim 35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d" address="unix:///run/containerd/s/03d2aa25bcc40b99cbfb460058d642c44e6df232ae537437b3685e1914f0f2fc" protocol=ttrpc version=3 Jul 12 00:15:43.749145 systemd[1]: Started cri-containerd-35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d.scope - libcontainer container 35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d. Jul 12 00:15:43.797550 containerd[1565]: time="2025-07-12T00:15:43.797501075Z" level=info msg="StartContainer for \"35fab46531971539b0468a6743d935ed17427b9043f5e7120bfa0662bdbcb68d\" returns successfully" Jul 12 00:15:44.455959 kubelet[2762]: E0712 00:15:44.455923 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:44.467926 kubelet[2762]: I0712 00:15:44.467808 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5966d47666-shd7j" podStartSLOduration=2.206282761 podStartE2EDuration="4.46778576s" podCreationTimestamp="2025-07-12 00:15:40 +0000 UTC" firstStartedPulling="2025-07-12 00:15:41.422210472 +0000 UTC m=+21.145191074" lastFinishedPulling="2025-07-12 00:15:43.683713471 +0000 UTC m=+23.406694073" observedRunningTime="2025-07-12 00:15:44.466806039 +0000 UTC m=+24.189786651" watchObservedRunningTime="2025-07-12 00:15:44.46778576 +0000 UTC m=+24.190766362" Jul 12 00:15:44.511589 kubelet[2762]: E0712 00:15:44.511535 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.511589 kubelet[2762]: W0712 00:15:44.511562 2762 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.511589 kubelet[2762]: E0712 00:15:44.511584 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.511816 kubelet[2762]: E0712 00:15:44.511782 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.511816 kubelet[2762]: W0712 00:15:44.511789 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.511816 kubelet[2762]: E0712 00:15:44.511798 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.512411 kubelet[2762]: E0712 00:15:44.512132 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.512411 kubelet[2762]: W0712 00:15:44.512164 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.512411 kubelet[2762]: E0712 00:15:44.512198 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.512672 kubelet[2762]: E0712 00:15:44.512594 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.512672 kubelet[2762]: W0712 00:15:44.512629 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.512804 kubelet[2762]: E0712 00:15:44.512676 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.513185 kubelet[2762]: E0712 00:15:44.513153 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.513185 kubelet[2762]: W0712 00:15:44.513167 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.513185 kubelet[2762]: E0712 00:15:44.513179 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.513405 kubelet[2762]: E0712 00:15:44.513383 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.513405 kubelet[2762]: W0712 00:15:44.513395 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.513405 kubelet[2762]: E0712 00:15:44.513405 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.513681 kubelet[2762]: E0712 00:15:44.513649 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.513681 kubelet[2762]: W0712 00:15:44.513673 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.513681 kubelet[2762]: E0712 00:15:44.513686 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.513925 kubelet[2762]: E0712 00:15:44.513907 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.513925 kubelet[2762]: W0712 00:15:44.513914 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.513925 kubelet[2762]: E0712 00:15:44.513922 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.514163 kubelet[2762]: E0712 00:15:44.514157 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.514197 kubelet[2762]: W0712 00:15:44.514165 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.514197 kubelet[2762]: E0712 00:15:44.514173 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.514459 kubelet[2762]: E0712 00:15:44.514438 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.514459 kubelet[2762]: W0712 00:15:44.514453 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.514543 kubelet[2762]: E0712 00:15:44.514465 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.514766 kubelet[2762]: E0712 00:15:44.514735 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.514766 kubelet[2762]: W0712 00:15:44.514751 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.514766 kubelet[2762]: E0712 00:15:44.514764 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.515475 kubelet[2762]: E0712 00:15:44.515441 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.515475 kubelet[2762]: W0712 00:15:44.515458 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.515475 kubelet[2762]: E0712 00:15:44.515480 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.518136 kubelet[2762]: E0712 00:15:44.518113 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.518136 kubelet[2762]: W0712 00:15:44.518128 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.518136 kubelet[2762]: E0712 00:15:44.518140 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.518373 kubelet[2762]: E0712 00:15:44.518316 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.518373 kubelet[2762]: W0712 00:15:44.518324 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.518373 kubelet[2762]: E0712 00:15:44.518332 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.518516 kubelet[2762]: E0712 00:15:44.518478 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.518516 kubelet[2762]: W0712 00:15:44.518486 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.518516 kubelet[2762]: E0712 00:15:44.518493 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.525771 kubelet[2762]: E0712 00:15:44.525721 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.525771 kubelet[2762]: W0712 00:15:44.525752 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.525771 kubelet[2762]: E0712 00:15:44.525777 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.526082 kubelet[2762]: E0712 00:15:44.526055 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.526082 kubelet[2762]: W0712 00:15:44.526069 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.526082 kubelet[2762]: E0712 00:15:44.526079 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.526310 kubelet[2762]: E0712 00:15:44.526285 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.526310 kubelet[2762]: W0712 00:15:44.526298 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.526310 kubelet[2762]: E0712 00:15:44.526307 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.526719 kubelet[2762]: E0712 00:15:44.526672 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.526719 kubelet[2762]: W0712 00:15:44.526710 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.526772 kubelet[2762]: E0712 00:15:44.526731 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.526931 kubelet[2762]: E0712 00:15:44.526910 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.526931 kubelet[2762]: W0712 00:15:44.526920 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.526931 kubelet[2762]: E0712 00:15:44.526929 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.527199 kubelet[2762]: E0712 00:15:44.527176 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.527199 kubelet[2762]: W0712 00:15:44.527188 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.527199 kubelet[2762]: E0712 00:15:44.527196 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.527415 kubelet[2762]: E0712 00:15:44.527401 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.527415 kubelet[2762]: W0712 00:15:44.527411 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.527466 kubelet[2762]: E0712 00:15:44.527419 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.527600 kubelet[2762]: E0712 00:15:44.527586 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.527600 kubelet[2762]: W0712 00:15:44.527595 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.527645 kubelet[2762]: E0712 00:15:44.527604 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.527793 kubelet[2762]: E0712 00:15:44.527778 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.527793 kubelet[2762]: W0712 00:15:44.527787 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.527842 kubelet[2762]: E0712 00:15:44.527795 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.528005 kubelet[2762]: E0712 00:15:44.527965 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.528005 kubelet[2762]: W0712 00:15:44.528001 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.528059 kubelet[2762]: E0712 00:15:44.528010 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.528185 kubelet[2762]: E0712 00:15:44.528172 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.528185 kubelet[2762]: W0712 00:15:44.528181 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.528229 kubelet[2762]: E0712 00:15:44.528188 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.528379 kubelet[2762]: E0712 00:15:44.528365 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.528379 kubelet[2762]: W0712 00:15:44.528375 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.528423 kubelet[2762]: E0712 00:15:44.528383 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.528663 kubelet[2762]: E0712 00:15:44.528647 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.528663 kubelet[2762]: W0712 00:15:44.528660 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.528720 kubelet[2762]: E0712 00:15:44.528670 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.528860 kubelet[2762]: E0712 00:15:44.528845 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.528860 kubelet[2762]: W0712 00:15:44.528854 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.528911 kubelet[2762]: E0712 00:15:44.528862 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.529059 kubelet[2762]: E0712 00:15:44.529045 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.529059 kubelet[2762]: W0712 00:15:44.529054 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.529107 kubelet[2762]: E0712 00:15:44.529062 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.529282 kubelet[2762]: E0712 00:15:44.529268 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.529282 kubelet[2762]: W0712 00:15:44.529278 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.529329 kubelet[2762]: E0712 00:15:44.529285 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:44.529595 kubelet[2762]: E0712 00:15:44.529577 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.529595 kubelet[2762]: W0712 00:15:44.529591 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.529643 kubelet[2762]: E0712 00:15:44.529602 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:15:44.529815 kubelet[2762]: E0712 00:15:44.529801 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:15:44.529815 kubelet[2762]: W0712 00:15:44.529811 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:15:44.529857 kubelet[2762]: E0712 00:15:44.529819 2762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:15:45.176609 containerd[1565]: time="2025-07-12T00:15:45.176539514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:45.177368 containerd[1565]: time="2025-07-12T00:15:45.177331272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 12 00:15:45.178416 containerd[1565]: time="2025-07-12T00:15:45.178384220Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:45.180298 containerd[1565]: time="2025-07-12T00:15:45.180263322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:45.180839 containerd[1565]: time="2025-07-12T00:15:45.180796052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.496333855s" Jul 12 00:15:45.180873 containerd[1565]: time="2025-07-12T00:15:45.180836929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 12 00:15:45.186088 containerd[1565]: time="2025-07-12T00:15:45.186023676Z" level=info msg="CreateContainer within sandbox \"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:15:45.196890 containerd[1565]: time="2025-07-12T00:15:45.196834692Z" level=info msg="Container e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:45.207218 containerd[1565]: time="2025-07-12T00:15:45.207145498Z" level=info msg="CreateContainer within sandbox \"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\"" Jul 12 00:15:45.207641 containerd[1565]: time="2025-07-12T00:15:45.207605111Z" level=info msg="StartContainer for \"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\"" Jul 12 00:15:45.209749 containerd[1565]: time="2025-07-12T00:15:45.209712732Z" level=info msg="connecting to shim e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4" address="unix:///run/containerd/s/5e44c1bcfb1fe6567835aa469aa3b7edbc12aac5a0f6d53171ebb9369f794e59" protocol=ttrpc version=3 Jul 12 00:15:45.243271 systemd[1]: Started cri-containerd-e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4.scope - libcontainer container e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4. Jul 12 00:15:45.304250 containerd[1565]: time="2025-07-12T00:15:45.304196361Z" level=info msg="StartContainer for \"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\" returns successfully" Jul 12 00:15:45.309350 systemd[1]: cri-containerd-e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4.scope: Deactivated successfully. 
Jul 12 00:15:45.311660 containerd[1565]: time="2025-07-12T00:15:45.311576871Z" level=info msg="received exit event container_id:\"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\" id:\"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\" pid:3502 exited_at:{seconds:1752279345 nanos:311313756}" Jul 12 00:15:45.311660 containerd[1565]: time="2025-07-12T00:15:45.311654768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\" id:\"e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4\" pid:3502 exited_at:{seconds:1752279345 nanos:311313756}" Jul 12 00:15:45.334030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e74a0e3f3d246ebb424ef136ed8b8ab214abe3e4138a458251dbe9cc6a7794a4-rootfs.mount: Deactivated successfully. Jul 12 00:15:45.386348 kubelet[2762]: E0712 00:15:45.385853 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2" Jul 12 00:15:45.459993 kubelet[2762]: I0712 00:15:45.459846 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:15:45.460542 kubelet[2762]: E0712 00:15:45.460509 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:46.465602 containerd[1565]: time="2025-07-12T00:15:46.465256092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:15:47.385758 kubelet[2762]: E0712 00:15:47.385701 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2" Jul 12 00:15:49.385296 kubelet[2762]: E0712 00:15:49.385198 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2" Jul 12 00:15:50.202252 containerd[1565]: time="2025-07-12T00:15:50.202179722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:50.203122 containerd[1565]: time="2025-07-12T00:15:50.203088619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 12 00:15:50.204161 containerd[1565]: time="2025-07-12T00:15:50.204128854Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:50.206280 containerd[1565]: time="2025-07-12T00:15:50.206242443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:15:50.206938 containerd[1565]: time="2025-07-12T00:15:50.206901020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.741588722s" Jul 12 00:15:50.206938 containerd[1565]: time="2025-07-12T00:15:50.206933992Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 12 00:15:50.214001 containerd[1565]: time="2025-07-12T00:15:50.212236783Z" level=info msg="CreateContainer within sandbox \"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:15:50.224201 containerd[1565]: time="2025-07-12T00:15:50.224125863Z" level=info msg="Container da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:15:50.235773 containerd[1565]: time="2025-07-12T00:15:50.235705784Z" level=info msg="CreateContainer within sandbox \"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\"" Jul 12 00:15:50.238534 containerd[1565]: time="2025-07-12T00:15:50.236228086Z" level=info msg="StartContainer for \"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\"" Jul 12 00:15:50.238534 containerd[1565]: time="2025-07-12T00:15:50.237860131Z" level=info msg="connecting to shim da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe" address="unix:///run/containerd/s/5e44c1bcfb1fe6567835aa469aa3b7edbc12aac5a0f6d53171ebb9369f794e59" protocol=ttrpc version=3 Jul 12 00:15:50.265144 systemd[1]: Started cri-containerd-da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe.scope - libcontainer container da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe. 
Jul 12 00:15:50.311814 containerd[1565]: time="2025-07-12T00:15:50.311769846Z" level=info msg="StartContainer for \"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\" returns successfully" Jul 12 00:15:51.451020 kubelet[2762]: E0712 00:15:51.450357 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2" Jul 12 00:15:51.647162 systemd[1]: cri-containerd-da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe.scope: Deactivated successfully. Jul 12 00:15:51.647695 systemd[1]: cri-containerd-da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe.scope: Consumed 594ms CPU time, 178.2M memory peak, 3.7M read from disk, 171.2M written to disk. Jul 12 00:15:51.649132 containerd[1565]: time="2025-07-12T00:15:51.649040304Z" level=info msg="received exit event container_id:\"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\" id:\"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\" pid:3563 exited_at:{seconds:1752279351 nanos:648740371}" Jul 12 00:15:51.649440 containerd[1565]: time="2025-07-12T00:15:51.649158196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\" id:\"da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe\" pid:3563 exited_at:{seconds:1752279351 nanos:648740371}" Jul 12 00:15:51.680310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da8efbb218b7ef5fd34c9f33fba53110f76abcf35d92b573e5f5c7553867adbe-rootfs.mount: Deactivated successfully. 
Jul 12 00:15:51.715753 kubelet[2762]: I0712 00:15:51.712292 2762 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:15:52.239440 systemd[1]: Created slice kubepods-burstable-podf8bf9582_cbe5_456f_9dad_145a35bef4ab.slice - libcontainer container kubepods-burstable-podf8bf9582_cbe5_456f_9dad_145a35bef4ab.slice. Jul 12 00:15:52.245915 systemd[1]: Created slice kubepods-besteffort-pod8afc212d_432e_4273_981e_858c04dc7166.slice - libcontainer container kubepods-besteffort-pod8afc212d_432e_4273_981e_858c04dc7166.slice. Jul 12 00:15:52.252072 kubelet[2762]: I0712 00:15:52.252019 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6fwd\" (UniqueName: \"kubernetes.io/projected/f8bf9582-cbe5-456f-9dad-145a35bef4ab-kube-api-access-z6fwd\") pod \"coredns-674b8bbfcf-7z6fh\" (UID: \"f8bf9582-cbe5-456f-9dad-145a35bef4ab\") " pod="kube-system/coredns-674b8bbfcf-7z6fh" Jul 12 00:15:52.252072 kubelet[2762]: I0712 00:15:52.252058 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghx7g\" (UniqueName: \"kubernetes.io/projected/8afc212d-432e-4273-981e-858c04dc7166-kube-api-access-ghx7g\") pod \"calico-kube-controllers-6c7c8d84b4-n9rjr\" (UID: \"8afc212d-432e-4273-981e-858c04dc7166\") " pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" Jul 12 00:15:52.252072 kubelet[2762]: I0712 00:15:52.252080 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8afc212d-432e-4273-981e-858c04dc7166-tigera-ca-bundle\") pod \"calico-kube-controllers-6c7c8d84b4-n9rjr\" (UID: \"8afc212d-432e-4273-981e-858c04dc7166\") " pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" Jul 12 00:15:52.252328 kubelet[2762]: I0712 00:15:52.252097 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8bf9582-cbe5-456f-9dad-145a35bef4ab-config-volume\") pod \"coredns-674b8bbfcf-7z6fh\" (UID: \"f8bf9582-cbe5-456f-9dad-145a35bef4ab\") " pod="kube-system/coredns-674b8bbfcf-7z6fh" Jul 12 00:15:52.469945 systemd[1]: Created slice kubepods-burstable-pod04092512_a49c_468e_9eaf_21773edfd62d.slice - libcontainer container kubepods-burstable-pod04092512_a49c_468e_9eaf_21773edfd62d.slice. Jul 12 00:15:52.480149 systemd[1]: Created slice kubepods-besteffort-poddb7a4da8_0407_419f_8801_eebf7ffb5cf2.slice - libcontainer container kubepods-besteffort-poddb7a4da8_0407_419f_8801_eebf7ffb5cf2.slice. Jul 12 00:15:52.488093 containerd[1565]: time="2025-07-12T00:15:52.488019857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:15:52.492346 systemd[1]: Created slice kubepods-besteffort-pod230d3ea2_c732_406e_9ef2_32f2ab376115.slice - libcontainer container kubepods-besteffort-pod230d3ea2_c732_406e_9ef2_32f2ab376115.slice. Jul 12 00:15:52.500271 systemd[1]: Created slice kubepods-besteffort-podaae18ec2_873b_4e28_bd0a_dfab20f1704b.slice - libcontainer container kubepods-besteffort-podaae18ec2_873b_4e28_bd0a_dfab20f1704b.slice. Jul 12 00:15:52.507105 systemd[1]: Created slice kubepods-besteffort-pod1b588b74_5695_4ab5_8362_dfbefb32b123.slice - libcontainer container kubepods-besteffort-pod1b588b74_5695_4ab5_8362_dfbefb32b123.slice. 
Jul 12 00:15:52.543947 kubelet[2762]: E0712 00:15:52.543671 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:52.544813 containerd[1565]: time="2025-07-12T00:15:52.544756621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z6fh,Uid:f8bf9582-cbe5-456f-9dad-145a35bef4ab,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:52.555914 kubelet[2762]: I0712 00:15:52.555383 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db7a4da8-0407-419f-8801-eebf7ffb5cf2-calico-apiserver-certs\") pod \"calico-apiserver-8564bd9cc-hp9rg\" (UID: \"db7a4da8-0407-419f-8801-eebf7ffb5cf2\") " pod="calico-apiserver/calico-apiserver-8564bd9cc-hp9rg" Jul 12 00:15:52.555914 kubelet[2762]: I0712 00:15:52.555577 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgpm\" (UniqueName: \"kubernetes.io/projected/1b588b74-5695-4ab5-8362-dfbefb32b123-kube-api-access-7pgpm\") pod \"calico-apiserver-8564bd9cc-frrtr\" (UID: \"1b588b74-5695-4ab5-8362-dfbefb32b123\") " pod="calico-apiserver/calico-apiserver-8564bd9cc-frrtr" Jul 12 00:15:52.555914 kubelet[2762]: I0712 00:15:52.555606 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b588b74-5695-4ab5-8362-dfbefb32b123-calico-apiserver-certs\") pod \"calico-apiserver-8564bd9cc-frrtr\" (UID: \"1b588b74-5695-4ab5-8362-dfbefb32b123\") " pod="calico-apiserver/calico-apiserver-8564bd9cc-frrtr" Jul 12 00:15:52.556252 kubelet[2762]: I0712 00:15:52.556230 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/04092512-a49c-468e-9eaf-21773edfd62d-config-volume\") pod \"coredns-674b8bbfcf-xt68g\" (UID: \"04092512-a49c-468e-9eaf-21773edfd62d\") " pod="kube-system/coredns-674b8bbfcf-xt68g" Jul 12 00:15:52.556357 kubelet[2762]: I0712 00:15:52.556340 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/230d3ea2-c732-406e-9ef2-32f2ab376115-goldmane-key-pair\") pod \"goldmane-768f4c5c69-wz64p\" (UID: \"230d3ea2-c732-406e-9ef2-32f2ab376115\") " pod="calico-system/goldmane-768f4c5c69-wz64p" Jul 12 00:15:52.556538 kubelet[2762]: I0712 00:15:52.556521 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxjm\" (UniqueName: \"kubernetes.io/projected/04092512-a49c-468e-9eaf-21773edfd62d-kube-api-access-lfxjm\") pod \"coredns-674b8bbfcf-xt68g\" (UID: \"04092512-a49c-468e-9eaf-21773edfd62d\") " pod="kube-system/coredns-674b8bbfcf-xt68g" Jul 12 00:15:52.556632 kubelet[2762]: I0712 00:15:52.556617 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/230d3ea2-c732-406e-9ef2-32f2ab376115-config\") pod \"goldmane-768f4c5c69-wz64p\" (UID: \"230d3ea2-c732-406e-9ef2-32f2ab376115\") " pod="calico-system/goldmane-768f4c5c69-wz64p" Jul 12 00:15:52.557090 kubelet[2762]: I0712 00:15:52.556939 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-backend-key-pair\") pod \"whisker-64d85f4454-6nmbr\" (UID: \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\") " pod="calico-system/whisker-64d85f4454-6nmbr" Jul 12 00:15:52.557146 kubelet[2762]: I0712 00:15:52.557117 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-ca-bundle\") pod \"whisker-64d85f4454-6nmbr\" (UID: \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\") " pod="calico-system/whisker-64d85f4454-6nmbr" Jul 12 00:15:52.557198 kubelet[2762]: I0712 00:15:52.557150 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctcf\" (UniqueName: \"kubernetes.io/projected/aae18ec2-873b-4e28-bd0a-dfab20f1704b-kube-api-access-2ctcf\") pod \"whisker-64d85f4454-6nmbr\" (UID: \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\") " pod="calico-system/whisker-64d85f4454-6nmbr" Jul 12 00:15:52.557198 kubelet[2762]: I0712 00:15:52.557182 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqqz\" (UniqueName: \"kubernetes.io/projected/230d3ea2-c732-406e-9ef2-32f2ab376115-kube-api-access-bjqqz\") pod \"goldmane-768f4c5c69-wz64p\" (UID: \"230d3ea2-c732-406e-9ef2-32f2ab376115\") " pod="calico-system/goldmane-768f4c5c69-wz64p" Jul 12 00:15:52.557289 kubelet[2762]: I0712 00:15:52.557229 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pgv\" (UniqueName: \"kubernetes.io/projected/db7a4da8-0407-419f-8801-eebf7ffb5cf2-kube-api-access-v5pgv\") pod \"calico-apiserver-8564bd9cc-hp9rg\" (UID: \"db7a4da8-0407-419f-8801-eebf7ffb5cf2\") " pod="calico-apiserver/calico-apiserver-8564bd9cc-hp9rg" Jul 12 00:15:52.557289 kubelet[2762]: I0712 00:15:52.557262 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/230d3ea2-c732-406e-9ef2-32f2ab376115-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-wz64p\" (UID: \"230d3ea2-c732-406e-9ef2-32f2ab376115\") " pod="calico-system/goldmane-768f4c5c69-wz64p" Jul 12 00:15:52.557557 containerd[1565]: 
time="2025-07-12T00:15:52.557505113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c8d84b4-n9rjr,Uid:8afc212d-432e-4273-981e-858c04dc7166,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:52.723350 containerd[1565]: time="2025-07-12T00:15:52.723255969Z" level=error msg="Failed to destroy network for sandbox \"f22a3bb7552577cd21d84a742b30f244c9f722b37126765f98020ec3b41b4d31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.726940 systemd[1]: run-netns-cni\x2d9f3e6d69\x2d14d5\x2d35f3\x2db3a5\x2d0231d9d8519e.mount: Deactivated successfully. Jul 12 00:15:52.728514 containerd[1565]: time="2025-07-12T00:15:52.728441158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z6fh,Uid:f8bf9582-cbe5-456f-9dad-145a35bef4ab,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22a3bb7552577cd21d84a742b30f244c9f722b37126765f98020ec3b41b4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.732615 containerd[1565]: time="2025-07-12T00:15:52.732539384Z" level=error msg="Failed to destroy network for sandbox \"de87150cab5e1915b2e2d3893dec7565185db9720c607e087b48b2a654c17c2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.734253 containerd[1565]: time="2025-07-12T00:15:52.734168313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c8d84b4-n9rjr,Uid:8afc212d-432e-4273-981e-858c04dc7166,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"de87150cab5e1915b2e2d3893dec7565185db9720c607e087b48b2a654c17c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.736200 systemd[1]: run-netns-cni\x2dccfa8f4e\x2deb3a\x2d0c71\x2d91c8\x2d7dd91a172d49.mount: Deactivated successfully. Jul 12 00:15:52.746535 kubelet[2762]: E0712 00:15:52.746370 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22a3bb7552577cd21d84a742b30f244c9f722b37126765f98020ec3b41b4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.746535 kubelet[2762]: E0712 00:15:52.746470 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22a3bb7552577cd21d84a742b30f244c9f722b37126765f98020ec3b41b4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7z6fh" Jul 12 00:15:52.746535 kubelet[2762]: E0712 00:15:52.746504 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22a3bb7552577cd21d84a742b30f244c9f722b37126765f98020ec3b41b4d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7z6fh" Jul 12 00:15:52.746734 kubelet[2762]: E0712 00:15:52.746569 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-7z6fh_kube-system(f8bf9582-cbe5-456f-9dad-145a35bef4ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7z6fh_kube-system(f8bf9582-cbe5-456f-9dad-145a35bef4ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f22a3bb7552577cd21d84a742b30f244c9f722b37126765f98020ec3b41b4d31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7z6fh" podUID="f8bf9582-cbe5-456f-9dad-145a35bef4ab" Jul 12 00:15:52.747184 kubelet[2762]: E0712 00:15:52.746370 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de87150cab5e1915b2e2d3893dec7565185db9720c607e087b48b2a654c17c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.747345 kubelet[2762]: E0712 00:15:52.747282 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de87150cab5e1915b2e2d3893dec7565185db9720c607e087b48b2a654c17c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" Jul 12 00:15:52.747435 kubelet[2762]: E0712 00:15:52.747368 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de87150cab5e1915b2e2d3893dec7565185db9720c607e087b48b2a654c17c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" Jul 12 00:15:52.747586 kubelet[2762]: E0712 00:15:52.747530 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c7c8d84b4-n9rjr_calico-system(8afc212d-432e-4273-981e-858c04dc7166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c7c8d84b4-n9rjr_calico-system(8afc212d-432e-4273-981e-858c04dc7166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de87150cab5e1915b2e2d3893dec7565185db9720c607e087b48b2a654c17c2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" podUID="8afc212d-432e-4273-981e-858c04dc7166" Jul 12 00:15:52.777695 kubelet[2762]: E0712 00:15:52.777560 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:15:52.778264 containerd[1565]: time="2025-07-12T00:15:52.778200057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xt68g,Uid:04092512-a49c-468e-9eaf-21773edfd62d,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:52.788051 containerd[1565]: time="2025-07-12T00:15:52.787887741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-hp9rg,Uid:db7a4da8-0407-419f-8801-eebf7ffb5cf2,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:15:52.797390 containerd[1565]: time="2025-07-12T00:15:52.797328500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-wz64p,Uid:230d3ea2-c732-406e-9ef2-32f2ab376115,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:52.805061 containerd[1565]: 
time="2025-07-12T00:15:52.804911221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d85f4454-6nmbr,Uid:aae18ec2-873b-4e28-bd0a-dfab20f1704b,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:52.812369 containerd[1565]: time="2025-07-12T00:15:52.812298675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-frrtr,Uid:1b588b74-5695-4ab5-8362-dfbefb32b123,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:15:52.893012 containerd[1565]: time="2025-07-12T00:15:52.892583303Z" level=error msg="Failed to destroy network for sandbox \"9413985663dc2fee3d8b386137d29155e3dfdeae45b1f28890c572d4bf6a6d0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.901314 containerd[1565]: time="2025-07-12T00:15:52.901234800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xt68g,Uid:04092512-a49c-468e-9eaf-21773edfd62d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9413985663dc2fee3d8b386137d29155e3dfdeae45b1f28890c572d4bf6a6d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.901655 kubelet[2762]: E0712 00:15:52.901601 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9413985663dc2fee3d8b386137d29155e3dfdeae45b1f28890c572d4bf6a6d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.901757 kubelet[2762]: E0712 00:15:52.901672 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"9413985663dc2fee3d8b386137d29155e3dfdeae45b1f28890c572d4bf6a6d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xt68g" Jul 12 00:15:52.901757 kubelet[2762]: E0712 00:15:52.901702 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9413985663dc2fee3d8b386137d29155e3dfdeae45b1f28890c572d4bf6a6d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xt68g" Jul 12 00:15:52.902832 kubelet[2762]: E0712 00:15:52.901800 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xt68g_kube-system(04092512-a49c-468e-9eaf-21773edfd62d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xt68g_kube-system(04092512-a49c-468e-9eaf-21773edfd62d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9413985663dc2fee3d8b386137d29155e3dfdeae45b1f28890c572d4bf6a6d0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xt68g" podUID="04092512-a49c-468e-9eaf-21773edfd62d" Jul 12 00:15:52.905712 containerd[1565]: time="2025-07-12T00:15:52.905400233Z" level=error msg="Failed to destroy network for sandbox \"afcd34e0116bd9a6ab4e74bf6886ff329c2a7db5c630b52962a16f3a245968c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 12 00:15:52.910253 containerd[1565]: time="2025-07-12T00:15:52.910190760Z" level=error msg="Failed to destroy network for sandbox \"293c6e889f82f11410a1639e9c409f5ca897139048944a554d27a0920b111189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.912852 containerd[1565]: time="2025-07-12T00:15:52.912795081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-wz64p,Uid:230d3ea2-c732-406e-9ef2-32f2ab376115,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afcd34e0116bd9a6ab4e74bf6886ff329c2a7db5c630b52962a16f3a245968c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.913571 kubelet[2762]: E0712 00:15:52.913239 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afcd34e0116bd9a6ab4e74bf6886ff329c2a7db5c630b52962a16f3a245968c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.913571 kubelet[2762]: E0712 00:15:52.913325 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afcd34e0116bd9a6ab4e74bf6886ff329c2a7db5c630b52962a16f3a245968c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-wz64p" Jul 12 00:15:52.913571 kubelet[2762]: E0712 
00:15:52.913353 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afcd34e0116bd9a6ab4e74bf6886ff329c2a7db5c630b52962a16f3a245968c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-wz64p" Jul 12 00:15:52.913731 kubelet[2762]: E0712 00:15:52.913436 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-wz64p_calico-system(230d3ea2-c732-406e-9ef2-32f2ab376115)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-wz64p_calico-system(230d3ea2-c732-406e-9ef2-32f2ab376115)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afcd34e0116bd9a6ab4e74bf6886ff329c2a7db5c630b52962a16f3a245968c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-wz64p" podUID="230d3ea2-c732-406e-9ef2-32f2ab376115" Jul 12 00:15:52.915236 containerd[1565]: time="2025-07-12T00:15:52.915194057Z" level=error msg="Failed to destroy network for sandbox \"95a5f1206fd8043da505c601e18fae1ccb06484673d9199a823f2b10148dc10d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.915467 containerd[1565]: time="2025-07-12T00:15:52.915411825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d85f4454-6nmbr,Uid:aae18ec2-873b-4e28-bd0a-dfab20f1704b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"293c6e889f82f11410a1639e9c409f5ca897139048944a554d27a0920b111189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.916136 kubelet[2762]: E0712 00:15:52.916091 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293c6e889f82f11410a1639e9c409f5ca897139048944a554d27a0920b111189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.916197 kubelet[2762]: E0712 00:15:52.916142 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293c6e889f82f11410a1639e9c409f5ca897139048944a554d27a0920b111189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64d85f4454-6nmbr" Jul 12 00:15:52.916197 kubelet[2762]: E0712 00:15:52.916167 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293c6e889f82f11410a1639e9c409f5ca897139048944a554d27a0920b111189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64d85f4454-6nmbr" Jul 12 00:15:52.916271 kubelet[2762]: E0712 00:15:52.916221 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64d85f4454-6nmbr_calico-system(aae18ec2-873b-4e28-bd0a-dfab20f1704b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-64d85f4454-6nmbr_calico-system(aae18ec2-873b-4e28-bd0a-dfab20f1704b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"293c6e889f82f11410a1639e9c409f5ca897139048944a554d27a0920b111189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64d85f4454-6nmbr" podUID="aae18ec2-873b-4e28-bd0a-dfab20f1704b" Jul 12 00:15:52.916942 containerd[1565]: time="2025-07-12T00:15:52.916897946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-hp9rg,Uid:db7a4da8-0407-419f-8801-eebf7ffb5cf2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a5f1206fd8043da505c601e18fae1ccb06484673d9199a823f2b10148dc10d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.917554 kubelet[2762]: E0712 00:15:52.917500 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a5f1206fd8043da505c601e18fae1ccb06484673d9199a823f2b10148dc10d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.917611 kubelet[2762]: E0712 00:15:52.917550 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a5f1206fd8043da505c601e18fae1ccb06484673d9199a823f2b10148dc10d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-8564bd9cc-hp9rg" Jul 12 00:15:52.917611 kubelet[2762]: E0712 00:15:52.917574 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a5f1206fd8043da505c601e18fae1ccb06484673d9199a823f2b10148dc10d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8564bd9cc-hp9rg" Jul 12 00:15:52.917687 kubelet[2762]: E0712 00:15:52.917618 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8564bd9cc-hp9rg_calico-apiserver(db7a4da8-0407-419f-8801-eebf7ffb5cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8564bd9cc-hp9rg_calico-apiserver(db7a4da8-0407-419f-8801-eebf7ffb5cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95a5f1206fd8043da505c601e18fae1ccb06484673d9199a823f2b10148dc10d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8564bd9cc-hp9rg" podUID="db7a4da8-0407-419f-8801-eebf7ffb5cf2" Jul 12 00:15:52.921712 containerd[1565]: time="2025-07-12T00:15:52.921659308Z" level=error msg="Failed to destroy network for sandbox \"267140fc5d861563a5069c5faa1e62c5742b37a3ee62c415eaaa7b54be9c9744\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.924638 containerd[1565]: time="2025-07-12T00:15:52.924598498Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-frrtr,Uid:1b588b74-5695-4ab5-8362-dfbefb32b123,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"267140fc5d861563a5069c5faa1e62c5742b37a3ee62c415eaaa7b54be9c9744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.924860 kubelet[2762]: E0712 00:15:52.924810 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"267140fc5d861563a5069c5faa1e62c5742b37a3ee62c415eaaa7b54be9c9744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:52.925062 kubelet[2762]: E0712 00:15:52.924880 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"267140fc5d861563a5069c5faa1e62c5742b37a3ee62c415eaaa7b54be9c9744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8564bd9cc-frrtr" Jul 12 00:15:52.925062 kubelet[2762]: E0712 00:15:52.924905 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"267140fc5d861563a5069c5faa1e62c5742b37a3ee62c415eaaa7b54be9c9744\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8564bd9cc-frrtr" Jul 12 00:15:52.925062 kubelet[2762]: E0712 00:15:52.924963 2762 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8564bd9cc-frrtr_calico-apiserver(1b588b74-5695-4ab5-8362-dfbefb32b123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8564bd9cc-frrtr_calico-apiserver(1b588b74-5695-4ab5-8362-dfbefb32b123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"267140fc5d861563a5069c5faa1e62c5742b37a3ee62c415eaaa7b54be9c9744\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8564bd9cc-frrtr" podUID="1b588b74-5695-4ab5-8362-dfbefb32b123" Jul 12 00:15:53.392910 systemd[1]: Created slice kubepods-besteffort-pode8b79a6e_adae_4c2c_97d6_d0ae42d1daf2.slice - libcontainer container kubepods-besteffort-pode8b79a6e_adae_4c2c_97d6_d0ae42d1daf2.slice. Jul 12 00:15:53.396014 containerd[1565]: time="2025-07-12T00:15:53.395947722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcrj6,Uid:e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2,Namespace:calico-system,Attempt:0,}" Jul 12 00:15:53.452503 containerd[1565]: time="2025-07-12T00:15:53.452436824Z" level=error msg="Failed to destroy network for sandbox \"ffaa4470033f2baf44976a760ac0ec1a76567131508cdb60adcd2edb480fcb47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:53.454144 containerd[1565]: time="2025-07-12T00:15:53.454092874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcrj6,Uid:e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ffaa4470033f2baf44976a760ac0ec1a76567131508cdb60adcd2edb480fcb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:53.454432 kubelet[2762]: E0712 00:15:53.454370 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaa4470033f2baf44976a760ac0ec1a76567131508cdb60adcd2edb480fcb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:15:53.454516 kubelet[2762]: E0712 00:15:53.454441 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaa4470033f2baf44976a760ac0ec1a76567131508cdb60adcd2edb480fcb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:53.454516 kubelet[2762]: E0712 00:15:53.454472 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaa4470033f2baf44976a760ac0ec1a76567131508cdb60adcd2edb480fcb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tcrj6" Jul 12 00:15:53.454594 kubelet[2762]: E0712 00:15:53.454524 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tcrj6_calico-system(e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-tcrj6_calico-system(e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffaa4470033f2baf44976a760ac0ec1a76567131508cdb60adcd2edb480fcb47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tcrj6" podUID="e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2" Jul 12 00:16:02.033488 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:42448.service - OpenSSH per-connection server daemon (10.0.0.1:42448). Jul 12 00:16:02.783629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963323804.mount: Deactivated successfully. Jul 12 00:16:02.936621 sshd[3875]: Accepted publickey for core from 10.0.0.1 port 42448 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:02.938420 sshd-session[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:02.948708 systemd-logind[1543]: New session 10 of user core. Jul 12 00:16:02.959193 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:16:03.303133 sshd[3877]: Connection closed by 10.0.0.1 port 42448 Jul 12 00:16:03.303742 sshd-session[3875]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:03.306753 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:42448.service: Deactivated successfully. Jul 12 00:16:03.309146 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:16:03.312583 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:16:03.314133 systemd-logind[1543]: Removed session 10. 
Jul 12 00:16:03.386385 containerd[1565]: time="2025-07-12T00:16:03.386327144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c8d84b4-n9rjr,Uid:8afc212d-432e-4273-981e-858c04dc7166,Namespace:calico-system,Attempt:0,}" Jul 12 00:16:03.780195 containerd[1565]: time="2025-07-12T00:16:03.780139104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:03.781828 containerd[1565]: time="2025-07-12T00:16:03.781784863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 12 00:16:03.783471 containerd[1565]: time="2025-07-12T00:16:03.783447815Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:03.785753 containerd[1565]: time="2025-07-12T00:16:03.785705442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:03.786310 containerd[1565]: time="2025-07-12T00:16:03.786264922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.298186464s" Jul 12 00:16:03.786310 containerd[1565]: time="2025-07-12T00:16:03.786311099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 12 00:16:03.818278 containerd[1565]: time="2025-07-12T00:16:03.817752656Z" level=info 
msg="CreateContainer within sandbox \"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:16:03.833391 containerd[1565]: time="2025-07-12T00:16:03.833333441Z" level=info msg="Container 6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:03.837360 containerd[1565]: time="2025-07-12T00:16:03.837303433Z" level=error msg="Failed to destroy network for sandbox \"105cef5de5d4f9c0e9fc74af3d63cda1855cb98bf7b9a770210e681357c7a364\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:16:03.840495 systemd[1]: run-netns-cni\x2d6f969543\x2db685\x2dbb73\x2db2ba\x2dea61c2142981.mount: Deactivated successfully. Jul 12 00:16:03.852626 containerd[1565]: time="2025-07-12T00:16:03.852556705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c8d84b4-n9rjr,Uid:8afc212d-432e-4273-981e-858c04dc7166,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"105cef5de5d4f9c0e9fc74af3d63cda1855cb98bf7b9a770210e681357c7a364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:16:03.853901 kubelet[2762]: E0712 00:16:03.853848 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"105cef5de5d4f9c0e9fc74af3d63cda1855cb98bf7b9a770210e681357c7a364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:16:03.854417 kubelet[2762]: E0712 
00:16:03.853931 2762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"105cef5de5d4f9c0e9fc74af3d63cda1855cb98bf7b9a770210e681357c7a364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" Jul 12 00:16:03.854417 kubelet[2762]: E0712 00:16:03.853956 2762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"105cef5de5d4f9c0e9fc74af3d63cda1855cb98bf7b9a770210e681357c7a364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" Jul 12 00:16:03.854417 kubelet[2762]: E0712 00:16:03.854047 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c7c8d84b4-n9rjr_calico-system(8afc212d-432e-4273-981e-858c04dc7166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c7c8d84b4-n9rjr_calico-system(8afc212d-432e-4273-981e-858c04dc7166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"105cef5de5d4f9c0e9fc74af3d63cda1855cb98bf7b9a770210e681357c7a364\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" podUID="8afc212d-432e-4273-981e-858c04dc7166" Jul 12 00:16:03.862213 containerd[1565]: time="2025-07-12T00:16:03.862156003Z" level=info msg="CreateContainer within sandbox 
\"7e1e5a297ad159273ef74f5090be23d78181a54621c598bd2f1ee918927168ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\"" Jul 12 00:16:03.863833 containerd[1565]: time="2025-07-12T00:16:03.862659588Z" level=info msg="StartContainer for \"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\"" Jul 12 00:16:03.864515 containerd[1565]: time="2025-07-12T00:16:03.864475326Z" level=info msg="connecting to shim 6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97" address="unix:///run/containerd/s/5e44c1bcfb1fe6567835aa469aa3b7edbc12aac5a0f6d53171ebb9369f794e59" protocol=ttrpc version=3 Jul 12 00:16:03.994288 systemd[1]: Started cri-containerd-6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97.scope - libcontainer container 6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97. Jul 12 00:16:04.076293 containerd[1565]: time="2025-07-12T00:16:04.076072147Z" level=info msg="StartContainer for \"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\" returns successfully" Jul 12 00:16:04.145030 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:16:04.145194 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Jul 12 00:16:04.335524 kubelet[2762]: I0712 00:16:04.335327 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-backend-key-pair\") pod \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\" (UID: \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\") " Jul 12 00:16:04.335524 kubelet[2762]: I0712 00:16:04.335400 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctcf\" (UniqueName: \"kubernetes.io/projected/aae18ec2-873b-4e28-bd0a-dfab20f1704b-kube-api-access-2ctcf\") pod \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\" (UID: \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\") " Jul 12 00:16:04.335524 kubelet[2762]: I0712 00:16:04.335425 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-ca-bundle\") pod \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\" (UID: \"aae18ec2-873b-4e28-bd0a-dfab20f1704b\") " Jul 12 00:16:04.336676 kubelet[2762]: I0712 00:16:04.336618 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aae18ec2-873b-4e28-bd0a-dfab20f1704b" (UID: "aae18ec2-873b-4e28-bd0a-dfab20f1704b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:16:04.340556 kubelet[2762]: I0712 00:16:04.340500 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae18ec2-873b-4e28-bd0a-dfab20f1704b-kube-api-access-2ctcf" (OuterVolumeSpecName: "kube-api-access-2ctcf") pod "aae18ec2-873b-4e28-bd0a-dfab20f1704b" (UID: "aae18ec2-873b-4e28-bd0a-dfab20f1704b"). InnerVolumeSpecName "kube-api-access-2ctcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:16:04.341756 kubelet[2762]: I0712 00:16:04.341302 2762 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aae18ec2-873b-4e28-bd0a-dfab20f1704b" (UID: "aae18ec2-873b-4e28-bd0a-dfab20f1704b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:16:04.387007 containerd[1565]: time="2025-07-12T00:16:04.386645797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-frrtr,Uid:1b588b74-5695-4ab5-8362-dfbefb32b123,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:16:04.394714 systemd[1]: Removed slice kubepods-besteffort-podaae18ec2_873b_4e28_bd0a_dfab20f1704b.slice - libcontainer container kubepods-besteffort-podaae18ec2_873b_4e28_bd0a_dfab20f1704b.slice. Jul 12 00:16:04.435955 kubelet[2762]: I0712 00:16:04.435896 2762 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 00:16:04.435955 kubelet[2762]: I0712 00:16:04.435926 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ctcf\" (UniqueName: \"kubernetes.io/projected/aae18ec2-873b-4e28-bd0a-dfab20f1704b-kube-api-access-2ctcf\") on node \"localhost\" DevicePath \"\"" Jul 12 00:16:04.435955 kubelet[2762]: I0712 00:16:04.435935 2762 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae18ec2-873b-4e28-bd0a-dfab20f1704b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 00:16:04.452654 kubelet[2762]: E0712 00:16:04.452595 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:04.716488 kubelet[2762]: E0712 00:16:04.716397 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:04.742125 kubelet[2762]: I0712 00:16:04.741921 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p78mq" podStartSLOduration=1.710026997 podStartE2EDuration="23.741865334s" podCreationTimestamp="2025-07-12 00:15:41 +0000 UTC" firstStartedPulling="2025-07-12 00:15:41.755619194 +0000 UTC m=+21.478599796" lastFinishedPulling="2025-07-12 00:16:03.787457531 +0000 UTC m=+43.510438133" observedRunningTime="2025-07-12 00:16:04.740386468 +0000 UTC m=+44.463367070" watchObservedRunningTime="2025-07-12 00:16:04.741865334 +0000 UTC m=+44.464845936" Jul 12 00:16:04.780130 systemd-networkd[1476]: calidff13512c37: Link UP Jul 12 00:16:04.780881 systemd-networkd[1476]: calidff13512c37: Gained carrier Jul 12 00:16:04.795740 systemd[1]: var-lib-kubelet-pods-aae18ec2\x2d873b\x2d4e28\x2dbd0a\x2ddfab20f1704b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ctcf.mount: Deactivated successfully. Jul 12 00:16:04.796173 systemd[1]: var-lib-kubelet-pods-aae18ec2\x2d873b\x2d4e28\x2dbd0a\x2ddfab20f1704b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 12 00:16:04.805286 containerd[1565]: 2025-07-12 00:16:04.510 [INFO][3985] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:16:04.805286 containerd[1565]: 2025-07-12 00:16:04.543 [INFO][3985] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0 calico-apiserver-8564bd9cc- calico-apiserver 1b588b74-5695-4ab5-8362-dfbefb32b123 896 0 2025-07-12 00:15:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8564bd9cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8564bd9cc-frrtr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidff13512c37 [] [] }} ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-" Jul 12 00:16:04.805286 containerd[1565]: 2025-07-12 00:16:04.543 [INFO][3985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.805286 containerd[1565]: 2025-07-12 00:16:04.718 [INFO][4002] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" HandleID="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Workload="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.722 [INFO][4002] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" HandleID="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Workload="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8564bd9cc-frrtr", "timestamp":"2025-07-12 00:16:04.718650075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.722 [INFO][4002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.722 [INFO][4002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.722 [INFO][4002] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.732 [INFO][4002] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" host="localhost" Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.743 [INFO][4002] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.751 [INFO][4002] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.754 [INFO][4002] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.756 [INFO][4002] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:04.805587 containerd[1565]: 2025-07-12 00:16:04.756 [INFO][4002] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" host="localhost" Jul 12 00:16:04.805957 containerd[1565]: 2025-07-12 00:16:04.757 [INFO][4002] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000 Jul 12 00:16:04.805957 containerd[1565]: 2025-07-12 00:16:04.761 [INFO][4002] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" host="localhost" Jul 12 00:16:04.805957 containerd[1565]: 2025-07-12 00:16:04.767 [INFO][4002] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" host="localhost" Jul 12 00:16:04.805957 containerd[1565]: 2025-07-12 00:16:04.767 [INFO][4002] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" host="localhost" Jul 12 00:16:04.805957 containerd[1565]: 2025-07-12 00:16:04.767 [INFO][4002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:16:04.805957 containerd[1565]: 2025-07-12 00:16:04.767 [INFO][4002] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" HandleID="k8s-pod-network.ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Workload="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.806107 containerd[1565]: 2025-07-12 00:16:04.771 [INFO][3985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0", GenerateName:"calico-apiserver-8564bd9cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b588b74-5695-4ab5-8362-dfbefb32b123", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8564bd9cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8564bd9cc-frrtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidff13512c37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:04.806163 containerd[1565]: 2025-07-12 00:16:04.771 [INFO][3985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.806163 containerd[1565]: 2025-07-12 00:16:04.771 [INFO][3985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidff13512c37 ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.806163 containerd[1565]: 2025-07-12 00:16:04.781 [INFO][3985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.806232 containerd[1565]: 2025-07-12 00:16:04.782 [INFO][3985] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0", GenerateName:"calico-apiserver-8564bd9cc-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"1b588b74-5695-4ab5-8362-dfbefb32b123", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8564bd9cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000", Pod:"calico-apiserver-8564bd9cc-frrtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidff13512c37", MAC:"96:a2:ac:13:e9:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:04.806294 containerd[1565]: 2025-07-12 00:16:04.796 [INFO][3985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-frrtr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--frrtr-eth0" Jul 12 00:16:04.813106 systemd[1]: Created slice kubepods-besteffort-pod6cab07a0_a39b_4891_b9ff_eb2e0a73ad74.slice - libcontainer container kubepods-besteffort-pod6cab07a0_a39b_4891_b9ff_eb2e0a73ad74.slice. 
Jul 12 00:16:04.842092 kubelet[2762]: I0712 00:16:04.841939 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab07a0-a39b-4891-b9ff-eb2e0a73ad74-whisker-ca-bundle\") pod \"whisker-864ffdf774-qlv67\" (UID: \"6cab07a0-a39b-4891-b9ff-eb2e0a73ad74\") " pod="calico-system/whisker-864ffdf774-qlv67" Jul 12 00:16:04.842092 kubelet[2762]: I0712 00:16:04.842008 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cab07a0-a39b-4891-b9ff-eb2e0a73ad74-whisker-backend-key-pair\") pod \"whisker-864ffdf774-qlv67\" (UID: \"6cab07a0-a39b-4891-b9ff-eb2e0a73ad74\") " pod="calico-system/whisker-864ffdf774-qlv67" Jul 12 00:16:04.842092 kubelet[2762]: I0712 00:16:04.842028 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jm5\" (UniqueName: \"kubernetes.io/projected/6cab07a0-a39b-4891-b9ff-eb2e0a73ad74-kube-api-access-l6jm5\") pod \"whisker-864ffdf774-qlv67\" (UID: \"6cab07a0-a39b-4891-b9ff-eb2e0a73ad74\") " pod="calico-system/whisker-864ffdf774-qlv67" Jul 12 00:16:04.916305 containerd[1565]: time="2025-07-12T00:16:04.916254375Z" level=info msg="connecting to shim ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000" address="unix:///run/containerd/s/e04031e2cadd4757c39eaf08fa55c06a3160243f18c987be3f4979cfaf71d7d6" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:04.948432 systemd[1]: Started cri-containerd-ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000.scope - libcontainer container ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000. 
Jul 12 00:16:04.968312 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:05.004385 containerd[1565]: time="2025-07-12T00:16:05.004340240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-frrtr,Uid:1b588b74-5695-4ab5-8362-dfbefb32b123,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000\"" Jul 12 00:16:05.009482 containerd[1565]: time="2025-07-12T00:16:05.009451025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:16:05.117951 containerd[1565]: time="2025-07-12T00:16:05.117897292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864ffdf774-qlv67,Uid:6cab07a0-a39b-4891-b9ff-eb2e0a73ad74,Namespace:calico-system,Attempt:0,}" Jul 12 00:16:05.309513 systemd-networkd[1476]: calif95bb303bf5: Link UP Jul 12 00:16:05.310557 systemd-networkd[1476]: calif95bb303bf5: Gained carrier Jul 12 00:16:05.355375 containerd[1565]: 2025-07-12 00:16:05.140 [INFO][4081] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:16:05.355375 containerd[1565]: 2025-07-12 00:16:05.150 [INFO][4081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--864ffdf774--qlv67-eth0 whisker-864ffdf774- calico-system 6cab07a0-a39b-4891-b9ff-eb2e0a73ad74 1020 0 2025-07-12 00:16:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:864ffdf774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-864ffdf774-qlv67 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif95bb303bf5 [] [] }} ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" 
WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-" Jul 12 00:16:05.355375 containerd[1565]: 2025-07-12 00:16:05.150 [INFO][4081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.355375 containerd[1565]: 2025-07-12 00:16:05.178 [INFO][4090] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" HandleID="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Workload="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.179 [INFO][4090] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" HandleID="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Workload="localhost-k8s-whisker--864ffdf774--qlv67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-864ffdf774-qlv67", "timestamp":"2025-07-12 00:16:05.178888073 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.179 [INFO][4090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.179 [INFO][4090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.179 [INFO][4090] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.187 [INFO][4090] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" host="localhost" Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.193 [INFO][4090] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.199 [INFO][4090] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.200 [INFO][4090] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.203 [INFO][4090] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:05.355742 containerd[1565]: 2025-07-12 00:16:05.203 [INFO][4090] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" host="localhost" Jul 12 00:16:05.356098 containerd[1565]: 2025-07-12 00:16:05.205 [INFO][4090] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca Jul 12 00:16:05.356098 containerd[1565]: 2025-07-12 00:16:05.223 [INFO][4090] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" host="localhost" Jul 12 00:16:05.356098 containerd[1565]: 2025-07-12 00:16:05.304 [INFO][4090] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" host="localhost" Jul 12 00:16:05.356098 containerd[1565]: 2025-07-12 00:16:05.304 [INFO][4090] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" host="localhost" Jul 12 00:16:05.356098 containerd[1565]: 2025-07-12 00:16:05.304 [INFO][4090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:16:05.356098 containerd[1565]: 2025-07-12 00:16:05.304 [INFO][4090] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" HandleID="k8s-pod-network.ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Workload="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.356291 containerd[1565]: 2025-07-12 00:16:05.307 [INFO][4081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--864ffdf774--qlv67-eth0", GenerateName:"whisker-864ffdf774-", Namespace:"calico-system", SelfLink:"", UID:"6cab07a0-a39b-4891-b9ff-eb2e0a73ad74", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"864ffdf774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-864ffdf774-qlv67", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif95bb303bf5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:05.356291 containerd[1565]: 2025-07-12 00:16:05.307 [INFO][4081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.356399 containerd[1565]: 2025-07-12 00:16:05.307 [INFO][4081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif95bb303bf5 ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.356399 containerd[1565]: 2025-07-12 00:16:05.310 [INFO][4081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.356460 containerd[1565]: 2025-07-12 00:16:05.310 [INFO][4081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" 
WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--864ffdf774--qlv67-eth0", GenerateName:"whisker-864ffdf774-", Namespace:"calico-system", SelfLink:"", UID:"6cab07a0-a39b-4891-b9ff-eb2e0a73ad74", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 16, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"864ffdf774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca", Pod:"whisker-864ffdf774-qlv67", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif95bb303bf5", MAC:"ae:8c:45:37:14:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:05.356555 containerd[1565]: 2025-07-12 00:16:05.351 [INFO][4081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" Namespace="calico-system" Pod="whisker-864ffdf774-qlv67" WorkloadEndpoint="localhost-k8s-whisker--864ffdf774--qlv67-eth0" Jul 12 00:16:05.391065 containerd[1565]: time="2025-07-12T00:16:05.390955076Z" level=info msg="connecting to shim 
ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca" address="unix:///run/containerd/s/e9921aced444ae38c2613971a8617db029bc699bf19efaf2cc6b179e1ec00b98" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:05.432301 systemd[1]: Started cri-containerd-ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca.scope - libcontainer container ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca. Jul 12 00:16:05.446762 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:05.479553 containerd[1565]: time="2025-07-12T00:16:05.479506520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864ffdf774-qlv67,Uid:6cab07a0-a39b-4891-b9ff-eb2e0a73ad74,Namespace:calico-system,Attempt:0,} returns sandbox id \"ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca\"" Jul 12 00:16:05.950333 systemd-networkd[1476]: calidff13512c37: Gained IPv6LL Jul 12 00:16:06.152924 systemd-networkd[1476]: vxlan.calico: Link UP Jul 12 00:16:06.152936 systemd-networkd[1476]: vxlan.calico: Gained carrier Jul 12 00:16:06.178547 kubelet[2762]: I0712 00:16:06.178508 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:16:06.385369 containerd[1565]: time="2025-07-12T00:16:06.385220497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\" id:\"8a907ed3f01fcf3109d34cb900d6d869041b03341cbb6850f7e5c4b0a1a89203\" pid:4330 exit_status:1 exited_at:{seconds:1752279366 nanos:384749855}" Jul 12 00:16:06.387040 kubelet[2762]: E0712 00:16:06.385785 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:06.387522 containerd[1565]: time="2025-07-12T00:16:06.387469065Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-xt68g,Uid:04092512-a49c-468e-9eaf-21773edfd62d,Namespace:kube-system,Attempt:0,}" Jul 12 00:16:06.442955 kubelet[2762]: I0712 00:16:06.442912 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae18ec2-873b-4e28-bd0a-dfab20f1704b" path="/var/lib/kubelet/pods/aae18ec2-873b-4e28-bd0a-dfab20f1704b/volumes" Jul 12 00:16:06.534951 containerd[1565]: time="2025-07-12T00:16:06.534897720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\" id:\"2f391863a8ccc0cab73e868f34b6d050be3c0c1c790c64b7a094efcf7c2db92c\" pid:4356 exit_status:1 exited_at:{seconds:1752279366 nanos:533065459}" Jul 12 00:16:06.588771 systemd-networkd[1476]: cali4ae4dbcb8f0: Link UP Jul 12 00:16:06.589590 systemd-networkd[1476]: cali4ae4dbcb8f0: Gained carrier Jul 12 00:16:06.605568 containerd[1565]: 2025-07-12 00:16:06.497 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xt68g-eth0 coredns-674b8bbfcf- kube-system 04092512-a49c-468e-9eaf-21773edfd62d 891 0 2025-07-12 00:15:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xt68g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ae4dbcb8f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-" Jul 12 00:16:06.605568 containerd[1565]: 2025-07-12 00:16:06.497 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.605568 containerd[1565]: 2025-07-12 00:16:06.540 [INFO][4397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" HandleID="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Workload="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.541 [INFO][4397] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" HandleID="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Workload="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xt68g", "timestamp":"2025-07-12 00:16:06.540920943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.541 [INFO][4397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.541 [INFO][4397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.541 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.551 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" host="localhost" Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.559 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.564 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.566 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.568 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:06.605758 containerd[1565]: 2025-07-12 00:16:06.568 [INFO][4397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" host="localhost" Jul 12 00:16:06.606040 containerd[1565]: 2025-07-12 00:16:06.569 [INFO][4397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce Jul 12 00:16:06.606040 containerd[1565]: 2025-07-12 00:16:06.574 [INFO][4397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" host="localhost" Jul 12 00:16:06.606040 containerd[1565]: 2025-07-12 00:16:06.580 [INFO][4397] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" host="localhost" Jul 12 00:16:06.606040 containerd[1565]: 2025-07-12 00:16:06.580 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" host="localhost" Jul 12 00:16:06.606040 containerd[1565]: 2025-07-12 00:16:06.580 [INFO][4397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:16:06.606040 containerd[1565]: 2025-07-12 00:16:06.580 [INFO][4397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" HandleID="k8s-pod-network.92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Workload="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.606163 containerd[1565]: 2025-07-12 00:16:06.586 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xt68g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"04092512-a49c-468e-9eaf-21773edfd62d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xt68g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ae4dbcb8f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:06.606244 containerd[1565]: 2025-07-12 00:16:06.586 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.606244 containerd[1565]: 2025-07-12 00:16:06.586 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ae4dbcb8f0 ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.606244 containerd[1565]: 2025-07-12 00:16:06.590 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.606315 containerd[1565]: 2025-07-12 00:16:06.591 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xt68g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"04092512-a49c-468e-9eaf-21773edfd62d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce", Pod:"coredns-674b8bbfcf-xt68g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ae4dbcb8f0", MAC:"ba:45:ab:a8:e7:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:06.606315 containerd[1565]: 2025-07-12 00:16:06.601 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" Namespace="kube-system" Pod="coredns-674b8bbfcf-xt68g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xt68g-eth0" Jul 12 00:16:06.637775 containerd[1565]: time="2025-07-12T00:16:06.637614321Z" level=info msg="connecting to shim 92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce" address="unix:///run/containerd/s/7ee53170b22dcd387cfa5bf70ccd7d2ec36da15250281764823e6da45ca3d170" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:06.678162 systemd[1]: Started cri-containerd-92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce.scope - libcontainer container 92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce. 
Jul 12 00:16:06.696774 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:06.734333 containerd[1565]: time="2025-07-12T00:16:06.734273873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xt68g,Uid:04092512-a49c-468e-9eaf-21773edfd62d,Namespace:kube-system,Attempt:0,} returns sandbox id \"92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce\"" Jul 12 00:16:06.735345 kubelet[2762]: E0712 00:16:06.735305 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:06.743949 containerd[1565]: time="2025-07-12T00:16:06.743899206Z" level=info msg="CreateContainer within sandbox \"92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:16:06.765585 containerd[1565]: time="2025-07-12T00:16:06.765501834Z" level=info msg="Container 7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:06.777312 containerd[1565]: time="2025-07-12T00:16:06.777263999Z" level=info msg="CreateContainer within sandbox \"92bfcf3b426c7012d5ccfbb81f7277d0c2f4183bcf0ffcc8222aa33e70ba15ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f\"" Jul 12 00:16:06.777918 containerd[1565]: time="2025-07-12T00:16:06.777880354Z" level=info msg="StartContainer for \"7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f\"" Jul 12 00:16:06.779022 containerd[1565]: time="2025-07-12T00:16:06.778996748Z" level=info msg="connecting to shim 7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f" address="unix:///run/containerd/s/7ee53170b22dcd387cfa5bf70ccd7d2ec36da15250281764823e6da45ca3d170" protocol=ttrpc version=3 
Jul 12 00:16:06.796657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928292049.mount: Deactivated successfully. Jul 12 00:16:06.810212 systemd[1]: Started cri-containerd-7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f.scope - libcontainer container 7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f. Jul 12 00:16:06.857333 containerd[1565]: time="2025-07-12T00:16:06.857284252Z" level=info msg="StartContainer for \"7697a2877755abfcacab2db17d0c307f17f8bd13e175882ed40713d6e4b0170f\" returns successfully" Jul 12 00:16:06.910163 systemd-networkd[1476]: calif95bb303bf5: Gained IPv6LL Jul 12 00:16:07.385826 kubelet[2762]: E0712 00:16:07.385494 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:07.386289 containerd[1565]: time="2025-07-12T00:16:07.385597594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-hp9rg,Uid:db7a4da8-0407-419f-8801-eebf7ffb5cf2,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:16:07.386728 containerd[1565]: time="2025-07-12T00:16:07.386691933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z6fh,Uid:f8bf9582-cbe5-456f-9dad-145a35bef4ab,Namespace:kube-system,Attempt:0,}" Jul 12 00:16:07.387000 containerd[1565]: time="2025-07-12T00:16:07.386938540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-wz64p,Uid:230d3ea2-c732-406e-9ef2-32f2ab376115,Namespace:calico-system,Attempt:0,}" Jul 12 00:16:07.542942 systemd-networkd[1476]: calib6a2df2c2e2: Link UP Jul 12 00:16:07.543938 systemd-networkd[1476]: calib6a2df2c2e2: Gained carrier Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.444 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0 coredns-674b8bbfcf- 
kube-system f8bf9582-cbe5-456f-9dad-145a35bef4ab 889 0 2025-07-12 00:15:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7z6fh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6a2df2c2e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.444 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.496 [INFO][4569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" HandleID="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Workload="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.496 [INFO][4569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" HandleID="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Workload="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003431b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7z6fh", "timestamp":"2025-07-12 00:16:07.496277286 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.496 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.496 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.496 [INFO][4569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.508 [INFO][4569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.515 [INFO][4569] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.519 [INFO][4569] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.521 [INFO][4569] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.522 [INFO][4569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.522 [INFO][4569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.524 [INFO][4569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea Jul 12 00:16:07.568102 
containerd[1565]: 2025-07-12 00:16:07.528 [INFO][4569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.533 [INFO][4569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.533 [INFO][4569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" host="localhost" Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.533 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:16:07.568102 containerd[1565]: 2025-07-12 00:16:07.533 [INFO][4569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" HandleID="k8s-pod-network.9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Workload="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.569281 containerd[1565]: 2025-07-12 00:16:07.538 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f8bf9582-cbe5-456f-9dad-145a35bef4ab", ResourceVersion:"889", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7z6fh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6a2df2c2e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:07.569281 containerd[1565]: 2025-07-12 00:16:07.538 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.569281 containerd[1565]: 2025-07-12 00:16:07.538 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6a2df2c2e2 ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.569281 containerd[1565]: 2025-07-12 00:16:07.545 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.569281 containerd[1565]: 2025-07-12 00:16:07.546 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f8bf9582-cbe5-456f-9dad-145a35bef4ab", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea", Pod:"coredns-674b8bbfcf-7z6fh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6a2df2c2e2", MAC:"be:35:52:2b:05:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:07.569281 containerd[1565]: 2025-07-12 00:16:07.559 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" Namespace="kube-system" Pod="coredns-674b8bbfcf-7z6fh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7z6fh-eth0" Jul 12 00:16:07.679192 systemd-networkd[1476]: vxlan.calico: Gained IPv6LL Jul 12 00:16:07.689603 containerd[1565]: time="2025-07-12T00:16:07.689531433Z" level=info msg="connecting to shim 9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea" address="unix:///run/containerd/s/26c7bda8f2b0961f6b658813229314e09ee271e6c6d5315a8aa8056e9650c7c3" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:07.727149 systemd[1]: Started cri-containerd-9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea.scope - libcontainer container 9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea. 
Jul 12 00:16:07.742129 systemd-networkd[1476]: cali4ae4dbcb8f0: Gained IPv6LL Jul 12 00:16:07.747709 kubelet[2762]: E0712 00:16:07.747605 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:07.749875 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:07.903696 containerd[1565]: time="2025-07-12T00:16:07.903632635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7z6fh,Uid:f8bf9582-cbe5-456f-9dad-145a35bef4ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea\"" Jul 12 00:16:07.904536 kubelet[2762]: E0712 00:16:07.904494 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:07.943368 kubelet[2762]: I0712 00:16:07.943106 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xt68g" podStartSLOduration=40.943037447 podStartE2EDuration="40.943037447s" podCreationTimestamp="2025-07-12 00:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:16:07.942711886 +0000 UTC m=+47.665692488" watchObservedRunningTime="2025-07-12 00:16:07.943037447 +0000 UTC m=+47.666018049" Jul 12 00:16:07.954954 systemd-networkd[1476]: cali75f993636e0: Link UP Jul 12 00:16:07.955213 systemd-networkd[1476]: cali75f993636e0: Gained carrier Jul 12 00:16:07.965443 containerd[1565]: time="2025-07-12T00:16:07.965382925Z" level=info msg="CreateContainer within sandbox \"9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 
00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.474 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--wz64p-eth0 goldmane-768f4c5c69- calico-system 230d3ea2-c732-406e-9ef2-32f2ab376115 894 0 2025-07-12 00:15:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-wz64p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali75f993636e0 [] [] }} ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.474 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.514 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" HandleID="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Workload="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.515 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" HandleID="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Workload="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-wz64p", "timestamp":"2025-07-12 00:16:07.514745578 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.515 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.534 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.534 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.653 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.663 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.671 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.674 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.681 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.682 [INFO][4586] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" 
host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.700 [INFO][4586] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264 Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.745 [INFO][4586] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.942 [INFO][4586] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.942 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" host="localhost" Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.943 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:16:08.005323 containerd[1565]: 2025-07-12 00:16:07.943 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" HandleID="k8s-pod-network.490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Workload="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.006673 containerd[1565]: 2025-07-12 00:16:07.947 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--wz64p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"230d3ea2-c732-406e-9ef2-32f2ab376115", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-wz64p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75f993636e0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:08.006673 containerd[1565]: 2025-07-12 00:16:07.948 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.006673 containerd[1565]: 2025-07-12 00:16:07.948 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75f993636e0 ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.006673 containerd[1565]: 2025-07-12 00:16:07.951 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.006673 containerd[1565]: 2025-07-12 00:16:07.951 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--wz64p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"230d3ea2-c732-406e-9ef2-32f2ab376115", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264", Pod:"goldmane-768f4c5c69-wz64p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75f993636e0", MAC:"a2:71:3c:04:a3:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:08.006673 containerd[1565]: 2025-07-12 00:16:07.972 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" Namespace="calico-system" Pod="goldmane-768f4c5c69-wz64p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--wz64p-eth0" Jul 12 00:16:08.011661 containerd[1565]: time="2025-07-12T00:16:08.011041338Z" level=info msg="Container 19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:08.013508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154550107.mount: Deactivated successfully. 
Jul 12 00:16:08.026658 containerd[1565]: time="2025-07-12T00:16:08.026623605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:08.028342 containerd[1565]: time="2025-07-12T00:16:08.028320999Z" level=info msg="CreateContainer within sandbox \"9cfa13d42d577e1b9b51be8cb1e16c5bced70d1d6f54a9a096bc3c661df79eea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd\"" Jul 12 00:16:08.029390 containerd[1565]: time="2025-07-12T00:16:08.029359448Z" level=info msg="StartContainer for \"19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd\"" Jul 12 00:16:08.030084 containerd[1565]: time="2025-07-12T00:16:08.030027270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 12 00:16:08.030568 containerd[1565]: time="2025-07-12T00:16:08.030392416Z" level=info msg="connecting to shim 19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd" address="unix:///run/containerd/s/26c7bda8f2b0961f6b658813229314e09ee271e6c6d5315a8aa8056e9650c7c3" protocol=ttrpc version=3 Jul 12 00:16:08.031541 containerd[1565]: time="2025-07-12T00:16:08.031399294Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:08.042378 containerd[1565]: time="2025-07-12T00:16:08.042322177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:08.044029 containerd[1565]: time="2025-07-12T00:16:08.043945216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.034456989s" Jul 12 00:16:08.044029 containerd[1565]: time="2025-07-12T00:16:08.044006295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 12 00:16:08.046121 containerd[1565]: time="2025-07-12T00:16:08.046092200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:16:08.057007 containerd[1565]: time="2025-07-12T00:16:08.056908787Z" level=info msg="CreateContainer within sandbox \"ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:16:08.064372 systemd[1]: Started cri-containerd-19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd.scope - libcontainer container 19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd. 
Jul 12 00:16:08.065013 systemd-networkd[1476]: calid63aa9e2bce: Link UP Jul 12 00:16:08.070929 systemd-networkd[1476]: calid63aa9e2bce: Gained carrier Jul 12 00:16:08.074110 containerd[1565]: time="2025-07-12T00:16:08.073960055Z" level=info msg="connecting to shim 490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264" address="unix:///run/containerd/s/5086b51d90ab7d751f4b93f4d434faa8ff20f0cc4116e5bb4c93e21d84f3ba57" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:08.074814 containerd[1565]: time="2025-07-12T00:16:08.074788348Z" level=info msg="Container 4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:08.086258 containerd[1565]: time="2025-07-12T00:16:08.086109802Z" level=info msg="CreateContainer within sandbox \"ec111e1a1db5e18fa347074ce7cc401381e4e6b57568eac6a4a79281d0c36000\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d\"" Jul 12 00:16:08.089412 containerd[1565]: time="2025-07-12T00:16:08.089349639Z" level=info msg="StartContainer for \"4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d\"" Jul 12 00:16:08.091074 containerd[1565]: time="2025-07-12T00:16:08.090943183Z" level=info msg="connecting to shim 4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d" address="unix:///run/containerd/s/e04031e2cadd4757c39eaf08fa55c06a3160243f18c987be3f4979cfaf71d7d6" protocol=ttrpc version=3 Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.461 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0 calico-apiserver-8564bd9cc- calico-apiserver db7a4da8-0407-419f-8801-eebf7ffb5cf2 893 0 2025-07-12 00:15:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8564bd9cc 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8564bd9cc-hp9rg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid63aa9e2bce [] [] }} ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.462 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.514 [INFO][4580] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" HandleID="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Workload="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.514 [INFO][4580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" HandleID="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Workload="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b6730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8564bd9cc-hp9rg", "timestamp":"2025-07-12 00:16:07.514295185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.514 [INFO][4580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.943 [INFO][4580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.945 [INFO][4580] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.973 [INFO][4580] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:07.990 [INFO][4580] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.015 [INFO][4580] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.017 [INFO][4580] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.022 [INFO][4580] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.022 [INFO][4580] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.025 [INFO][4580] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6 Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.031 [INFO][4580] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.039 [INFO][4580] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.039 [INFO][4580] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" host="localhost" Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.040 [INFO][4580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:16:08.105434 containerd[1565]: 2025-07-12 00:16:08.040 [INFO][4580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" HandleID="k8s-pod-network.2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Workload="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.115696 containerd[1565]: 2025-07-12 00:16:08.056 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0", GenerateName:"calico-apiserver-8564bd9cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"db7a4da8-0407-419f-8801-eebf7ffb5cf2", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 12, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8564bd9cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8564bd9cc-hp9rg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid63aa9e2bce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:08.115696 containerd[1565]: 2025-07-12 00:16:08.056 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.115696 containerd[1565]: 2025-07-12 00:16:08.056 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid63aa9e2bce ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.115696 containerd[1565]: 2025-07-12 00:16:08.078 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.115696 containerd[1565]: 2025-07-12 00:16:08.080 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0", GenerateName:"calico-apiserver-8564bd9cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"db7a4da8-0407-419f-8801-eebf7ffb5cf2", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8564bd9cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6", Pod:"calico-apiserver-8564bd9cc-hp9rg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calid63aa9e2bce", MAC:"f2:8e:b9:71:27:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:08.115696 containerd[1565]: 2025-07-12 00:16:08.095 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" Namespace="calico-apiserver" Pod="calico-apiserver-8564bd9cc-hp9rg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8564bd9cc--hp9rg-eth0" Jul 12 00:16:08.126266 systemd[1]: Started cri-containerd-490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264.scope - libcontainer container 490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264. Jul 12 00:16:08.132771 systemd[1]: Started cri-containerd-4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d.scope - libcontainer container 4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d. Jul 12 00:16:08.161476 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:08.319678 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:37340.service - OpenSSH per-connection server daemon (10.0.0.1:37340). Jul 12 00:16:08.471669 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 37340 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:08.473887 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:08.480014 systemd-logind[1543]: New session 11 of user core. Jul 12 00:16:08.487130 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:16:08.805255 sshd[4803]: Connection closed by 10.0.0.1 port 37340 Jul 12 00:16:08.805702 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:08.810999 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:37340.service: Deactivated successfully. 
Jul 12 00:16:08.813938 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:16:08.814765 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:16:08.816056 systemd-logind[1543]: Removed session 11. Jul 12 00:16:08.843924 containerd[1565]: time="2025-07-12T00:16:08.843681819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-wz64p,Uid:230d3ea2-c732-406e-9ef2-32f2ab376115,Namespace:calico-system,Attempt:0,} returns sandbox id \"490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264\"" Jul 12 00:16:08.843924 containerd[1565]: time="2025-07-12T00:16:08.843758197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcrj6,Uid:e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2,Namespace:calico-system,Attempt:0,}" Jul 12 00:16:08.843924 containerd[1565]: time="2025-07-12T00:16:08.843914128Z" level=info msg="StartContainer for \"19171e2a44d98e9b10f3ffc9a98b9f854633eb1ea5b30d05e66e67151e1610cd\" returns successfully" Jul 12 00:16:08.844557 containerd[1565]: time="2025-07-12T00:16:08.844533076Z" level=info msg="StartContainer for \"4e62401a64bcea2d4e5e1bad85ed63de1b7c8bf20b7d172838f053bcdb940d6d\" returns successfully" Jul 12 00:16:08.851367 kubelet[2762]: E0712 00:16:08.851329 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:08.851965 kubelet[2762]: E0712 00:16:08.851526 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:08.864962 kubelet[2762]: I0712 00:16:08.864842 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7z6fh" podStartSLOduration=41.864821377 podStartE2EDuration="41.864821377s" podCreationTimestamp="2025-07-12 00:15:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:16:08.864072388 +0000 UTC m=+48.587053020" watchObservedRunningTime="2025-07-12 00:16:08.864821377 +0000 UTC m=+48.587801979" Jul 12 00:16:08.900536 containerd[1565]: time="2025-07-12T00:16:08.900367427Z" level=info msg="connecting to shim 2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6" address="unix:///run/containerd/s/b3ec014304bcb86a861e9cf1248502717e0c033527406dd61d3376d787694f02" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:08.949518 systemd[1]: Started cri-containerd-2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6.scope - libcontainer container 2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6. Jul 12 00:16:08.963180 systemd-networkd[1476]: calib6a2df2c2e2: Gained IPv6LL Jul 12 00:16:08.984064 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:09.217023 containerd[1565]: time="2025-07-12T00:16:09.214309160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8564bd9cc-hp9rg,Uid:db7a4da8-0407-419f-8801-eebf7ffb5cf2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6\"" Jul 12 00:16:09.215106 systemd-networkd[1476]: cali75f993636e0: Gained IPv6LL Jul 12 00:16:09.219169 systemd-networkd[1476]: calid63aa9e2bce: Gained IPv6LL Jul 12 00:16:09.232028 systemd-networkd[1476]: cali1b6c1eb8a58: Link UP Jul 12 00:16:09.232915 systemd-networkd[1476]: cali1b6c1eb8a58: Gained carrier Jul 12 00:16:09.233675 containerd[1565]: time="2025-07-12T00:16:09.233634036Z" level=info msg="CreateContainer within sandbox \"2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 
00:16:08.940 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tcrj6-eth0 csi-node-driver- calico-system e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2 769 0 2025-07-12 00:15:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tcrj6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1b6c1eb8a58 [] [] }} ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:08.948 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.000 [INFO][4868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" HandleID="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Workload="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.001 [INFO][4868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" HandleID="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Workload="localhost-k8s-csi--node--driver--tcrj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc00004fc80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tcrj6", "timestamp":"2025-07-12 00:16:09.000698145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.001 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.001 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.001 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.013 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.019 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.027 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.030 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.033 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.033 [INFO][4868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" 
host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.035 [INFO][4868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.040 [INFO][4868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.212 [INFO][4868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.212 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" host="localhost" Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.212 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:16:09.252126 containerd[1565]: 2025-07-12 00:16:09.212 [INFO][4868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" HandleID="k8s-pod-network.e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Workload="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.252952 containerd[1565]: 2025-07-12 00:16:09.224 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tcrj6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tcrj6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b6c1eb8a58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:09.252952 containerd[1565]: 2025-07-12 00:16:09.224 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.252952 containerd[1565]: 2025-07-12 00:16:09.224 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b6c1eb8a58 ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.252952 containerd[1565]: 2025-07-12 00:16:09.235 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.252952 containerd[1565]: 2025-07-12 00:16:09.235 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tcrj6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 41, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d", Pod:"csi-node-driver-tcrj6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b6c1eb8a58", MAC:"a6:33:35:14:9a:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:09.252952 containerd[1565]: 2025-07-12 00:16:09.247 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" Namespace="calico-system" Pod="csi-node-driver-tcrj6" WorkloadEndpoint="localhost-k8s-csi--node--driver--tcrj6-eth0" Jul 12 00:16:09.268409 containerd[1565]: time="2025-07-12T00:16:09.267197237Z" level=info msg="Container ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:09.276311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249708274.mount: Deactivated successfully. 
Jul 12 00:16:09.309613 containerd[1565]: time="2025-07-12T00:16:09.309533675Z" level=info msg="CreateContainer within sandbox \"2a6e1e9d8a4c1a30c835077570d54d7fa9e1b70a6a05dd04c6b4580939a9bcd6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907\"" Jul 12 00:16:09.310392 containerd[1565]: time="2025-07-12T00:16:09.310350795Z" level=info msg="StartContainer for \"ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907\"" Jul 12 00:16:09.314482 containerd[1565]: time="2025-07-12T00:16:09.314416905Z" level=info msg="connecting to shim ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907" address="unix:///run/containerd/s/b3ec014304bcb86a861e9cf1248502717e0c033527406dd61d3376d787694f02" protocol=ttrpc version=3 Jul 12 00:16:09.331278 containerd[1565]: time="2025-07-12T00:16:09.331221650Z" level=info msg="connecting to shim e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d" address="unix:///run/containerd/s/a0ee5ae6d72ad4a99559904eefc260bf3c3accac8355c4598c0c1ffc72286232" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:09.352431 systemd[1]: Started cri-containerd-ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907.scope - libcontainer container ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907. Jul 12 00:16:09.362178 systemd[1]: Started cri-containerd-e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d.scope - libcontainer container e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d. 
Jul 12 00:16:09.377663 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:09.529800 containerd[1565]: time="2025-07-12T00:16:09.529156892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tcrj6,Uid:e8b79a6e-adae-4c2c-97d6-d0ae42d1daf2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d\"" Jul 12 00:16:09.531582 containerd[1565]: time="2025-07-12T00:16:09.531548615Z" level=info msg="StartContainer for \"ff77eefff7e9a2da30ac2e7d65abb09444d7645b48898831c02e19615a788907\" returns successfully" Jul 12 00:16:09.863927 kubelet[2762]: E0712 00:16:09.863703 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:09.872467 kubelet[2762]: I0712 00:16:09.872397 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8564bd9cc-hp9rg" podStartSLOduration=31.872383552 podStartE2EDuration="31.872383552s" podCreationTimestamp="2025-07-12 00:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:16:09.87213905 +0000 UTC m=+49.595119652" watchObservedRunningTime="2025-07-12 00:16:09.872383552 +0000 UTC m=+49.595364155" Jul 12 00:16:09.886052 kubelet[2762]: I0712 00:16:09.884955 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8564bd9cc-frrtr" podStartSLOduration=28.848597184 podStartE2EDuration="31.884935526s" podCreationTimestamp="2025-07-12 00:15:38 +0000 UTC" firstStartedPulling="2025-07-12 00:16:05.008921077 +0000 UTC m=+44.731901679" lastFinishedPulling="2025-07-12 00:16:08.045259419 +0000 UTC m=+47.768240021" observedRunningTime="2025-07-12 
00:16:09.884807568 +0000 UTC m=+49.607788191" watchObservedRunningTime="2025-07-12 00:16:09.884935526 +0000 UTC m=+49.607916128" Jul 12 00:16:10.814177 systemd-networkd[1476]: cali1b6c1eb8a58: Gained IPv6LL Jul 12 00:16:10.865539 kubelet[2762]: I0712 00:16:10.865483 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:16:10.866102 kubelet[2762]: E0712 00:16:10.865923 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:11.072133 containerd[1565]: time="2025-07-12T00:16:11.071988623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:11.079448 containerd[1565]: time="2025-07-12T00:16:11.079380289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 12 00:16:11.081286 containerd[1565]: time="2025-07-12T00:16:11.081090349Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:11.084528 containerd[1565]: time="2025-07-12T00:16:11.084447208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:11.085340 containerd[1565]: time="2025-07-12T00:16:11.085251750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.039124813s" Jul 12 
00:16:11.085340 containerd[1565]: time="2025-07-12T00:16:11.085335763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 12 00:16:11.087441 containerd[1565]: time="2025-07-12T00:16:11.087395508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:16:11.093513 containerd[1565]: time="2025-07-12T00:16:11.093461445Z" level=info msg="CreateContainer within sandbox \"ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:16:11.108081 containerd[1565]: time="2025-07-12T00:16:11.108029772Z" level=info msg="Container 99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:11.128091 containerd[1565]: time="2025-07-12T00:16:11.128040112Z" level=info msg="CreateContainer within sandbox \"ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f\"" Jul 12 00:16:11.129142 containerd[1565]: time="2025-07-12T00:16:11.129106591Z" level=info msg="StartContainer for \"99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f\"" Jul 12 00:16:11.130526 containerd[1565]: time="2025-07-12T00:16:11.130464331Z" level=info msg="connecting to shim 99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f" address="unix:///run/containerd/s/e9921aced444ae38c2613971a8617db029bc699bf19efaf2cc6b179e1ec00b98" protocol=ttrpc version=3 Jul 12 00:16:11.167224 systemd[1]: Started cri-containerd-99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f.scope - libcontainer container 99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f. 
Jul 12 00:16:11.235716 containerd[1565]: time="2025-07-12T00:16:11.235671871Z" level=info msg="StartContainer for \"99f0c04ce50f7fea66ac65d8b221791b78201ff487c244fa9ceabfafd40e5a0f\" returns successfully" Jul 12 00:16:13.675320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856901874.mount: Deactivated successfully. Jul 12 00:16:13.822609 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:37346.service - OpenSSH per-connection server daemon (10.0.0.1:37346). Jul 12 00:16:13.908042 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 37346 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:13.908211 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:13.916113 systemd-logind[1543]: New session 12 of user core. Jul 12 00:16:13.922233 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:16:14.077304 sshd[5041]: Connection closed by 10.0.0.1 port 37346 Jul 12 00:16:14.077904 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:14.083019 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:37346.service: Deactivated successfully. Jul 12 00:16:14.085505 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:16:14.086448 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:16:14.088151 systemd-logind[1543]: Removed session 12. 
Jul 12 00:16:14.727548 containerd[1565]: time="2025-07-12T00:16:14.727475585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:14.728399 containerd[1565]: time="2025-07-12T00:16:14.728342043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 12 00:16:14.729681 containerd[1565]: time="2025-07-12T00:16:14.729636025Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:14.731707 containerd[1565]: time="2025-07-12T00:16:14.731661355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:14.732290 containerd[1565]: time="2025-07-12T00:16:14.732260870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.644813262s" Jul 12 00:16:14.732352 containerd[1565]: time="2025-07-12T00:16:14.732290937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 12 00:16:14.733040 containerd[1565]: time="2025-07-12T00:16:14.733019231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:16:14.739211 containerd[1565]: time="2025-07-12T00:16:14.739162951Z" level=info msg="CreateContainer within sandbox \"490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:16:14.755916 containerd[1565]: time="2025-07-12T00:16:14.755865467Z" level=info msg="Container ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:14.766055 containerd[1565]: time="2025-07-12T00:16:14.765998841Z" level=info msg="CreateContainer within sandbox \"490c70b70d1bda7841e6c33a93ba12ad687a9ced67bbbe70c776e99262edd264\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b\"" Jul 12 00:16:14.766565 containerd[1565]: time="2025-07-12T00:16:14.766529312Z" level=info msg="StartContainer for \"ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b\"" Jul 12 00:16:14.767851 containerd[1565]: time="2025-07-12T00:16:14.767819186Z" level=info msg="connecting to shim ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b" address="unix:///run/containerd/s/5086b51d90ab7d751f4b93f4d434faa8ff20f0cc4116e5bb4c93e21d84f3ba57" protocol=ttrpc version=3 Jul 12 00:16:14.802297 systemd[1]: Started cri-containerd-ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b.scope - libcontainer container ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b. 
Jul 12 00:16:15.063437 containerd[1565]: time="2025-07-12T00:16:15.063182577Z" level=info msg="StartContainer for \"ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b\" returns successfully" Jul 12 00:16:16.092060 kubelet[2762]: I0712 00:16:16.091955 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-wz64p" podStartSLOduration=30.206039326 podStartE2EDuration="36.091933958s" podCreationTimestamp="2025-07-12 00:15:40 +0000 UTC" firstStartedPulling="2025-07-12 00:16:08.847004648 +0000 UTC m=+48.569985250" lastFinishedPulling="2025-07-12 00:16:14.73289928 +0000 UTC m=+54.455879882" observedRunningTime="2025-07-12 00:16:16.091557343 +0000 UTC m=+55.814537945" watchObservedRunningTime="2025-07-12 00:16:16.091933958 +0000 UTC m=+55.814914570" Jul 12 00:16:16.701732 containerd[1565]: time="2025-07-12T00:16:16.701671238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:16.702625 containerd[1565]: time="2025-07-12T00:16:16.702593401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 12 00:16:16.704019 containerd[1565]: time="2025-07-12T00:16:16.703923930Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:16.706274 containerd[1565]: time="2025-07-12T00:16:16.706213964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:16.706749 containerd[1565]: time="2025-07-12T00:16:16.706724646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.973685237s" Jul 12 00:16:16.706749 containerd[1565]: time="2025-07-12T00:16:16.706749624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 12 00:16:16.707936 containerd[1565]: time="2025-07-12T00:16:16.707895437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:16:16.713407 containerd[1565]: time="2025-07-12T00:16:16.713357392Z" level=info msg="CreateContainer within sandbox \"e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:16:16.731169 containerd[1565]: time="2025-07-12T00:16:16.731114531Z" level=info msg="Container 99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:16.742705 containerd[1565]: time="2025-07-12T00:16:16.742650726Z" level=info msg="CreateContainer within sandbox \"e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e\"" Jul 12 00:16:16.743632 containerd[1565]: time="2025-07-12T00:16:16.743602516Z" level=info msg="StartContainer for \"99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e\"" Jul 12 00:16:16.745216 containerd[1565]: time="2025-07-12T00:16:16.745192213Z" level=info msg="connecting to shim 99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e" address="unix:///run/containerd/s/a0ee5ae6d72ad4a99559904eefc260bf3c3accac8355c4598c0c1ffc72286232" protocol=ttrpc version=3 Jul 12 00:16:16.770236 
systemd[1]: Started cri-containerd-99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e.scope - libcontainer container 99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e. Jul 12 00:16:16.818837 containerd[1565]: time="2025-07-12T00:16:16.818767929Z" level=info msg="StartContainer for \"99bd8bbfeb18f6fec22b8b5403edec3a619c05c28c87cd09b7592a12f5c4a60e\" returns successfully" Jul 12 00:16:17.161092 containerd[1565]: time="2025-07-12T00:16:17.161049557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b\" id:\"3c70f6c0f17fce036ba081423920194e45b26666568d054ba01e5f63d7f6a6f1\" pid:5149 exited_at:{seconds:1752279377 nanos:160603620}" Jul 12 00:16:18.387006 containerd[1565]: time="2025-07-12T00:16:18.386588220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c8d84b4-n9rjr,Uid:8afc212d-432e-4273-981e-858c04dc7166,Namespace:calico-system,Attempt:0,}" Jul 12 00:16:18.540453 systemd-networkd[1476]: cali31ec993abeb: Link UP Jul 12 00:16:18.542471 systemd-networkd[1476]: cali31ec993abeb: Gained carrier Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.447 [INFO][5163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0 calico-kube-controllers-6c7c8d84b4- calico-system 8afc212d-432e-4273-981e-858c04dc7166 890 0 2025-07-12 00:15:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c7c8d84b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c7c8d84b4-n9rjr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali31ec993abeb [] [] }} 
ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.447 [INFO][5163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.483 [INFO][5177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" HandleID="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Workload="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.484 [INFO][5177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" HandleID="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Workload="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c7c8d84b4-n9rjr", "timestamp":"2025-07-12 00:16:18.483240988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.484 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.484 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.484 [INFO][5177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.494 [INFO][5177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.501 [INFO][5177] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.508 [INFO][5177] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.511 [INFO][5177] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.514 [INFO][5177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.514 [INFO][5177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.517 [INFO][5177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692 Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.523 [INFO][5177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.533 [INFO][5177] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.533 [INFO][5177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" host="localhost" Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.533 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:16:18.570108 containerd[1565]: 2025-07-12 00:16:18.533 [INFO][5177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" HandleID="k8s-pod-network.15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Workload="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:18.570858 containerd[1565]: 2025-07-12 00:16:18.537 [INFO][5163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0", GenerateName:"calico-kube-controllers-6c7c8d84b4-", Namespace:"calico-system", SelfLink:"", UID:"8afc212d-432e-4273-981e-858c04dc7166", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"6c7c8d84b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c7c8d84b4-n9rjr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31ec993abeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:18.570858 containerd[1565]: 2025-07-12 00:16:18.537 [INFO][5163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:18.570858 containerd[1565]: 2025-07-12 00:16:18.537 [INFO][5163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31ec993abeb ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:18.570858 containerd[1565]: 2025-07-12 00:16:18.542 [INFO][5163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:18.570858 containerd[1565]: 2025-07-12 00:16:18.542 [INFO][5163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0", GenerateName:"calico-kube-controllers-6c7c8d84b4-", Namespace:"calico-system", SelfLink:"", UID:"8afc212d-432e-4273-981e-858c04dc7166", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 15, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c7c8d84b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692", Pod:"calico-kube-controllers-6c7c8d84b4-n9rjr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31ec993abeb", MAC:"76:16:be:d3:b1:db", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:16:18.570858 containerd[1565]: 2025-07-12 00:16:18.559 [INFO][5163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" Namespace="calico-system" Pod="calico-kube-controllers-6c7c8d84b4-n9rjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c7c8d84b4--n9rjr-eth0" Jul 12 00:16:19.095362 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:60346.service - OpenSSH per-connection server daemon (10.0.0.1:60346). Jul 12 00:16:19.176933 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 60346 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:19.178714 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:19.183509 systemd-logind[1543]: New session 13 of user core. Jul 12 00:16:19.194159 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:16:19.599664 sshd[5200]: Connection closed by 10.0.0.1 port 60346 Jul 12 00:16:19.600184 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:19.616134 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:60346.service: Deactivated successfully. Jul 12 00:16:19.618352 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:16:19.619154 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:16:19.622298 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:60356.service - OpenSSH per-connection server daemon (10.0.0.1:60356). Jul 12 00:16:19.623762 systemd-logind[1543]: Removed session 13. 
Jul 12 00:16:19.675394 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 60356 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:19.676776 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:19.681380 systemd-logind[1543]: New session 14 of user core. Jul 12 00:16:19.690127 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:16:20.094176 systemd-networkd[1476]: cali31ec993abeb: Gained IPv6LL Jul 12 00:16:20.157172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356799356.mount: Deactivated successfully. Jul 12 00:16:20.391205 sshd[5216]: Connection closed by 10.0.0.1 port 60356 Jul 12 00:16:20.297942 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:20.308781 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:60356.service: Deactivated successfully. Jul 12 00:16:20.380871 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:20.391901 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 60372 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:20.310628 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:16:20.311465 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:16:20.314228 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:60372.service - OpenSSH per-connection server daemon (10.0.0.1:60372). Jul 12 00:16:20.314848 systemd-logind[1543]: Removed session 14. Jul 12 00:16:20.387353 systemd-logind[1543]: New session 15 of user core. Jul 12 00:16:20.393117 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 12 00:16:21.697525 sshd[5237]: Connection closed by 10.0.0.1 port 60372 Jul 12 00:16:21.698001 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:21.703205 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:60372.service: Deactivated successfully. Jul 12 00:16:21.705443 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:16:21.708402 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:16:21.709494 systemd-logind[1543]: Removed session 15. Jul 12 00:16:21.845821 containerd[1565]: time="2025-07-12T00:16:21.845743935Z" level=info msg="connecting to shim 15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692" address="unix:///run/containerd/s/36132100afd0e6043ba6399c75d9606d41b92fd79f7790e8c71c61274fbd3dac" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:21.882318 systemd[1]: Started cri-containerd-15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692.scope - libcontainer container 15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692. 
Jul 12 00:16:21.896632 systemd-resolved[1422]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:16:21.979318 containerd[1565]: time="2025-07-12T00:16:21.979186188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c7c8d84b4-n9rjr,Uid:8afc212d-432e-4273-981e-858c04dc7166,Namespace:calico-system,Attempt:0,} returns sandbox id \"15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692\"" Jul 12 00:16:22.411459 containerd[1565]: time="2025-07-12T00:16:22.411365359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:22.412256 containerd[1565]: time="2025-07-12T00:16:22.412201110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 12 00:16:22.413484 containerd[1565]: time="2025-07-12T00:16:22.413436568Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:22.416493 containerd[1565]: time="2025-07-12T00:16:22.416396489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:22.417626 containerd[1565]: time="2025-07-12T00:16:22.417310981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.709374165s" Jul 12 00:16:22.417626 containerd[1565]: 
time="2025-07-12T00:16:22.417369594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 12 00:16:22.420002 containerd[1565]: time="2025-07-12T00:16:22.419931994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:16:22.476551 containerd[1565]: time="2025-07-12T00:16:22.476486594Z" level=info msg="CreateContainer within sandbox \"ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:16:22.613324 containerd[1565]: time="2025-07-12T00:16:22.613227936Z" level=info msg="Container e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:22.624834 containerd[1565]: time="2025-07-12T00:16:22.624764573Z" level=info msg="CreateContainer within sandbox \"ace974aede43c9ce5a92cc919a0f23ac8898c6233e884a8c8e40dd93635786ca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df\"" Jul 12 00:16:22.625537 containerd[1565]: time="2025-07-12T00:16:22.625477728Z" level=info msg="StartContainer for \"e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df\"" Jul 12 00:16:22.627212 containerd[1565]: time="2025-07-12T00:16:22.627172397Z" level=info msg="connecting to shim e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df" address="unix:///run/containerd/s/e9921aced444ae38c2613971a8617db029bc699bf19efaf2cc6b179e1ec00b98" protocol=ttrpc version=3 Jul 12 00:16:22.653294 systemd[1]: Started cri-containerd-e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df.scope - libcontainer container e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df. 
Jul 12 00:16:22.731125 containerd[1565]: time="2025-07-12T00:16:22.730923664Z" level=info msg="StartContainer for \"e7e21321e5947e7cceaf7522c5fcbc16754e8b188e8ce7a0998f46c8aa1597df\" returns successfully" Jul 12 00:16:23.106738 kubelet[2762]: I0712 00:16:23.106548 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-864ffdf774-qlv67" podStartSLOduration=2.168396049 podStartE2EDuration="19.106532012s" podCreationTimestamp="2025-07-12 00:16:04 +0000 UTC" firstStartedPulling="2025-07-12 00:16:05.480893611 +0000 UTC m=+45.203874214" lastFinishedPulling="2025-07-12 00:16:22.419029575 +0000 UTC m=+62.142010177" observedRunningTime="2025-07-12 00:16:23.106214593 +0000 UTC m=+62.829195205" watchObservedRunningTime="2025-07-12 00:16:23.106532012 +0000 UTC m=+62.829512614" Jul 12 00:16:24.588461 containerd[1565]: time="2025-07-12T00:16:24.588378373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:24.601162 containerd[1565]: time="2025-07-12T00:16:24.589236897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 12 00:16:24.601162 containerd[1565]: time="2025-07-12T00:16:24.590494224Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:24.601263 containerd[1565]: time="2025-07-12T00:16:24.593737691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 
2.173757676s" Jul 12 00:16:24.601321 containerd[1565]: time="2025-07-12T00:16:24.601261683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 12 00:16:24.601863 containerd[1565]: time="2025-07-12T00:16:24.601824341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:24.602791 containerd[1565]: time="2025-07-12T00:16:24.602625886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:16:24.607763 containerd[1565]: time="2025-07-12T00:16:24.607330410Z" level=info msg="CreateContainer within sandbox \"e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:16:24.620683 containerd[1565]: time="2025-07-12T00:16:24.620634066Z" level=info msg="Container 33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:24.631965 containerd[1565]: time="2025-07-12T00:16:24.631935759Z" level=info msg="CreateContainer within sandbox \"e6f655ad4b428ac4721cdc2bce32eaa23d2be82ea47822485c57f97b7c9ffa2d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1\"" Jul 12 00:16:24.632994 containerd[1565]: time="2025-07-12T00:16:24.632404707Z" level=info msg="StartContainer for \"33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1\"" Jul 12 00:16:24.633757 containerd[1565]: time="2025-07-12T00:16:24.633736797Z" level=info msg="connecting to shim 33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1" 
address="unix:///run/containerd/s/a0ee5ae6d72ad4a99559904eefc260bf3c3accac8355c4598c0c1ffc72286232" protocol=ttrpc version=3 Jul 12 00:16:24.661237 systemd[1]: Started cri-containerd-33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1.scope - libcontainer container 33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1. Jul 12 00:16:24.708940 containerd[1565]: time="2025-07-12T00:16:24.708881535Z" level=info msg="StartContainer for \"33f7c191741ed4942eecd6632380667574612d2c01cc1eb9d5f537cc2177e4a1\" returns successfully" Jul 12 00:16:25.204598 kubelet[2762]: I0712 00:16:25.204522 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tcrj6" podStartSLOduration=29.134839589 podStartE2EDuration="44.204504047s" podCreationTimestamp="2025-07-12 00:15:41 +0000 UTC" firstStartedPulling="2025-07-12 00:16:09.532473473 +0000 UTC m=+49.255454075" lastFinishedPulling="2025-07-12 00:16:24.602137931 +0000 UTC m=+64.325118533" observedRunningTime="2025-07-12 00:16:25.204456176 +0000 UTC m=+64.927436788" watchObservedRunningTime="2025-07-12 00:16:25.204504047 +0000 UTC m=+64.927484649" Jul 12 00:16:25.467811 kubelet[2762]: I0712 00:16:25.467696 2762 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:16:25.474282 kubelet[2762]: I0712 00:16:25.474243 2762 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:16:26.717628 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:56162.service - OpenSSH per-connection server daemon (10.0.0.1:56162). 
Jul 12 00:16:26.788660 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 56162 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:26.790851 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:26.796092 systemd-logind[1543]: New session 16 of user core. Jul 12 00:16:26.807325 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:16:26.963020 sshd[5389]: Connection closed by 10.0.0.1 port 56162 Jul 12 00:16:26.963414 sshd-session[5387]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:26.969298 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:56162.service: Deactivated successfully. Jul 12 00:16:26.972291 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:16:26.974388 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:16:26.977933 systemd-logind[1543]: Removed session 16. Jul 12 00:16:29.900001 containerd[1565]: time="2025-07-12T00:16:29.899903870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:29.901336 containerd[1565]: time="2025-07-12T00:16:29.901103882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 12 00:16:29.903601 containerd[1565]: time="2025-07-12T00:16:29.903562657Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:29.906454 containerd[1565]: time="2025-07-12T00:16:29.906408372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:29.907017 containerd[1565]: 
time="2025-07-12T00:16:29.906961578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.304291628s" Jul 12 00:16:29.907059 containerd[1565]: time="2025-07-12T00:16:29.907021452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 12 00:16:29.922675 containerd[1565]: time="2025-07-12T00:16:29.922622624Z" level=info msg="CreateContainer within sandbox \"15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:16:29.933880 containerd[1565]: time="2025-07-12T00:16:29.933825347Z" level=info msg="Container 93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:29.943650 containerd[1565]: time="2025-07-12T00:16:29.943600163Z" level=info msg="CreateContainer within sandbox \"15fa496ec929d02a67aedff075637bb573600d39b1bc2fd9b1455331c3b19692\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee\"" Jul 12 00:16:29.944415 containerd[1565]: time="2025-07-12T00:16:29.944380503Z" level=info msg="StartContainer for \"93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee\"" Jul 12 00:16:29.945483 containerd[1565]: time="2025-07-12T00:16:29.945459123Z" level=info msg="connecting to shim 93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee" address="unix:///run/containerd/s/36132100afd0e6043ba6399c75d9606d41b92fd79f7790e8c71c61274fbd3dac" 
protocol=ttrpc version=3 Jul 12 00:16:29.979314 systemd[1]: Started cri-containerd-93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee.scope - libcontainer container 93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee. Jul 12 00:16:30.031558 containerd[1565]: time="2025-07-12T00:16:30.031513418Z" level=info msg="StartContainer for \"93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee\" returns successfully" Jul 12 00:16:30.143576 kubelet[2762]: I0712 00:16:30.143502 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c7c8d84b4-n9rjr" podStartSLOduration=41.216621491 podStartE2EDuration="49.143484436s" podCreationTimestamp="2025-07-12 00:15:41 +0000 UTC" firstStartedPulling="2025-07-12 00:16:21.980811434 +0000 UTC m=+61.703792036" lastFinishedPulling="2025-07-12 00:16:29.907674379 +0000 UTC m=+69.630654981" observedRunningTime="2025-07-12 00:16:30.142179655 +0000 UTC m=+69.865160267" watchObservedRunningTime="2025-07-12 00:16:30.143484436 +0000 UTC m=+69.866465038" Jul 12 00:16:30.270231 containerd[1565]: time="2025-07-12T00:16:30.270155443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee\" id:\"e60cc3d0abb5cd47488fe29b186cd46defa91d2d767e016163eafd6bb0696608\" pid:5467 exited_at:{seconds:1752279390 nanos:182211858}" Jul 12 00:16:31.985417 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:56176.service - OpenSSH per-connection server daemon (10.0.0.1:56176). Jul 12 00:16:32.042875 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 56176 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:32.044988 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:32.051559 systemd-logind[1543]: New session 17 of user core. Jul 12 00:16:32.062345 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 12 00:16:32.215585 sshd[5481]: Connection closed by 10.0.0.1 port 56176 Jul 12 00:16:32.215963 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:32.222157 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:56176.service: Deactivated successfully. Jul 12 00:16:32.225113 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:16:32.229563 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:16:32.230861 systemd-logind[1543]: Removed session 17. Jul 12 00:16:36.565587 containerd[1565]: time="2025-07-12T00:16:36.565537636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\" id:\"84551403d776cb584f64fa923ed17d9210cca07b4bc521be4721acba55fd2cd8\" pid:5506 exited_at:{seconds:1752279396 nanos:565229358}" Jul 12 00:16:37.239649 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:46962.service - OpenSSH per-connection server daemon (10.0.0.1:46962). Jul 12 00:16:37.295673 sshd[5520]: Accepted publickey for core from 10.0.0.1 port 46962 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:37.298849 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:37.304254 systemd-logind[1543]: New session 18 of user core. Jul 12 00:16:37.313151 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:16:37.459030 sshd[5522]: Connection closed by 10.0.0.1 port 46962 Jul 12 00:16:37.459326 sshd-session[5520]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:37.463683 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:46962.service: Deactivated successfully. Jul 12 00:16:37.466200 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:16:37.467117 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:16:37.468801 systemd-logind[1543]: Removed session 18. 
Jul 12 00:16:39.385306 kubelet[2762]: E0712 00:16:39.385246 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:42.477894 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:46968.service - OpenSSH per-connection server daemon (10.0.0.1:46968). Jul 12 00:16:42.526041 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 46968 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:42.527676 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:42.532254 systemd-logind[1543]: New session 19 of user core. Jul 12 00:16:42.538088 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:16:42.726741 sshd[5539]: Connection closed by 10.0.0.1 port 46968 Jul 12 00:16:42.727776 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:42.736313 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:46968.service: Deactivated successfully. Jul 12 00:16:42.739311 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:16:42.740504 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:16:42.742853 systemd-logind[1543]: Removed session 19. 
Jul 12 00:16:47.379382 containerd[1565]: time="2025-07-12T00:16:47.379327247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b\" id:\"c4eaf75d4603794a5ce0dac22d5c490c26090761576924d648f72214a0272eb1\" pid:5593 exited_at:{seconds:1752279407 nanos:377441198}" Jul 12 00:16:47.396492 containerd[1565]: time="2025-07-12T00:16:47.380772721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ce7eb68684787f6b458bcedba90fd01eaa17f028a0cd465383635d28a9ada22b\" id:\"8874bf4605df33f1f18af806a73d87267895113adc27fc378404d7d318f5b68f\" pid:5570 exited_at:{seconds:1752279407 nanos:380499593}" Jul 12 00:16:47.745532 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:45164.service - OpenSSH per-connection server daemon (10.0.0.1:45164). Jul 12 00:16:47.820083 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 45164 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:47.822458 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:47.828299 systemd-logind[1543]: New session 20 of user core. Jul 12 00:16:47.838357 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:16:48.409569 sshd[5608]: Connection closed by 10.0.0.1 port 45164 Jul 12 00:16:48.410168 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:48.424678 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:45164.service: Deactivated successfully. Jul 12 00:16:48.427870 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:16:48.428853 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:16:48.432621 systemd-logind[1543]: Removed session 20. Jul 12 00:16:48.434586 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:45174.service - OpenSSH per-connection server daemon (10.0.0.1:45174). 
Jul 12 00:16:48.492165 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 45174 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:48.494244 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:48.500697 systemd-logind[1543]: New session 21 of user core. Jul 12 00:16:48.508196 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:16:48.970171 sshd[5623]: Connection closed by 10.0.0.1 port 45174 Jul 12 00:16:48.970614 sshd-session[5621]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:48.988421 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:45174.service: Deactivated successfully. Jul 12 00:16:48.991083 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:16:48.992027 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:16:48.995643 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:45180.service - OpenSSH per-connection server daemon (10.0.0.1:45180). Jul 12 00:16:48.996377 systemd-logind[1543]: Removed session 21. Jul 12 00:16:49.065233 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 45180 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:49.067551 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:49.073334 systemd-logind[1543]: New session 22 of user core. Jul 12 00:16:49.083219 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 00:16:50.176530 sshd[5636]: Connection closed by 10.0.0.1 port 45180 Jul 12 00:16:50.177632 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:50.190418 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:45180.service: Deactivated successfully. Jul 12 00:16:50.193185 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:16:50.195462 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. 
Jul 12 00:16:50.198144 systemd-logind[1543]: Removed session 22. Jul 12 00:16:50.201075 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:45188.service - OpenSSH per-connection server daemon (10.0.0.1:45188). Jul 12 00:16:50.256090 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 45188 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:50.257807 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:50.263037 systemd-logind[1543]: New session 23 of user core. Jul 12 00:16:50.277162 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 00:16:50.703428 sshd[5670]: Connection closed by 10.0.0.1 port 45188 Jul 12 00:16:50.703762 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:50.715809 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:45188.service: Deactivated successfully. Jul 12 00:16:50.718959 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:16:50.720365 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:16:50.725346 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:45192.service - OpenSSH per-connection server daemon (10.0.0.1:45192). Jul 12 00:16:50.727051 systemd-logind[1543]: Removed session 23. Jul 12 00:16:50.776331 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 45192 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:50.778392 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:50.783307 systemd-logind[1543]: New session 24 of user core. Jul 12 00:16:50.795167 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 12 00:16:50.938716 sshd[5683]: Connection closed by 10.0.0.1 port 45192 Jul 12 00:16:50.939128 sshd-session[5681]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:50.944636 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:45192.service: Deactivated successfully. Jul 12 00:16:50.946952 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:16:50.948237 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:16:50.950365 systemd-logind[1543]: Removed session 24. Jul 12 00:16:51.386112 kubelet[2762]: E0712 00:16:51.386062 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.385931 kubelet[2762]: E0712 00:16:52.385827 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.554062 kubelet[2762]: I0712 00:16:52.554011 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:16:55.385557 kubelet[2762]: E0712 00:16:55.385488 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.955406 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:50026.service - OpenSSH per-connection server daemon (10.0.0.1:50026). Jul 12 00:16:56.018652 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 50026 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:16:56.020740 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:16:56.025910 systemd-logind[1543]: New session 25 of user core. Jul 12 00:16:56.034263 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 12 00:16:56.161415 sshd[5700]: Connection closed by 10.0.0.1 port 50026 Jul 12 00:16:56.161807 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:56.166932 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:50026.service: Deactivated successfully. Jul 12 00:16:56.169289 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:16:56.170311 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:16:56.171731 systemd-logind[1543]: Removed session 25. Jul 12 00:17:00.179945 containerd[1565]: time="2025-07-12T00:17:00.179886066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93c2741b3e8cd360fab31850f4b0f09f49fdec01b64ec275995aab6ddeaa6dee\" id:\"6b2e8e6c57bf7593dc88a9b1401f76d3ef5e1d0a9a8355086fc956c651bb48a1\" pid:5729 exited_at:{seconds:1752279420 nanos:179631333}" Jul 12 00:17:01.177633 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028). Jul 12 00:17:01.235127 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:17:01.237734 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:01.243180 systemd-logind[1543]: New session 26 of user core. Jul 12 00:17:01.251173 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 12 00:17:01.470336 sshd[5743]: Connection closed by 10.0.0.1 port 50028 Jul 12 00:17:01.471458 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:01.479534 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:50028.service: Deactivated successfully. Jul 12 00:17:01.482199 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:17:01.483915 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:17:01.485607 systemd-logind[1543]: Removed session 26. 
Jul 12 00:17:06.488371 systemd[1]: Started sshd@26-10.0.0.79:22-10.0.0.1:44422.service - OpenSSH per-connection server daemon (10.0.0.1:44422). Jul 12 00:17:06.547994 containerd[1565]: time="2025-07-12T00:17:06.547918976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ce9f601243f67ae137fade204eb4658decd2302bb05a63227d1a1ee0d5dbc97\" id:\"6e84495a67b936400a1cc4f34bf30961b0a688b673ddd478f783899016d414c4\" pid:5771 exited_at:{seconds:1752279426 nanos:547470768}" Jul 12 00:17:06.552226 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 44422 ssh2: RSA SHA256:HI1YtIa6Tdc7kkeTuB5WOELPzak63J8vFA5jPsFa0ZA Jul 12 00:17:06.553945 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:06.560899 systemd-logind[1543]: New session 27 of user core. Jul 12 00:17:06.569140 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 12 00:17:06.695473 sshd[5786]: Connection closed by 10.0.0.1 port 44422 Jul 12 00:17:06.695943 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:06.702159 systemd[1]: sshd@26-10.0.0.79:22-10.0.0.1:44422.service: Deactivated successfully. Jul 12 00:17:06.705379 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:17:06.706370 systemd-logind[1543]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:17:06.708800 systemd-logind[1543]: Removed session 27.