Jul 15 05:07:08.871887 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025 Jul 15 05:07:08.871910 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:07:08.871919 kernel: BIOS-provided physical RAM map: Jul 15 05:07:08.871925 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 15 05:07:08.871931 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 15 05:07:08.871938 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 15 05:07:08.871945 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 15 05:07:08.871954 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 15 05:07:08.871963 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 15 05:07:08.871970 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 15 05:07:08.871976 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 15 05:07:08.871982 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 15 05:07:08.871988 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 15 05:07:08.871995 kernel: NX (Execute Disable) protection: active Jul 15 05:07:08.872010 kernel: APIC: Static calls initialized Jul 15 05:07:08.872017 kernel: SMBIOS 2.8 present. 
Jul 15 05:07:08.872027 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 15 05:07:08.872034 kernel: DMI: Memory slots populated: 1/1 Jul 15 05:07:08.872040 kernel: Hypervisor detected: KVM Jul 15 05:07:08.872047 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 05:07:08.872054 kernel: kvm-clock: using sched offset of 4300848365 cycles Jul 15 05:07:08.872061 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 05:07:08.872068 kernel: tsc: Detected 2794.750 MHz processor Jul 15 05:07:08.872077 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 05:07:08.872084 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 05:07:08.872099 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 15 05:07:08.872106 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 15 05:07:08.872113 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 05:07:08.872120 kernel: Using GB pages for direct mapping Jul 15 05:07:08.872127 kernel: ACPI: Early table checksum verification disabled Jul 15 05:07:08.872134 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 15 05:07:08.872141 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872151 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872158 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872165 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 15 05:07:08.872172 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872179 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872186 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872193 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:07:08.872200 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 15 05:07:08.872211 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 15 05:07:08.872219 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 15 05:07:08.872237 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 15 05:07:08.872245 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 15 05:07:08.872252 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 15 05:07:08.872259 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 15 05:07:08.872269 kernel: No NUMA configuration found Jul 15 05:07:08.872276 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 15 05:07:08.872283 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jul 15 05:07:08.872291 kernel: Zone ranges: Jul 15 05:07:08.872298 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 05:07:08.872305 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 15 05:07:08.872312 kernel: Normal empty Jul 15 05:07:08.872319 kernel: Device empty Jul 15 05:07:08.872326 kernel: Movable zone start for each node Jul 15 05:07:08.872333 kernel: Early memory node ranges Jul 15 05:07:08.872344 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 15 05:07:08.872351 kernel: node 0: [mem 
0x0000000000100000-0x000000009cfdbfff] Jul 15 05:07:08.872358 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jul 15 05:07:08.872365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 05:07:08.872372 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 15 05:07:08.872382 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 15 05:07:08.872389 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 15 05:07:08.872398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 05:07:08.872406 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 05:07:08.872415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 15 05:07:08.872422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 05:07:08.872431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 05:07:08.872439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 05:07:08.872446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 05:07:08.872453 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 05:07:08.872460 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 05:07:08.872467 kernel: TSC deadline timer available Jul 15 05:07:08.872474 kernel: CPU topo: Max. logical packages: 1 Jul 15 05:07:08.872484 kernel: CPU topo: Max. logical dies: 1 Jul 15 05:07:08.872491 kernel: CPU topo: Max. dies per package: 1 Jul 15 05:07:08.872498 kernel: CPU topo: Max. threads per core: 1 Jul 15 05:07:08.872505 kernel: CPU topo: Num. cores per package: 4 Jul 15 05:07:08.872512 kernel: CPU topo: Num. threads per package: 4 Jul 15 05:07:08.872519 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 15 05:07:08.872527 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 15 05:07:08.872534 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 15 05:07:08.872541 kernel: kvm-guest: setup PV sched yield Jul 15 05:07:08.872548 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 15 05:07:08.872557 kernel: Booting paravirtualized kernel on KVM Jul 15 05:07:08.872565 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 05:07:08.872572 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 15 05:07:08.872579 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 15 05:07:08.872587 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 15 05:07:08.872594 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 15 05:07:08.872601 kernel: kvm-guest: PV spinlocks enabled Jul 15 05:07:08.872608 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 05:07:08.872616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:07:08.872626 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 15 05:07:08.872633 kernel: random: crng init done Jul 15 05:07:08.872640 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 05:07:08.872648 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 05:07:08.872664 kernel: Fallback order for Node 0: 0 Jul 15 05:07:08.872672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jul 15 05:07:08.872687 kernel: Policy zone: DMA32 Jul 15 05:07:08.872700 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 05:07:08.872711 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 05:07:08.872718 kernel: ftrace: allocating 40097 entries in 157 pages Jul 15 05:07:08.872726 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 05:07:08.872733 kernel: Dynamic Preempt: voluntary Jul 15 05:07:08.872740 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 05:07:08.872748 kernel: rcu: RCU event tracing is enabled. Jul 15 05:07:08.872755 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 15 05:07:08.872763 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 05:07:08.872777 kernel: Rude variant of Tasks RCU enabled. Jul 15 05:07:08.872786 kernel: Tracing variant of Tasks RCU enabled. Jul 15 05:07:08.872794 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 05:07:08.872801 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 05:07:08.872808 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:07:08.872816 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:07:08.872823 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:07:08.872830 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 15 05:07:08.872838 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 05:07:08.872855 kernel: Console: colour VGA+ 80x25 Jul 15 05:07:08.872862 kernel: printk: legacy console [ttyS0] enabled Jul 15 05:07:08.872870 kernel: ACPI: Core revision 20240827 Jul 15 05:07:08.872877 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 15 05:07:08.872887 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 05:07:08.872894 kernel: x2apic enabled Jul 15 05:07:08.872902 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 05:07:08.872912 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 15 05:07:08.872920 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 15 05:07:08.872929 kernel: kvm-guest: setup PV IPIs Jul 15 05:07:08.872937 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 15 05:07:08.872944 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 15 05:07:08.872952 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jul 15 05:07:08.872959 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 15 05:07:08.872966 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 15 05:07:08.872974 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 15 05:07:08.872981 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 05:07:08.872991 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 05:07:08.872998 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 05:07:08.873006 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 15 05:07:08.873013 kernel: RETBleed: Mitigation: untrained return thunk Jul 15 05:07:08.873020 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 05:07:08.873028 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 15 05:07:08.873035 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 15 05:07:08.873043 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 15 05:07:08.873051 kernel: x86/bugs: return thunk changed Jul 15 05:07:08.873060 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 15 05:07:08.873068 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 05:07:08.873075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 05:07:08.873082 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 05:07:08.873098 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 05:07:08.873106 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 15 05:07:08.873113 kernel: Freeing SMP alternatives memory: 32K Jul 15 05:07:08.873121 kernel: pid_max: default: 32768 minimum: 301 Jul 15 05:07:08.873129 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 05:07:08.873138 kernel: landlock: Up and running. Jul 15 05:07:08.873146 kernel: SELinux: Initializing. Jul 15 05:07:08.873153 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 05:07:08.873164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 05:07:08.873171 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 15 05:07:08.873179 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 15 05:07:08.873186 kernel: ... version: 0 Jul 15 05:07:08.873193 kernel: ... bit width: 48 Jul 15 05:07:08.873201 kernel: ... generic registers: 6 Jul 15 05:07:08.873211 kernel: ... value mask: 0000ffffffffffff Jul 15 05:07:08.873218 kernel: ... max period: 00007fffffffffff Jul 15 05:07:08.873260 kernel: ... fixed-purpose events: 0 Jul 15 05:07:08.873268 kernel: ... event mask: 000000000000003f Jul 15 05:07:08.873275 kernel: signal: max sigframe size: 1776 Jul 15 05:07:08.873282 kernel: rcu: Hierarchical SRCU implementation. Jul 15 05:07:08.873290 kernel: rcu: Max phase no-delay instances is 400. Jul 15 05:07:08.873297 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 05:07:08.873305 kernel: smp: Bringing up secondary CPUs ... Jul 15 05:07:08.873316 kernel: smpboot: x86: Booting SMP configuration: Jul 15 05:07:08.873323 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 15 05:07:08.873331 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 05:07:08.873338 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 15 05:07:08.873346 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 136908K reserved, 0K cma-reserved) Jul 15 05:07:08.873353 kernel: devtmpfs: initialized Jul 15 05:07:08.873361 kernel: x86/mm: Memory block size: 128MB Jul 15 05:07:08.873368 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 05:07:08.873376 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 05:07:08.873385 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 05:07:08.873393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 05:07:08.873403 kernel: audit: initializing netlink subsys (disabled) Jul 15 05:07:08.873410 kernel: audit: type=2000 audit(1752556024.978:1): state=initialized audit_enabled=0 res=1 Jul 15 05:07:08.873418 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 05:07:08.873425 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 05:07:08.873433 kernel: cpuidle: using governor menu Jul 15 05:07:08.873440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 05:07:08.873448 kernel: dca service started, version 1.12.1 Jul 15 05:07:08.873458 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jul 15 05:07:08.873465 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 15 05:07:08.873472 kernel: PCI: Using configuration type 1 for base access Jul 15 05:07:08.873480 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 05:07:08.873487 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 05:07:08.873495 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 05:07:08.873503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 05:07:08.873510 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 05:07:08.873518 kernel: ACPI: Added _OSI(Module Device) Jul 15 05:07:08.873527 kernel: ACPI: Added _OSI(Processor Device) Jul 15 05:07:08.873535 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 05:07:08.873542 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 05:07:08.873549 kernel: ACPI: Interpreter enabled Jul 15 05:07:08.873557 kernel: ACPI: PM: (supports S0 S3 S5) Jul 15 05:07:08.873564 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 05:07:08.873572 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 05:07:08.873579 kernel: PCI: Using E820 reservations for host bridge windows Jul 15 05:07:08.873587 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 15 05:07:08.873596 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 05:07:08.873802 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 05:07:08.873924 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 15 05:07:08.874040 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 15 05:07:08.874050 kernel: PCI host bridge to bus 0000:00 Jul 15 05:07:08.874189 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 05:07:08.874316 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 05:07:08.874421 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 05:07:08.874525 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 15 05:07:08.874631 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 15 05:07:08.874734 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 15 05:07:08.874838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 05:07:08.874992 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 15 05:07:08.875143 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 15 05:07:08.875299 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jul 15 05:07:08.875417 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jul 15 05:07:08.875530 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jul 15 05:07:08.875645 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 05:07:08.875776 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 05:07:08.875892 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jul 15 05:07:08.876012 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jul 15 05:07:08.876137 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jul 15 05:07:08.876285 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 15 05:07:08.876404 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jul 15 05:07:08.876520 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jul 15 05:07:08.876634 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jul 15 05:07:08.876769 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 15 05:07:08.876894 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jul 15 05:07:08.877015 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jul 15 05:07:08.877141 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 15 05:07:08.877274 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jul 15 05:07:08.877407 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 15 05:07:08.877524 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 15 05:07:08.877658 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 15 05:07:08.877773 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jul 15 05:07:08.877893 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jul 15 05:07:08.878027 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 15 05:07:08.878154 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jul 15 05:07:08.878165 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 05:07:08.878173 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 05:07:08.878184 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 05:07:08.878191 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 05:07:08.878199 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 15 05:07:08.878206 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 15 05:07:08.878216 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 15 05:07:08.878236 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 15 05:07:08.878244 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 15 05:07:08.878252 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 15 05:07:08.878259 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 15 05:07:08.878270 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 15 05:07:08.878277 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 15 05:07:08.878285 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 15 05:07:08.878292 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 15 05:07:08.878299 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 15 05:07:08.878307 kernel: iommu: Default domain type: Translated Jul 15 05:07:08.878315 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 05:07:08.878322 kernel: PCI: Using ACPI for IRQ routing Jul 15 05:07:08.878330 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 05:07:08.878339 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 15 05:07:08.878346 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 15 05:07:08.878468 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 15 05:07:08.878584 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 15 05:07:08.878699 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 05:07:08.878709 kernel: vgaarb: loaded Jul 15 05:07:08.878717 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 15 05:07:08.878725 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 15 05:07:08.878732 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 05:07:08.878743 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 
05:07:08.878750 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 05:07:08.878758 kernel: pnp: PnP ACPI init Jul 15 05:07:08.878901 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 15 05:07:08.878912 kernel: pnp: PnP ACPI: found 6 devices Jul 15 05:07:08.878920 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 05:07:08.878928 kernel: NET: Registered PF_INET protocol family Jul 15 05:07:08.878935 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 05:07:08.878946 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 05:07:08.878953 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 05:07:08.878961 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 05:07:08.878968 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 05:07:08.878976 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 05:07:08.878983 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 05:07:08.878991 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 05:07:08.878998 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 05:07:08.879007 kernel: NET: Registered PF_XDP protocol family Jul 15 05:07:08.879126 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 05:07:08.879267 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 05:07:08.879386 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 05:07:08.879506 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 15 05:07:08.879613 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 15 05:07:08.879718 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 15 05:07:08.879728 kernel: PCI: CLS 0 bytes, default 64 Jul 15 05:07:08.879737 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 15 05:07:08.879748 kernel: Initialise system trusted keyrings Jul 15 05:07:08.879758 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 05:07:08.879767 kernel: Key type asymmetric registered Jul 15 05:07:08.879776 kernel: Asymmetric key parser 'x509' registered Jul 15 05:07:08.879786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 05:07:08.879796 kernel: io scheduler mq-deadline registered Jul 15 05:07:08.879804 kernel: io scheduler kyber registered Jul 15 05:07:08.879812 kernel: io scheduler bfq registered Jul 15 05:07:08.879819 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 05:07:08.879830 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 15 05:07:08.879837 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 15 05:07:08.879845 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 15 05:07:08.879853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 05:07:08.879860 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:07:08.879868 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 05:07:08.879875 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 05:07:08.879883 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 05:07:08.880021 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 15 
05:07:08.880150 kernel: rtc_cmos 00:04: registered as rtc0 Jul 15 05:07:08.880276 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T05:07:08 UTC (1752556028) Jul 15 05:07:08.880396 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 15 05:07:08.880411 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 15 05:07:08.880418 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jul 15 05:07:08.880426 kernel: NET: Registered PF_INET6 protocol family Jul 15 05:07:08.880433 kernel: Segment Routing with IPv6 Jul 15 05:07:08.880441 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 05:07:08.880453 kernel: NET: Registered PF_PACKET protocol family Jul 15 05:07:08.880461 kernel: Key type dns_resolver registered Jul 15 05:07:08.880468 kernel: IPI shorthand broadcast: enabled Jul 15 05:07:08.880476 kernel: sched_clock: Marking stable (3763003173, 109750866)->(3892886091, -20132052) Jul 15 05:07:08.880484 kernel: registered taskstats version 1 Jul 15 05:07:08.880491 kernel: Loading compiled-in X.509 certificates Jul 15 05:07:08.880499 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7' Jul 15 05:07:08.880506 kernel: Demotion targets for Node 0: null Jul 15 05:07:08.880514 kernel: Key type .fscrypt registered Jul 15 05:07:08.880523 kernel: Key type fscrypt-provisioning registered Jul 15 05:07:08.880531 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 05:07:08.880538 kernel: ima: Allocated hash algorithm: sha1 Jul 15 05:07:08.880546 kernel: ima: No architecture policies found Jul 15 05:07:08.880553 kernel: clk: Disabling unused clocks Jul 15 05:07:08.880561 kernel: Warning: unable to open an initial console. Jul 15 05:07:08.880568 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 15 05:07:08.880578 kernel: Write protecting the kernel read-only data: 24576k Jul 15 05:07:08.880590 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 05:07:08.880597 kernel: Run /init as init process Jul 15 05:07:08.880605 kernel: with arguments: Jul 15 05:07:08.880612 kernel: /init Jul 15 05:07:08.880620 kernel: with environment: Jul 15 05:07:08.880629 kernel: HOME=/ Jul 15 05:07:08.880637 kernel: TERM=linux Jul 15 05:07:08.880646 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 05:07:08.880655 systemd[1]: Successfully made /usr/ read-only. Jul 15 05:07:08.880668 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:07:08.880688 systemd[1]: Detected virtualization kvm. Jul 15 05:07:08.880697 systemd[1]: Detected architecture x86-64. Jul 15 05:07:08.880705 systemd[1]: Running in initrd. Jul 15 05:07:08.880714 systemd[1]: No hostname configured, using default hostname. Jul 15 05:07:08.880724 systemd[1]: Hostname set to <localhost>. Jul 15 05:07:08.880732 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:07:08.880740 systemd[1]: Queued start job for default target initrd.target. Jul 15 05:07:08.880748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 15 05:07:08.880756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:07:08.880768 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 05:07:08.880778 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:07:08.880786 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 05:07:08.880797 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 05:07:08.880809 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 05:07:08.880820 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 05:07:08.880830 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:07:08.880838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:07:08.880846 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:07:08.880854 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:07:08.880864 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:07:08.880873 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:07:08.880881 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:07:08.880889 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:07:08.880898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 05:07:08.880906 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 05:07:08.880915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:07:08.880925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:07:08.880934 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:07:08.880945 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:07:08.880954 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:07:08.880962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:07:08.880970 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 05:07:08.880979 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:07:08.880991 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:07:08.881002 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:07:08.881012 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:07:08.881020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:08.881029 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:07:08.881041 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:07:08.881052 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:07:08.881060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:07:08.881105 systemd-journald[219]: Collecting audit messages is disabled. 
Jul 15 05:07:08.881129 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:07:08.881137 systemd-journald[219]: Journal started Jul 15 05:07:08.881156 systemd-journald[219]: Runtime Journal (/run/log/journal/4556c330b8404f56933eff9a810a8fc6) is 6M, max 48.6M, 42.5M free. Jul 15 05:07:08.870617 systemd-modules-load[221]: Inserted module 'overlay' Jul 15 05:07:08.886457 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:07:08.888960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:07:08.897255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:07:08.898863 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 15 05:07:08.931772 kernel: Bridge firewalling registered Jul 15 05:07:08.926940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:07:08.939396 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:07:08.941065 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:08.943641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:07:08.947262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:07:08.954852 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:07:08.961725 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:07:08.962390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:07:08.963118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:07:08.971966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:07:08.972661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:07:08.975444 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:07:09.002332 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:07:09.019851 systemd-resolved[260]: Positive Trust Anchors: Jul 15 05:07:09.019870 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:07:09.019911 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:07:09.022729 systemd-resolved[260]: Defaulting to hostname 'linux'. 
Jul 15 05:07:09.023778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:07:09.028802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:07:09.134306 kernel: SCSI subsystem initialized Jul 15 05:07:09.170280 kernel: Loading iSCSI transport class v2.0-870. Jul 15 05:07:09.182272 kernel: iscsi: registered transport (tcp) Jul 15 05:07:09.352316 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:07:09.352404 kernel: QLogic iSCSI HBA Driver Jul 15 05:07:09.376598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:07:09.415849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:07:09.417939 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:07:09.490814 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:07:09.492736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 05:07:09.551273 kernel: raid6: avx2x4 gen() 29585 MB/s Jul 15 05:07:09.568259 kernel: raid6: avx2x2 gen() 30821 MB/s Jul 15 05:07:09.585509 kernel: raid6: avx2x1 gen() 18604 MB/s Jul 15 05:07:09.585537 kernel: raid6: using algorithm avx2x2 gen() 30821 MB/s Jul 15 05:07:09.603627 kernel: raid6: .... xor() 17654 MB/s, rmw enabled Jul 15 05:07:09.603681 kernel: raid6: using avx2x2 recovery algorithm Jul 15 05:07:09.624265 kernel: xor: automatically using best checksumming function avx Jul 15 05:07:09.844270 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:07:09.853252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:07:09.857055 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:07:09.899455 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 15 05:07:09.905921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:07:09.908194 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:07:09.935629 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Jul 15 05:07:09.969105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:07:09.972774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:07:10.062728 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:07:10.066573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:07:10.114467 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:07:10.117251 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 05:07:10.128243 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 15 05:07:10.128567 kernel: libata version 3.00 loaded. Jul 15 05:07:10.139205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:07:10.139704 kernel: AES CTR mode by8 optimization enabled Jul 15 05:07:10.139365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:10.142904 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 05:07:10.143095 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 05:07:10.142569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:10.145469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 05:07:10.153161 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 15 05:07:10.153343 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 15 05:07:10.153489 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 05:07:10.153622 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 05:07:10.150027 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:07:10.158821 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 05:07:10.158877 kernel: GPT:9289727 != 19775487 Jul 15 05:07:10.158893 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 05:07:10.158903 kernel: GPT:9289727 != 19775487 Jul 15 05:07:10.158912 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 05:07:10.158923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:10.163291 kernel: scsi host0: ahci Jul 15 05:07:10.163485 kernel: scsi host1: ahci Jul 15 05:07:10.169363 kernel: scsi host2: ahci Jul 15 05:07:10.176413 kernel: scsi host3: ahci Jul 15 05:07:10.176663 kernel: scsi host4: ahci Jul 15 05:07:10.178255 kernel: scsi host5: ahci Jul 15 05:07:10.178429 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 Jul 15 05:07:10.179296 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 Jul 15 05:07:10.182034 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 Jul 15 05:07:10.182067 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 Jul 15 05:07:10.183336 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 Jul 15 05:07:10.183365 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 Jul 15 05:07:10.212632 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 05:07:10.231882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 05:07:10.232799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:10.243802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:07:10.258062 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 05:07:10.258543 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 05:07:10.264322 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 05:07:10.294944 disk-uuid[633]: Primary Header is updated. Jul 15 05:07:10.294944 disk-uuid[633]: Secondary Entries is updated. Jul 15 05:07:10.294944 disk-uuid[633]: Secondary Header is updated. 
Jul 15 05:07:10.299254 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:10.305258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:10.489532 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 05:07:10.489610 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:10.489628 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:10.489641 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:10.491269 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:10.491351 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 05:07:10.492359 kernel: ata3.00: applying bridge limits Jul 15 05:07:10.493253 kernel: ata3.00: configured for UDMA/100 Jul 15 05:07:10.495284 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 05:07:10.498264 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 05:07:10.556276 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 05:07:10.556644 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 05:07:10.582282 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 05:07:10.984791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 05:07:10.986596 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:07:10.988815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:07:10.990070 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:07:10.991977 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 05:07:11.023955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:07:11.307285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:07:11.307452 disk-uuid[634]: The operation has completed successfully. Jul 15 05:07:11.341161 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:07:11.341335 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:07:11.383573 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:07:11.402668 sh[662]: Success Jul 15 05:07:11.423247 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 05:07:11.423329 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:07:11.423345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:07:11.433323 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 15 05:07:11.468203 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:07:11.471660 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:07:11.485161 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 15 05:07:11.490254 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:07:11.493174 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (674) Jul 15 05:07:11.493194 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:07:11.493205 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:11.494700 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:07:11.500487 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 05:07:11.501499 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:07:11.502857 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:07:11.503926 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 05:07:11.506389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 05:07:11.531295 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705) Jul 15 05:07:11.533944 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:11.533968 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:11.533979 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:07:11.541275 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:11.542099 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:07:11.543976 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:07:11.699517 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:07:11.700304 ignition[746]: Ignition 2.21.0 Jul 15 05:07:11.700311 ignition[746]: Stage: fetch-offline Jul 15 05:07:11.702824 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:07:11.700348 ignition[746]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:11.700356 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:11.700432 ignition[746]: parsed url from cmdline: "" Jul 15 05:07:11.700436 ignition[746]: no config URL provided Jul 15 05:07:11.700440 ignition[746]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:07:11.700448 ignition[746]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:07:11.700470 ignition[746]: op(1): [started] loading QEMU firmware config module Jul 15 05:07:11.700475 ignition[746]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 05:07:11.709322 ignition[746]: op(1): [finished] loading QEMU firmware config module Jul 15 05:07:11.745401 systemd-networkd[852]: lo: Link UP Jul 15 05:07:11.745412 systemd-networkd[852]: lo: Gained carrier Jul 15 05:07:11.746944 systemd-networkd[852]: Enumeration completed Jul 15 05:07:11.747070 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:07:11.748391 systemd[1]: Reached target network.target - Network. Jul 15 05:07:11.748741 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 15 05:07:11.748747 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:07:11.749279 systemd-networkd[852]: eth0: Link UP Jul 15 05:07:11.749282 systemd-networkd[852]: eth0: Gained carrier Jul 15 05:07:11.749291 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:11.990438 ignition[746]: parsing config with SHA512: 09bab963bb3edcf7c114e690f248c58894f2e295b4c40a25dbaaf3e3ec0eecea8bba0548c9fdbac6542a0fc81c11eb981b082aef096b375b601e48bd92677704 Jul 15 05:07:11.994485 unknown[746]: fetched base config from "system" Jul 15 05:07:11.994497 unknown[746]: fetched user config from "qemu" Jul 15 05:07:11.995042 ignition[746]: fetch-offline: fetch-offline passed Jul 15 05:07:11.995121 ignition[746]: Ignition finished successfully Jul 15 05:07:11.996300 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:07:11.998757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:07:12.001464 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 05:07:12.002382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:07:12.071920 ignition[857]: Ignition 2.21.0 Jul 15 05:07:12.071937 ignition[857]: Stage: kargs Jul 15 05:07:12.072171 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:12.072185 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:12.075528 ignition[857]: kargs: kargs passed Jul 15 05:07:12.075608 ignition[857]: Ignition finished successfully Jul 15 05:07:12.082383 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 05:07:12.085788 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 05:07:12.194732 ignition[865]: Ignition 2.21.0 Jul 15 05:07:12.194749 ignition[865]: Stage: disks Jul 15 05:07:12.194890 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:12.194900 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:12.198562 ignition[865]: disks: disks passed Jul 15 05:07:12.198615 ignition[865]: Ignition finished successfully Jul 15 05:07:12.202272 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:07:12.203551 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:07:12.205404 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:07:12.206645 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:07:12.207065 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:07:12.207559 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:07:12.208910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:07:12.232158 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.21 Jul 15 05:07:12.232181 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Jul 15 05:07:12.234446 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:07:12.245592 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:07:12.249756 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 15 05:07:12.387250 kernel: EXT4-fs (vda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:07:12.387983 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:07:12.406724 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:07:12.409641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:07:12.411684 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:07:12.412794 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 05:07:12.412834 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:07:12.412861 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:07:12.431740 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:07:12.434746 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:07:12.435553 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Jul 15 05:07:12.435578 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:12.437698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:12.437716 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:07:12.443345 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:07:12.487192 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:07:12.491734 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:07:12.495795 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:07:12.500639 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:07:12.604957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:07:12.607731 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:07:12.609641 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:07:12.627715 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:07:12.629382 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:12.642713 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 05:07:12.665357 ignition[997]: INFO : Ignition 2.21.0 Jul 15 05:07:12.666641 ignition[997]: INFO : Stage: mount Jul 15 05:07:12.666641 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:12.666641 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:12.670549 ignition[997]: INFO : mount: mount passed Jul 15 05:07:12.671451 ignition[997]: INFO : Ignition finished successfully Jul 15 05:07:12.675572 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:07:12.677947 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:07:12.703106 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 15 05:07:12.729255 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Jul 15 05:07:12.731822 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:07:12.731846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:07:12.731859 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:07:12.735665 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:07:12.773730 ignition[1026]: INFO : Ignition 2.21.0 Jul 15 05:07:12.773730 ignition[1026]: INFO : Stage: files Jul 15 05:07:12.775594 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:12.775594 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:12.777837 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:07:12.779269 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:07:12.779269 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:07:12.782684 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:07:12.784472 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:07:12.786133 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:07:12.784975 unknown[1026]: wrote ssh authorized keys file for user: core Jul 15 05:07:12.789065 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 15 05:07:12.789065 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 15 05:07:12.834251 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:07:12.976945 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 15 05:07:12.976945 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:07:12.981404 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:07:12.983306 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:07:12.985539 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:07:12.987451 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:07:12.989642 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:07:12.992073 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:07:12.994495 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:07:13.001165 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jul 15 05:07:13.003638 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:07:13.005905 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:07:13.008849 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:07:13.008849 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:07:13.008849 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 15 05:07:13.514419 systemd-networkd[852]: eth0: Gained IPv6LL Jul 15 05:07:13.855420 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 05:07:15.309622 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 05:07:15.309622 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 05:07:15.313664 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:07:15.322040 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:07:15.322040 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 05:07:15.322040 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 15 05:07:15.322040 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:07:15.329442 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:07:15.329442 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 15 05:07:15.329442 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 05:07:15.411879 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:07:15.418305 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:07:15.420537 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 05:07:15.420537 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:07:15.423471 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:07:15.424852 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:07:15.426668 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jul 15 05:07:15.428402 ignition[1026]: INFO : files: files passed Jul 15 05:07:15.429275 ignition[1026]: INFO : Ignition finished successfully Jul 15 05:07:15.432887 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:07:15.435999 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:07:15.439415 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:07:15.456381 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:07:15.457486 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 05:07:15.457608 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:07:15.461789 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:07:15.461789 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:07:15.464991 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:07:15.464122 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:07:15.466744 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:07:15.469809 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:07:15.533309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:07:15.533474 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 05:07:15.535426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:07:15.535889 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:07:15.536285 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:07:15.537367 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:07:15.574721 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:07:15.577484 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:07:15.606243 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:07:15.606915 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:07:15.607347 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:07:15.607921 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:07:15.608075 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:07:15.617304 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:07:15.617801 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:07:15.618139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:07:15.618653 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:07:15.619002 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:07:15.619495 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 05:07:15.619823 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jul 15 05:07:15.620171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:07:15.620679 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 05:07:15.621010 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:07:15.621498 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:07:15.621788 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:07:15.621937 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:07:15.641430 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:07:15.642009 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:07:15.642499 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:07:15.645732 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:07:15.646528 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:07:15.646697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:07:15.652093 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:07:15.652263 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:07:15.652955 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:07:15.657166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:07:15.657423 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:07:15.658038 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:07:15.662884 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:07:15.663702 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:07:15.663804 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:07:15.665795 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:07:15.665929 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:07:15.667732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:07:15.667909 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:07:15.672093 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:07:15.672310 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:07:15.676648 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:07:15.677098 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:07:15.677266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:07:15.678872 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:07:15.682033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:07:15.683271 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:07:15.684790 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:07:15.684945 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:07:15.694451 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:07:15.697490 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 15 05:07:15.715556 ignition[1081]: INFO : Ignition 2.21.0 Jul 15 05:07:15.715556 ignition[1081]: INFO : Stage: umount Jul 15 05:07:15.717803 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:07:15.717803 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:07:15.717803 ignition[1081]: INFO : umount: umount passed Jul 15 05:07:15.717803 ignition[1081]: INFO : Ignition finished successfully Jul 15 05:07:15.719192 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:07:15.719362 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:07:15.720030 systemd[1]: Stopped target network.target - Network. Jul 15 05:07:15.723187 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:07:15.723903 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 05:07:15.724964 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:07:15.725035 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:07:15.725448 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:07:15.725498 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:07:15.725774 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:07:15.725815 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:07:15.726243 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:07:15.726721 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 05:07:15.728032 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:07:15.740441 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:07:15.740570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:07:15.741635 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:07:15.741744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:07:15.751250 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:07:15.751420 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:07:15.756964 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:07:15.757214 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:07:15.757382 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:07:15.761553 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:07:15.762699 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:07:15.765562 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:07:15.765624 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:07:15.767043 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:07:15.770409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:07:15.771579 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:07:15.772086 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:07:15.772135 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:07:15.775188 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:07:15.775260 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 15 05:07:15.775751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:07:15.775793 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:07:15.780721 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:07:15.782848 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:07:15.782928 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:07:15.803204 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:07:15.803400 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:07:15.805250 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:07:15.805452 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:07:15.806945 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:07:15.807000 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:07:15.808798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:07:15.808842 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:07:15.811493 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:07:15.811556 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:07:15.812208 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:07:15.812275 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:07:15.812985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:07:15.813041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:07:15.821991 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:07:15.822633 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:07:15.822694 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:07:15.827415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:07:15.827481 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:07:15.830973 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:07:15.831035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:15.836122 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 05:07:15.836194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 05:07:15.836268 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:07:15.858290 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:07:15.858457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:07:15.859323 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 05:07:15.863981 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 05:07:15.890419 systemd[1]: Switching root. Jul 15 05:07:15.922118 systemd-journald[219]: Journal stopped Jul 15 05:07:17.373321 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). 
Jul 15 05:07:17.373392 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:07:17.373411 kernel: SELinux: policy capability open_perms=1 Jul 15 05:07:17.373423 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:07:17.373441 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:07:17.373452 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:07:17.373463 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:07:17.373479 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:07:17.373490 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:07:17.373501 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:07:17.373512 kernel: audit: type=1403 audit(1752556036.549:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:07:17.373529 systemd[1]: Successfully loaded SELinux policy in 67.089ms. Jul 15 05:07:17.373554 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.107ms. Jul 15 05:07:17.373573 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:07:17.373586 systemd[1]: Detected virtualization kvm. Jul 15 05:07:17.373604 systemd[1]: Detected architecture x86-64. Jul 15 05:07:17.373616 systemd[1]: Detected first boot. Jul 15 05:07:17.373629 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:07:17.373646 zram_generator::config[1126]: No configuration found. Jul 15 05:07:17.373660 kernel: Guest personality initialized and is inactive Jul 15 05:07:17.373675 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 05:07:17.373696 kernel: Initialized host personality Jul 15 05:07:17.373712 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:07:17.373728 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:07:17.373746 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:07:17.373761 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:07:17.373774 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:07:17.373786 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:07:17.373799 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:07:17.373818 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:07:17.373831 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:07:17.373842 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:07:17.373864 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:07:17.373876 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:07:17.373890 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 05:07:17.373902 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 05:07:17.373918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
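Editorial note: "Initializing machine ID from VM UUID" above refers to deriving the machine ID from the UUID the hypervisor exposes. The sketch below illustrates the general idea only; the DMI path and the simple strip/lowercase normalization are assumptions, and systemd's own derivation handles more cases.

# Sketch: turn the hypervisor-provided product UUID into a 32-hex-digit
# machine-id-shaped string (illustrative, not systemd's exact logic).
from pathlib import Path

DMI_UUID = Path("/sys/class/dmi/id/product_uuid")   # assumed source on this VM

def machine_id_from_vm_uuid() -> str:
    uuid = DMI_UUID.read_text().strip()
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())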
Jul 15 05:07:17.373931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:07:17.373948 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:07:17.373960 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:07:17.373973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:07:17.373986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:07:17.373998 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:07:17.374010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:07:17.374023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:07:17.374035 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:07:17.374052 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:07:17.374064 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:07:17.374076 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:07:17.374088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:07:17.374100 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:07:17.374112 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:07:17.374124 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:07:17.374136 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:07:17.374149 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:07:17.374170 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:07:17.374192 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:07:17.374215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:07:17.374247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:07:17.374263 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:07:17.374276 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:07:17.374288 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:07:17.374300 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:07:17.374313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.374331 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:07:17.374345 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:07:17.374361 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:07:17.374377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:07:17.374392 systemd[1]: Reached target machines.target - Containers. Jul 15 05:07:17.374409 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 15 05:07:17.374423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:17.374435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:07:17.374454 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:07:17.374467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:07:17.374479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:07:17.374492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:07:17.374504 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:07:17.374516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:07:17.374533 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:07:17.374545 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:07:17.374557 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:07:17.374573 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:07:17.374586 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:07:17.374598 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:17.374611 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:07:17.374623 kernel: fuse: init (API version 7.41) Jul 15 05:07:17.374634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:07:17.374646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:07:17.374659 kernel: loop: module loaded Jul 15 05:07:17.374710 systemd-journald[1194]: Collecting audit messages is disabled. Jul 15 05:07:17.374755 systemd-journald[1194]: Journal started Jul 15 05:07:17.374778 systemd-journald[1194]: Runtime Journal (/run/log/journal/4556c330b8404f56933eff9a810a8fc6) is 6M, max 48.6M, 42.5M free. Jul 15 05:07:17.122398 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:07:17.151134 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 05:07:17.151905 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:07:17.379261 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:07:17.382249 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:07:17.388579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:07:17.388697 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:07:17.389550 systemd[1]: Stopped verity-setup.service. Jul 15 05:07:17.394255 kernel: ACPI: bus type drm_connector registered Jul 15 05:07:17.394338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:17.402796 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:07:17.404817 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 15 05:07:17.406355 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:07:17.407890 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:07:17.409247 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:07:17.410677 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:07:17.412203 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:07:17.413745 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 05:07:17.415687 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:07:17.417374 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 05:07:17.417690 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:07:17.419442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:07:17.419727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:07:17.421395 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:07:17.421843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:07:17.423628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:07:17.423898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:07:17.425619 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:07:17.425956 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:07:17.427472 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:07:17.427741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:07:17.429288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:07:17.431127 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:07:17.432832 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:07:17.434654 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:07:17.450521 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:07:17.453700 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:07:17.456305 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:07:17.457569 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:07:17.457660 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:07:17.459934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:07:17.466423 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:07:17.467760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:17.472155 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:07:17.476058 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 05:07:17.478434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 15 05:07:17.480013 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 05:07:17.481568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:07:17.485421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:07:17.491580 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 05:07:17.495081 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:07:17.498028 systemd-journald[1194]: Time spent on flushing to /var/log/journal/4556c330b8404f56933eff9a810a8fc6 is 25.499ms for 979 entries. Jul 15 05:07:17.498028 systemd-journald[1194]: System Journal (/var/log/journal/4556c330b8404f56933eff9a810a8fc6) is 8M, max 195.6M, 187.6M free. Jul 15 05:07:17.598136 systemd-journald[1194]: Received client request to flush runtime journal. Jul 15 05:07:17.499754 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:07:17.501413 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:07:17.512145 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:07:17.518261 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:07:17.526103 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:07:17.528112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:07:17.599508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:07:17.602268 kernel: loop0: detected capacity change from 0 to 229808 Jul 15 05:07:17.605938 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:07:17.626361 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:07:17.638768 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:07:17.643815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:07:17.690620 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:07:17.707321 kernel: loop1: detected capacity change from 0 to 114000 Jul 15 05:07:17.713873 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jul 15 05:07:17.713894 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jul 15 05:07:17.721740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:07:17.801279 kernel: loop2: detected capacity change from 0 to 146488 Jul 15 05:07:17.881463 kernel: loop3: detected capacity change from 0 to 229808 Jul 15 05:07:17.919273 kernel: loop4: detected capacity change from 0 to 114000 Jul 15 05:07:17.936208 kernel: loop5: detected capacity change from 0 to 146488 Jul 15 05:07:18.097965 (sd-merge)[1268]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 05:07:18.098888 (sd-merge)[1268]: Merged extensions into '/usr'. Jul 15 05:07:18.110877 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:07:18.110900 systemd[1]: Reloading... Jul 15 05:07:18.186255 zram_generator::config[1291]: No configuration found. Jul 15 05:07:18.324137 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
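Editorial note: the (sd-merge) lines above report the sysext images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') merged into /usr. A minimal sketch of collecting such candidate images; the search directories are assumptions based on the /etc/extensions symlink written earlier plus the commonly used /var/lib/extensions location, and the real lookup logic is more involved.

# Sketch: list *.raw sysext images in the assumed extension directories.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/var/lib/extensions"]   # assumed locations

def candidate_extensions() -> list[str]:
    found: list[str] = []
    for d in SEARCH_DIRS:
        base = Path(d)
        if base.is_dir():
            found.extend(sorted(p.name for p in base.glob("*.raw")))
    return found

if __name__ == "__main__":
    print(candidate_extensions())    # e.g. ['kubernetes.raw'] on the host above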
Jul 15 05:07:18.395399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:07:18.490364 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 05:07:18.491050 systemd[1]: Reloading finished in 378 ms. Jul 15 05:07:18.529264 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:07:18.530924 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:07:18.547964 systemd[1]: Starting ensure-sysext.service... Jul 15 05:07:18.550221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:07:18.652453 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:07:18.652648 systemd[1]: Reloading... Jul 15 05:07:18.660903 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:07:18.660962 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:07:18.661371 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:07:18.661674 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:07:18.662646 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:07:18.662941 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Jul 15 05:07:18.663014 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Jul 15 05:07:18.667918 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:07:18.667930 systemd-tmpfiles[1332]: Skipping /boot Jul 15 05:07:18.734664 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:07:18.734683 systemd-tmpfiles[1332]: Skipping /boot Jul 15 05:07:18.737257 zram_generator::config[1359]: No configuration found. Jul 15 05:07:18.871437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:07:18.960327 systemd[1]: Reloading finished in 307 ms. Jul 15 05:07:18.980515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:07:19.006657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:07:19.016481 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:07:19.018904 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:07:19.021388 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:07:19.037117 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:07:19.041244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 05:07:19.044986 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:07:19.049426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
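Editorial note: the "Duplicate line for path ..." warnings above mean two tmpfiles.d lines claim the same path and the later one is ignored. A simplified scanner for that situation follows; it is not systemd-tmpfiles itself and skips the precedence rules between /etc, /run, and /usr/lib fragments.

# Sketch: report paths claimed by more than one line across tmpfiles.d fragments.
# tmpfiles.d lines have the form "Type Path Mode User Group Age Argument",
# so the path is the second whitespace-separated field.
from pathlib import Path

def duplicate_tmpfiles_paths(fragment_dir: str = "/usr/lib/tmpfiles.d") -> dict[str, list[str]]:
    seen: dict[str, list[str]] = {}
    for conf in sorted(Path(fragment_dir).glob("*.conf")):
        for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) < 2:
                continue
            seen.setdefault(fields[1], []).append(f"{conf.name}:{lineno}")
    return {path: locs for path, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    for path, locations in duplicate_tmpfiles_paths().items():
        print(f"{path} claimed by {', '.join(locations)}")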
Jul 15 05:07:19.049604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:19.052635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:07:19.055469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:07:19.058003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:07:19.059432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:19.059538 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:19.061666 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:07:19.062766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:19.069318 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:19.069533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:19.069741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:19.069883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:19.070015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:19.076012 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:07:19.078548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:07:19.078756 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:07:19.080411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:07:19.080612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:07:19.082462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:07:19.084763 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:07:19.085049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:07:19.101589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:19.101927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:07:19.103048 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Jul 15 05:07:19.103340 augenrules[1432]: No rules Jul 15 05:07:19.104421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:07:19.108702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:07:19.124465 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 15 05:07:19.127751 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:07:19.129395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:07:19.130620 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:07:19.134457 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:07:19.135889 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:07:19.138401 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:07:19.140306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:07:19.142645 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:07:19.142992 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:07:19.145084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:07:19.145372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:07:19.147575 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:07:19.149606 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:07:19.149875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:07:19.152000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:07:19.152439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:07:19.154490 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:07:19.154747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:07:19.162786 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:07:19.164834 systemd[1]: Finished ensure-sysext.service. Jul 15 05:07:19.181366 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:07:19.182933 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:07:19.183023 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:07:19.187621 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 05:07:19.188947 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:07:19.242936 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:07:19.321278 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:07:19.331173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:07:19.335510 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 15 05:07:19.348260 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 15 05:07:19.354276 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:07:19.368549 systemd-resolved[1401]: Positive Trust Anchors: Jul 15 05:07:19.368568 systemd-resolved[1401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:07:19.368599 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:07:19.371062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:07:19.373965 systemd-resolved[1401]: Defaulting to hostname 'linux'. Jul 15 05:07:19.376636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:07:19.377975 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:07:19.387870 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 05:07:19.388206 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 05:07:19.428456 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 05:07:19.429843 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:07:19.431284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:07:19.433761 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:07:19.435022 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:07:19.437021 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:07:19.438443 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:07:19.438491 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:07:19.439520 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:07:19.440966 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:07:19.494592 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:07:19.495900 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:07:19.498479 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:07:19.501814 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:07:19.508782 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:07:19.512359 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:07:19.513950 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:07:19.524575 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:07:19.526686 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
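Editorial note: the negative trust anchors listed by systemd-resolved above mark zones (home.arpa, the RFC 1918 reverse zones, local, internal, lan, ...) whose names are exempt from DNSSEC validation. The sketch below shows the spirit of that check with plain label-suffix matching on a small subset of the logged anchors; resolved's real handling is more nuanced.

# Sketch: does a name fall under one of the negative trust anchor zones?
NEGATIVE_ANCHORS = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
                    "local", "internal", "lan"}       # subset of the list above

def under_negative_anchor(name: str) -> bool:
    labels = name.rstrip(".").lower().split(".")
    # Test every suffix of the name against the anchor set.
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

if __name__ == "__main__":
    print(under_negative_anchor("printer.lan"))             # True
    print(under_negative_anchor("21.0.0.10.in-addr.arpa"))  # True
    print(under_negative_anchor("example.com"))             # False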
Jul 15 05:07:19.529183 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:07:19.532119 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:07:19.534418 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:07:19.535420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:07:19.535447 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:07:19.537462 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:07:19.545024 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:07:19.553606 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:07:19.557271 systemd-networkd[1481]: lo: Link UP Jul 15 05:07:19.557279 systemd-networkd[1481]: lo: Gained carrier Jul 15 05:07:19.561439 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:07:19.561723 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:07:19.564383 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:07:19.566660 systemd-networkd[1481]: Enumeration completed Jul 15 05:07:19.568450 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:07:19.571641 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:07:19.572408 systemd-networkd[1481]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:19.572421 systemd-networkd[1481]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 05:07:19.573412 systemd-networkd[1481]: eth0: Link UP Jul 15 05:07:19.573666 systemd-networkd[1481]: eth0: Gained carrier Jul 15 05:07:19.573688 systemd-networkd[1481]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:07:19.575357 jq[1516]: false Jul 15 05:07:19.577466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:07:19.580529 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:07:19.586689 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:07:19.588392 systemd-networkd[1481]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:07:19.588641 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:07:19.592084 systemd-timesyncd[1483]: Network configuration changed, trying to establish connection. Jul 15 05:07:19.592921 systemd-timesyncd[1483]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 05:07:19.593004 systemd-timesyncd[1483]: Initial clock synchronization to Tue 2025-07-15 05:07:19.839478 UTC. Jul 15 05:07:19.593595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:07:19.594450 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:07:19.597708 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 15 05:07:19.599851 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:07:19.602146 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:07:19.605064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:07:19.605508 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:07:19.615907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:07:19.618593 extend-filesystems[1517]: Found /dev/vda6 Jul 15 05:07:19.617327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:07:19.631329 extend-filesystems[1517]: Found /dev/vda9 Jul 15 05:07:19.633700 systemd[1]: Reached target network.target - Network. Jul 15 05:07:19.634937 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing passwd entry cache Jul 15 05:07:19.637309 oslogin_cache_refresh[1518]: Refreshing passwd entry cache Jul 15 05:07:19.637590 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:07:19.642106 update_engine[1527]: I20250715 05:07:19.640998 1527 main.cc:92] Flatcar Update Engine starting Jul 15 05:07:19.641815 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:07:19.643872 extend-filesystems[1517]: Checking size of /dev/vda9 Jul 15 05:07:19.653929 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting users, quitting Jul 15 05:07:19.653929 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:07:19.653929 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing group entry cache Jul 15 05:07:19.651718 oslogin_cache_refresh[1518]: Failure getting users, quitting Jul 15 05:07:19.651742 oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:07:19.651820 oslogin_cache_refresh[1518]: Refreshing group entry cache Jul 15 05:07:19.662674 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting groups, quitting Jul 15 05:07:19.662674 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:07:19.662330 oslogin_cache_refresh[1518]: Failure getting groups, quitting Jul 15 05:07:19.662348 oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:07:19.755123 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 05:07:19.758165 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:07:19.758592 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:07:19.761075 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:07:19.784553 tar[1533]: linux-amd64/LICENSE Jul 15 05:07:19.784553 tar[1533]: linux-amd64/helm Jul 15 05:07:19.771834 dbus-daemon[1514]: [system] SELinux support is enabled Jul 15 05:07:19.761413 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:07:19.773112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:07:19.774801 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 15 05:07:19.780578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:07:19.780700 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:07:19.782391 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:07:19.782574 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:07:19.788269 update_engine[1527]: I20250715 05:07:19.786353 1527 update_check_scheduler.cc:74] Next update check in 2m47s Jul 15 05:07:19.789097 jq[1529]: true Jul 15 05:07:19.800996 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:07:19.810033 extend-filesystems[1517]: Resized partition /dev/vda9 Jul 15 05:07:19.813961 extend-filesystems[1567]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:07:19.876878 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:07:19.879455 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:07:19.880482 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:07:19.885116 jq[1562]: true Jul 15 05:07:19.887246 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 05:07:19.918259 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 05:07:20.036692 kernel: kvm_amd: TSC scaling supported Jul 15 05:07:20.036730 kernel: kvm_amd: Nested Virtualization enabled Jul 15 05:07:20.036745 kernel: kvm_amd: Nested Paging enabled Jul 15 05:07:20.036759 kernel: kvm_amd: LBR virtualization supported Jul 15 05:07:20.036772 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 15 05:07:20.036785 kernel: kvm_amd: Virtual GIF supported Jul 15 05:07:20.035429 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:07:20.036903 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 05:07:20.036903 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:07:20.036903 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 05:07:20.035741 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 05:07:20.037917 extend-filesystems[1517]: Resized filesystem in /dev/vda9 Jul 15 05:07:20.039869 systemd-logind[1524]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:07:20.040206 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:07:20.041142 systemd-logind[1524]: New seat seat0. Jul 15 05:07:20.113583 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:07:20.122130 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:07:20.129318 kernel: EDAC MC: Ver: 3.0.0 Jul 15 05:07:20.144623 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:07:20.270099 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:07:20.272054 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
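The extend-filesystems and EXT4-fs lines above record the root partition /dev/vda9 being grown online from 553472 to 1864699 blocks at the reported 4 KiB block size. A quick back-of-the-envelope conversion, using only the figures printed in the log, puts that at roughly 2.1 GiB before and 7.1 GiB after the resize:

    # Figures taken from the resize2fs/EXT4-fs entries above; 4 KiB block size
    # as reported ("(4k) blocks").
    BLOCK_SIZE = 4096
    for label, blocks in (("before", 553_472), ("after", 1_864_699)):
        print(f"{label}: {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB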
Jul 15 05:07:20.274337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:07:20.276406 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:07:20.290109 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:07:20.291962 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 05:07:20.318501 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:07:20.318993 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:07:20.320824 containerd[1563]: time="2025-07-15T05:07:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:07:20.321769 containerd[1563]: time="2025-07-15T05:07:20.321677932Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:07:20.324324 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:07:20.360078 containerd[1563]: time="2025-07-15T05:07:20.360018305Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.867µs" Jul 15 05:07:20.360078 containerd[1563]: time="2025-07-15T05:07:20.360070284Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:07:20.360176 containerd[1563]: time="2025-07-15T05:07:20.360097311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360359181Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360382562Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360419948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360504285Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360519001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360888380Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360906330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360921542Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.360931984Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361282 containerd[1563]: time="2025-07-15T05:07:20.361064415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361510 containerd[1563]: time="2025-07-15T05:07:20.361382414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361510 containerd[1563]: time="2025-07-15T05:07:20.361425863Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:07:20.361510 containerd[1563]: time="2025-07-15T05:07:20.361437792Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:07:20.361510 containerd[1563]: time="2025-07-15T05:07:20.361491413Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:07:20.361890 containerd[1563]: time="2025-07-15T05:07:20.361860927Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:07:20.361965 containerd[1563]: time="2025-07-15T05:07:20.361941917Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369097320Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369172422Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369190113Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369272580Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369436790Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369454326Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369487085Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369506935Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369522427Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369536235Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369547327Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369581026Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jul 15 05:07:20.369984 containerd[1563]: time="2025-07-15T05:07:20.369962468Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370031829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370056636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370070227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370103988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370119232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370135602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370148800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370189305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370205458Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370219266Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370397780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:07:20.370439 containerd[1563]: time="2025-07-15T05:07:20.370444894Z" level=info msg="Start snapshots syncer" Jul 15 05:07:20.370766 containerd[1563]: time="2025-07-15T05:07:20.370516083Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:07:20.372614 containerd[1563]: time="2025-07-15T05:07:20.372544056Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:07:20.372889 containerd[1563]: time="2025-07-15T05:07:20.372652537Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:07:20.372889 containerd[1563]: time="2025-07-15T05:07:20.372841172Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:07:20.373109 containerd[1563]: time="2025-07-15T05:07:20.373070870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:07:20.373151 containerd[1563]: time="2025-07-15T05:07:20.373128952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:07:20.373199 containerd[1563]: time="2025-07-15T05:07:20.373147016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:07:20.373199 containerd[1563]: time="2025-07-15T05:07:20.373168343Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:07:20.373199 containerd[1563]: time="2025-07-15T05:07:20.373196981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:07:20.373300 containerd[1563]: time="2025-07-15T05:07:20.373218153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:07:20.373300 containerd[1563]: time="2025-07-15T05:07:20.373233696Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:07:20.373355 containerd[1563]: time="2025-07-15T05:07:20.373314593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:07:20.373355 containerd[1563]: 
time="2025-07-15T05:07:20.373339142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:07:20.373427 containerd[1563]: time="2025-07-15T05:07:20.373359735Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:07:20.373427 containerd[1563]: time="2025-07-15T05:07:20.373408451Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:07:20.373480 containerd[1563]: time="2025-07-15T05:07:20.373428620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:07:20.373583 containerd[1563]: time="2025-07-15T05:07:20.373446012Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:07:20.373583 containerd[1563]: time="2025-07-15T05:07:20.373554133Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:07:20.373583 containerd[1563]: time="2025-07-15T05:07:20.373575604Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:07:20.373687 containerd[1563]: time="2025-07-15T05:07:20.373590486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:07:20.373687 containerd[1563]: time="2025-07-15T05:07:20.373611988Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:07:20.373687 containerd[1563]: time="2025-07-15T05:07:20.373662759Z" level=info msg="runtime interface created" Jul 15 05:07:20.373687 containerd[1563]: time="2025-07-15T05:07:20.373672116Z" level=info msg="created NRI interface" Jul 15 05:07:20.373798 containerd[1563]: time="2025-07-15T05:07:20.373691480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:07:20.373798 containerd[1563]: time="2025-07-15T05:07:20.373714677Z" level=info msg="Connect containerd service" Jul 15 05:07:20.373798 containerd[1563]: time="2025-07-15T05:07:20.373753199Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:07:20.375721 containerd[1563]: time="2025-07-15T05:07:20.375695121Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:07:20.388543 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:07:20.394022 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:07:20.399948 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:07:20.401588 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:07:20.632122 containerd[1563]: time="2025-07-15T05:07:20.631962742Z" level=info msg="Start subscribing containerd event" Jul 15 05:07:20.632230 containerd[1563]: time="2025-07-15T05:07:20.632106328Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:07:20.632230 containerd[1563]: time="2025-07-15T05:07:20.632183341Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632347128Z" level=info msg="Start recovering state" Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632592307Z" level=info msg="Start event monitor" Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632620863Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632631749Z" level=info msg="Start streaming server" Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632642726Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632652899Z" level=info msg="runtime interface starting up..." Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632661295Z" level=info msg="starting plugins..." Jul 15 05:07:20.632896 containerd[1563]: time="2025-07-15T05:07:20.632694086Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:07:20.634372 containerd[1563]: time="2025-07-15T05:07:20.633789635Z" level=info msg="containerd successfully booted in 0.313801s" Jul 15 05:07:20.633013 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:07:20.645962 tar[1533]: linux-amd64/README.md Jul 15 05:07:20.674776 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:07:21.195057 systemd-networkd[1481]: eth0: Gained IPv6LL Jul 15 05:07:21.198620 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:07:21.200829 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:07:21.203834 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 05:07:21.206501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:21.209058 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:07:21.255905 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:07:21.274929 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 05:07:21.275238 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 05:07:21.277013 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:07:22.652705 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:07:22.655921 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:55624.service - OpenSSH per-connection server daemon (10.0.0.1:55624). Jul 15 05:07:22.778795 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 55624 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:22.781126 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:22.793862 systemd-logind[1524]: New session 1 of user core. Jul 15 05:07:22.795330 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:07:22.798007 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:07:22.846585 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:07:22.851854 systemd[1]: Starting user@500.service - User Manager for UID 500... 
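A few entries above, containerd warns that it "failed to load cni during init" because no network config exists in /etc/cni/net.d; that is expected on a node where no CNI plugin has been installed yet, and it is why pod networking cannot be set up at this point. The check below is only a minimal sketch of that condition, not containerd's actual implementation; the helper name and the extension list are assumptions.

    from pathlib import Path

    def has_cni_config(conf_dir="/etc/cni/net.d"):
        """Rough equivalent of the condition behind containerd's warning above."""
        p = Path(conf_dir)
        exts = ("*.conf", "*.conflist", "*.json")
        return p.is_dir() and any(f for ext in exts for f in p.glob(ext))

    print(has_cni_config())   # False on this node until a CNI plugin drops its config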
Jul 15 05:07:22.879599 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:07:22.882982 systemd-logind[1524]: New session c1 of user core. Jul 15 05:07:23.044297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:23.046192 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:07:23.061680 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:23.160534 systemd[1663]: Queued start job for default target default.target. Jul 15 05:07:23.185048 systemd[1663]: Created slice app.slice - User Application Slice. Jul 15 05:07:23.185088 systemd[1663]: Reached target paths.target - Paths. Jul 15 05:07:23.185138 systemd[1663]: Reached target timers.target - Timers. Jul 15 05:07:23.187016 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:07:23.201772 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:07:23.201919 systemd[1663]: Reached target sockets.target - Sockets. Jul 15 05:07:23.201974 systemd[1663]: Reached target basic.target - Basic System. Jul 15 05:07:23.202014 systemd[1663]: Reached target default.target - Main User Target. Jul 15 05:07:23.202048 systemd[1663]: Startup finished in 305ms. Jul 15 05:07:23.202595 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:07:23.213465 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:07:23.215117 systemd[1]: Startup finished in 3.870s (kernel) + 7.865s (initrd) + 6.731s (userspace) = 18.466s. Jul 15 05:07:23.343987 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:55632.service - OpenSSH per-connection server daemon (10.0.0.1:55632). Jul 15 05:07:23.442613 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 55632 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:23.444638 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:23.449967 systemd-logind[1524]: New session 2 of user core. Jul 15 05:07:23.466425 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:07:23.524636 sshd[1692]: Connection closed by 10.0.0.1 port 55632 Jul 15 05:07:23.526982 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:23.535850 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:55632.service: Deactivated successfully. Jul 15 05:07:23.538112 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:07:23.539040 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:07:23.542530 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:55646.service - OpenSSH per-connection server daemon (10.0.0.1:55646). Jul 15 05:07:23.543236 systemd-logind[1524]: Removed session 2. Jul 15 05:07:23.608962 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 55646 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:23.611050 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:23.616624 systemd-logind[1524]: New session 3 of user core. Jul 15 05:07:23.625507 systemd[1]: Started session-3.scope - Session 3 of User core. 
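The "Startup finished" entry above breaks boot time into kernel, initrd and userspace phases; the printed total is simply their sum, as the small check below confirms (values copied from the log):

    # Phase durations exactly as reported by systemd above.
    phases = {"kernel": 3.870, "initrd": 7.865, "userspace": 6.731}
    print(f"total: {sum(phases.values()):.3f}s")   # total: 18.466s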
Jul 15 05:07:23.678290 sshd[1703]: Connection closed by 10.0.0.1 port 55646 Jul 15 05:07:23.679002 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:23.688177 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:55646.service: Deactivated successfully. Jul 15 05:07:23.690270 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:07:23.690997 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:07:23.694215 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:55654.service - OpenSSH per-connection server daemon (10.0.0.1:55654). Jul 15 05:07:23.695410 systemd-logind[1524]: Removed session 3. Jul 15 05:07:23.833136 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 55654 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:23.835325 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:23.842336 systemd-logind[1524]: New session 4 of user core. Jul 15 05:07:23.890744 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:07:23.955766 sshd[1712]: Connection closed by 10.0.0.1 port 55654 Jul 15 05:07:23.956156 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:23.965931 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:55654.service: Deactivated successfully. Jul 15 05:07:23.968127 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:07:23.969017 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:07:23.972408 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:55662.service - OpenSSH per-connection server daemon (10.0.0.1:55662). Jul 15 05:07:23.973599 systemd-logind[1524]: Removed session 4. Jul 15 05:07:23.986270 kubelet[1674]: E0715 05:07:23.986182 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:23.991142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:23.991383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:23.991827 systemd[1]: kubelet.service: Consumed 2.379s CPU time, 268.7M memory peak. Jul 15 05:07:24.034388 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 55662 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:24.036667 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:24.041670 systemd-logind[1524]: New session 5 of user core. Jul 15 05:07:24.054600 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:07:24.117960 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:07:24.118312 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:24.146196 sudo[1723]: pam_unix(sudo:session): session closed for user root Jul 15 05:07:24.148029 sshd[1722]: Connection closed by 10.0.0.1 port 55662 Jul 15 05:07:24.148520 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:24.159309 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:55662.service: Deactivated successfully. Jul 15 05:07:24.161470 systemd[1]: session-5.scope: Deactivated successfully. 
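The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node before kubeadm init or kubeadm join has written that file; systemd records the unit as failed and keeps scheduling restarts, as the restart counter later in this log shows. A minimal sketch of the same pre-condition check, purely illustrative:

    from pathlib import Path

    # Path taken from the kubelet error above; the file is typically written by kubeadm.
    cfg = Path("/var/lib/kubelet/config.yaml")
    print("kubelet config present:", cfg.is_file())   # False until kubeadm creates it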
Jul 15 05:07:24.162372 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:07:24.165031 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:55674.service - OpenSSH per-connection server daemon (10.0.0.1:55674). Jul 15 05:07:24.165787 systemd-logind[1524]: Removed session 5. Jul 15 05:07:24.237259 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 55674 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:24.238916 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:24.244516 systemd-logind[1524]: New session 6 of user core. Jul 15 05:07:24.254572 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:07:24.313604 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:07:24.313978 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:24.326326 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 15 05:07:24.333949 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:07:24.334326 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:24.345313 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:07:24.392534 augenrules[1756]: No rules Jul 15 05:07:24.394221 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:07:24.394566 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:07:24.396035 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 15 05:07:24.397963 sshd[1732]: Connection closed by 10.0.0.1 port 55674 Jul 15 05:07:24.398284 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 15 05:07:24.410673 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:55674.service: Deactivated successfully. Jul 15 05:07:24.412735 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:07:24.413669 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:07:24.417031 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:55676.service - OpenSSH per-connection server daemon (10.0.0.1:55676). Jul 15 05:07:24.417787 systemd-logind[1524]: Removed session 6. Jul 15 05:07:24.478803 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 55676 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:07:24.480550 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:07:24.485382 systemd-logind[1524]: New session 7 of user core. Jul 15 05:07:24.495366 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:07:24.551397 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:07:24.551767 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:07:25.381937 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 15 05:07:25.403884 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:07:25.918671 dockerd[1791]: time="2025-07-15T05:07:25.918595252Z" level=info msg="Starting up" Jul 15 05:07:25.919581 dockerd[1791]: time="2025-07-15T05:07:25.919560011Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:07:25.968219 dockerd[1791]: time="2025-07-15T05:07:25.968142376Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:07:28.571147 dockerd[1791]: time="2025-07-15T05:07:28.571058192Z" level=info msg="Loading containers: start." Jul 15 05:07:28.585266 kernel: Initializing XFRM netlink socket Jul 15 05:07:28.920418 systemd-networkd[1481]: docker0: Link UP Jul 15 05:07:28.927749 dockerd[1791]: time="2025-07-15T05:07:28.927687575Z" level=info msg="Loading containers: done." Jul 15 05:07:28.943198 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2664968621-merged.mount: Deactivated successfully. Jul 15 05:07:28.945422 dockerd[1791]: time="2025-07-15T05:07:28.945366230Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:07:28.945510 dockerd[1791]: time="2025-07-15T05:07:28.945492792Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:07:28.945637 dockerd[1791]: time="2025-07-15T05:07:28.945613966Z" level=info msg="Initializing buildkit" Jul 15 05:07:28.978349 dockerd[1791]: time="2025-07-15T05:07:28.978139255Z" level=info msg="Completed buildkit initialization" Jul 15 05:07:28.985059 dockerd[1791]: time="2025-07-15T05:07:28.984999059Z" level=info msg="Daemon has completed initialization" Jul 15 05:07:28.985254 dockerd[1791]: time="2025-07-15T05:07:28.985102526Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:07:28.985422 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:07:30.020499 containerd[1563]: time="2025-07-15T05:07:30.020422869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 15 05:07:30.721457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818020258.mount: Deactivated successfully. 
Jul 15 05:07:32.380787 containerd[1563]: time="2025-07-15T05:07:32.380706836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:32.381448 containerd[1563]: time="2025-07-15T05:07:32.381405571Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 15 05:07:32.382905 containerd[1563]: time="2025-07-15T05:07:32.382877155Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:32.387660 containerd[1563]: time="2025-07-15T05:07:32.387580625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:32.388645 containerd[1563]: time="2025-07-15T05:07:32.388599471Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.368097498s" Jul 15 05:07:32.388712 containerd[1563]: time="2025-07-15T05:07:32.388655754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 15 05:07:32.389645 containerd[1563]: time="2025-07-15T05:07:32.389616210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 15 05:07:34.242263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:07:34.246138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:35.003472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:35.007492 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:35.574279 kubelet[2077]: E0715 05:07:35.574190 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:35.580680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:35.580868 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:35.581223 systemd[1]: kubelet.service: Consumed 376ms CPU time, 111.5M memory peak. 
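Taking the reported image size and pull duration at face value, the kube-apiserver pull above (30,075,899 bytes in about 2.37 s) works out to roughly 12.7 MB/s. The sketch below ignores layer reuse and registry round-trips, so it is only a rough figure:

    # Size and duration copied from the containerd "Pulled image" entry above.
    size_bytes = 30_075_899
    duration_s = 2.368097498
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # ~12.7 MB/s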
Jul 15 05:07:36.252115 containerd[1563]: time="2025-07-15T05:07:36.252021885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:36.253256 containerd[1563]: time="2025-07-15T05:07:36.253190295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 15 05:07:36.255143 containerd[1563]: time="2025-07-15T05:07:36.255023982Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:36.258410 containerd[1563]: time="2025-07-15T05:07:36.258327684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:36.259761 containerd[1563]: time="2025-07-15T05:07:36.259656817Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 3.8700009s" Jul 15 05:07:36.259761 containerd[1563]: time="2025-07-15T05:07:36.259742568Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 15 05:07:36.260571 containerd[1563]: time="2025-07-15T05:07:36.260522282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 15 05:07:37.622222 containerd[1563]: time="2025-07-15T05:07:37.622102491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:37.623111 containerd[1563]: time="2025-07-15T05:07:37.622977505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 15 05:07:37.624772 containerd[1563]: time="2025-07-15T05:07:37.624728968Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:37.627754 containerd[1563]: time="2025-07-15T05:07:37.627689661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:37.629246 containerd[1563]: time="2025-07-15T05:07:37.629165344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.368592885s" Jul 15 05:07:37.629332 containerd[1563]: time="2025-07-15T05:07:37.629259931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 15 05:07:37.630432 
containerd[1563]: time="2025-07-15T05:07:37.630357185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 15 05:07:39.134477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735228506.mount: Deactivated successfully. Jul 15 05:07:40.363126 containerd[1563]: time="2025-07-15T05:07:40.363025924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:40.365306 containerd[1563]: time="2025-07-15T05:07:40.365260251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 15 05:07:40.367394 containerd[1563]: time="2025-07-15T05:07:40.367362671Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:40.369930 containerd[1563]: time="2025-07-15T05:07:40.369884749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:40.370653 containerd[1563]: time="2025-07-15T05:07:40.370564397Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.740137482s" Jul 15 05:07:40.370653 containerd[1563]: time="2025-07-15T05:07:40.370630090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 15 05:07:40.371361 containerd[1563]: time="2025-07-15T05:07:40.371296967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 05:07:40.914866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442834580.mount: Deactivated successfully. 
Jul 15 05:07:43.120857 containerd[1563]: time="2025-07-15T05:07:43.120772017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:43.222021 containerd[1563]: time="2025-07-15T05:07:43.221954437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 15 05:07:43.340525 containerd[1563]: time="2025-07-15T05:07:43.340434973Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:43.400881 containerd[1563]: time="2025-07-15T05:07:43.400689555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:43.402177 containerd[1563]: time="2025-07-15T05:07:43.402128723Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.030800668s" Jul 15 05:07:43.402177 containerd[1563]: time="2025-07-15T05:07:43.402165545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 15 05:07:43.402798 containerd[1563]: time="2025-07-15T05:07:43.402755371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:07:44.159711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847670843.mount: Deactivated successfully. 
Jul 15 05:07:44.174245 containerd[1563]: time="2025-07-15T05:07:44.174121793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:07:44.228183 containerd[1563]: time="2025-07-15T05:07:44.228029656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:07:44.321033 containerd[1563]: time="2025-07-15T05:07:44.320939919Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:07:44.324586 containerd[1563]: time="2025-07-15T05:07:44.324516031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:07:44.324965 containerd[1563]: time="2025-07-15T05:07:44.324930615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 922.143021ms" Jul 15 05:07:44.325023 containerd[1563]: time="2025-07-15T05:07:44.324969748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:07:44.325621 containerd[1563]: time="2025-07-15T05:07:44.325562347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 05:07:44.801981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913658664.mount: Deactivated successfully. Jul 15 05:07:45.742416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:07:45.745192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:46.037692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:46.055295 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:07:46.234755 kubelet[2213]: E0715 05:07:46.233504 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:07:46.239570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:07:46.239816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:07:46.240475 systemd[1]: kubelet.service: Consumed 384ms CPU time, 108.7M memory peak. 
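By this point kubelet.service has failed twice with the same missing-config error and systemd's restart counter is at 2; the pattern repeats until /var/lib/kubelet/config.yaml appears. A small, illustrative way to count those failures in a saved copy of this journal (the file name boot.log is hypothetical):

    import re

    # Matches the recurring kubelet error seen above.
    pattern = re.compile(r"failed to load kubelet config file.*config\.yaml")
    with open("boot.log", encoding="utf-8", errors="replace") as f:
        failures = sum(bool(pattern.search(line)) for line in f)
    print("kubelet config failures so far:", failures)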
Jul 15 05:07:47.465045 containerd[1563]: time="2025-07-15T05:07:47.464973552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:47.465662 containerd[1563]: time="2025-07-15T05:07:47.465621308Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 15 05:07:47.469065 containerd[1563]: time="2025-07-15T05:07:47.469012158Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:47.472581 containerd[1563]: time="2025-07-15T05:07:47.472529661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:07:47.473579 containerd[1563]: time="2025-07-15T05:07:47.473535377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.147942037s" Jul 15 05:07:47.473579 containerd[1563]: time="2025-07-15T05:07:47.473574182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 15 05:07:50.828819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:50.828985 systemd[1]: kubelet.service: Consumed 384ms CPU time, 108.7M memory peak. Jul 15 05:07:50.831173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:50.854290 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-7.scope)... Jul 15 05:07:50.854306 systemd[1]: Reloading... Jul 15 05:07:50.950452 zram_generator::config[2296]: No configuration found. Jul 15 05:07:51.414666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:07:51.546899 systemd[1]: Reloading finished in 692 ms. Jul 15 05:07:51.615954 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:07:51.616063 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:07:51.616402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:51.616446 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.2M memory peak. Jul 15 05:07:51.617905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:07:51.812946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:07:51.830813 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:07:51.879498 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:07:51.879498 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 15 05:07:51.879498 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:07:51.879947 kubelet[2346]: I0715 05:07:51.879575 2346 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:07:52.151625 kubelet[2346]: I0715 05:07:52.151485 2346 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 05:07:52.151625 kubelet[2346]: I0715 05:07:52.151519 2346 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:07:52.151864 kubelet[2346]: I0715 05:07:52.151834 2346 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 05:07:52.183616 kubelet[2346]: I0715 05:07:52.183548 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:07:52.192649 kubelet[2346]: E0715 05:07:52.192602 2346 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 05:07:52.202131 kubelet[2346]: I0715 05:07:52.202096 2346 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:07:52.208656 kubelet[2346]: I0715 05:07:52.208598 2346 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:07:52.208977 kubelet[2346]: I0715 05:07:52.208938 2346 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:07:52.209246 kubelet[2346]: I0715 05:07:52.208968 2346 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:07:52.209423 kubelet[2346]: I0715 05:07:52.209255 2346 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:07:52.209423 kubelet[2346]: I0715 05:07:52.209268 2346 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 05:07:52.209487 kubelet[2346]: I0715 05:07:52.209456 2346 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:07:52.214422 kubelet[2346]: I0715 05:07:52.214381 2346 kubelet.go:480] "Attempting to sync node with API server" Jul 15 05:07:52.214422 kubelet[2346]: I0715 05:07:52.214409 2346 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:07:52.214515 kubelet[2346]: I0715 05:07:52.214454 2346 kubelet.go:386] "Adding apiserver pod source" Jul 15 05:07:52.214515 kubelet[2346]: I0715 05:07:52.214476 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:07:52.222084 kubelet[2346]: I0715 05:07:52.221885 2346 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:07:52.222190 kubelet[2346]: E0715 05:07:52.222147 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 05:07:52.222248 kubelet[2346]: E0715 05:07:52.222213 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 05:07:52.222582 kubelet[2346]: I0715 05:07:52.222551 2346 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 05:07:52.223263 kubelet[2346]: W0715 05:07:52.223217 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 05:07:52.227935 kubelet[2346]: I0715 05:07:52.227898 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:07:52.228090 kubelet[2346]: I0715 05:07:52.227952 2346 server.go:1289] "Started kubelet" Jul 15 05:07:52.228833 kubelet[2346]: I0715 05:07:52.228775 2346 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:07:52.230596 kubelet[2346]: I0715 05:07:52.230074 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:07:52.231288 kubelet[2346]: I0715 05:07:52.231271 2346 server.go:317] "Adding debug handlers to kubelet server" Jul 15 05:07:52.232760 kubelet[2346]: I0715 05:07:52.232658 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:07:52.233110 kubelet[2346]: I0715 05:07:52.233089 2346 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:07:52.233859 kubelet[2346]: I0715 05:07:52.233737 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:07:52.236290 kubelet[2346]: E0715 05:07:52.236267 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.236366 kubelet[2346]: I0715 05:07:52.236299 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:07:52.236712 kubelet[2346]: I0715 05:07:52.236489 2346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:07:52.236712 kubelet[2346]: I0715 05:07:52.236543 2346 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:07:52.237352 kubelet[2346]: E0715 05:07:52.237308 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Jul 15 05:07:52.237552 kubelet[2346]: I0715 05:07:52.237524 2346 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:07:52.238205 kubelet[2346]: E0715 05:07:52.237427 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 05:07:52.238660 kubelet[2346]: E0715 05:07:52.236919 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": 
dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852547656195fda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:07:52.227921882 +0000 UTC m=+0.392084579,LastTimestamp:2025-07-15 05:07:52.227921882 +0000 UTC m=+0.392084579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:07:52.238660 kubelet[2346]: E0715 05:07:52.238632 2346 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:07:52.239496 kubelet[2346]: I0715 05:07:52.239476 2346 factory.go:223] Registration of the containerd container factory successfully Jul 15 05:07:52.239496 kubelet[2346]: I0715 05:07:52.239489 2346 factory.go:223] Registration of the systemd container factory successfully Jul 15 05:07:52.256507 kubelet[2346]: I0715 05:07:52.256411 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:07:52.256507 kubelet[2346]: I0715 05:07:52.256499 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:07:52.256507 kubelet[2346]: I0715 05:07:52.256520 2346 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:07:52.260560 kubelet[2346]: I0715 05:07:52.260467 2346 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 05:07:52.262128 kubelet[2346]: I0715 05:07:52.262105 2346 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 05:07:52.262275 kubelet[2346]: I0715 05:07:52.262263 2346 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 05:07:52.263075 kubelet[2346]: I0715 05:07:52.262457 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 05:07:52.263075 kubelet[2346]: I0715 05:07:52.262473 2346 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 05:07:52.263075 kubelet[2346]: E0715 05:07:52.262523 2346 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:07:52.263075 kubelet[2346]: E0715 05:07:52.263041 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 05:07:52.337091 kubelet[2346]: E0715 05:07:52.337039 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.363347 kubelet[2346]: E0715 05:07:52.363280 2346 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:07:52.437669 kubelet[2346]: E0715 05:07:52.437556 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.438034 kubelet[2346]: E0715 05:07:52.438006 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Jul 15 05:07:52.538472 kubelet[2346]: E0715 05:07:52.538411 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.563691 kubelet[2346]: E0715 05:07:52.563635 2346 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:07:52.639221 kubelet[2346]: E0715 05:07:52.639156 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.739489 kubelet[2346]: E0715 05:07:52.739315 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.838548 kubelet[2346]: E0715 05:07:52.838479 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Jul 15 05:07:52.839534 kubelet[2346]: E0715 05:07:52.839511 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.940641 kubelet[2346]: E0715 05:07:52.940552 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:52.964649 kubelet[2346]: E0715 05:07:52.964572 2346 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:07:53.041215 kubelet[2346]: E0715 05:07:53.041131 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.141838 kubelet[2346]: E0715 05:07:53.141745 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.242534 kubelet[2346]: E0715 05:07:53.242449 2346 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.329352 kubelet[2346]: E0715 05:07:53.329186 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 05:07:53.342804 kubelet[2346]: E0715 05:07:53.342756 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.352343 kubelet[2346]: E0715 05:07:53.352309 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 05:07:53.387071 kubelet[2346]: E0715 05:07:53.387018 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 05:07:53.443673 kubelet[2346]: E0715 05:07:53.443625 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.544266 kubelet[2346]: E0715 05:07:53.544201 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.639317 kubelet[2346]: E0715 05:07:53.639130 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Jul 15 05:07:53.644323 kubelet[2346]: E0715 05:07:53.644288 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.730976 kubelet[2346]: I0715 05:07:53.730912 2346 policy_none.go:49] "None policy: Start" Jul 15 05:07:53.730976 kubelet[2346]: I0715 05:07:53.730965 2346 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:07:53.730976 kubelet[2346]: I0715 05:07:53.730985 2346 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:07:53.744486 kubelet[2346]: E0715 05:07:53.744404 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:07:53.765775 kubelet[2346]: E0715 05:07:53.765707 2346 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:07:53.773464 kubelet[2346]: E0715 05:07:53.773426 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 05:07:53.784503 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
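The "Failed to ensure lease exists, will retry" entries above show the retry interval doubling: 200ms, 400ms, 800ms, 1.6s (and 3.2s further down). The accompanying connection-refused errors against https://10.0.0.21:6443 are expected at this stage: the kubelet has only just started and is itself about to launch the control plane from the static pod manifests under /etc/kubernetes/manifests (the "Adding static pod path" entry above), so nothing is listening on the API server port yet. A stdlib-only Go sketch of that capped doubling retry, purely illustrative rather than the kubelet's own implementation (the 7s cap is an assumption):

```go
// Sketch of the capped doubling retry visible in the lease entries above
// (200ms -> 400ms -> 800ms -> 1.6s -> 3.2s). The endpoint comes from the log;
// the 7s cap is an assumption, and this is not the kubelet's own code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiserver = "10.0.0.21:6443" // address the kubelet keeps dialling above
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second

	for {
		conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable, retries stop")
			return
		}
		fmt.Printf("dial failed (%v); retrying in %v\n", err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxInterval {
			interval = maxInterval
		}
	}
}
```

Once the static kube-apiserver container is started (the StartContainer entries further down), the dial succeeds, which is why node registration finally goes through at 05:07:57.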
Jul 15 05:07:53.804816 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:07:53.808528 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 05:07:53.823503 kubelet[2346]: E0715 05:07:53.823416 2346 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 05:07:53.823739 kubelet[2346]: I0715 05:07:53.823718 2346 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:07:53.823785 kubelet[2346]: I0715 05:07:53.823735 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:07:53.824139 kubelet[2346]: I0715 05:07:53.824113 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:07:53.825366 kubelet[2346]: E0715 05:07:53.825341 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 05:07:53.825445 kubelet[2346]: E0715 05:07:53.825410 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 05:07:53.926576 kubelet[2346]: I0715 05:07:53.926388 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:07:53.926988 kubelet[2346]: E0715 05:07:53.926912 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jul 15 05:07:54.128553 kubelet[2346]: I0715 05:07:54.128503 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:07:54.128986 kubelet[2346]: E0715 05:07:54.128876 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jul 15 05:07:54.277893 kubelet[2346]: E0715 05:07:54.277840 2346 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 05:07:54.530870 kubelet[2346]: I0715 05:07:54.530736 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:07:54.531158 kubelet[2346]: E0715 05:07:54.531093 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Jul 15 05:07:55.240684 kubelet[2346]: E0715 05:07:55.240642 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="3.2s" Jul 15 05:07:55.333254 kubelet[2346]: I0715 05:07:55.333175 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:07:55.333564 kubelet[2346]: E0715 05:07:55.333532 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 
10.0.0.21:6443: connect: connection refused" node="localhost" Jul 15 05:07:55.452652 kubelet[2346]: I0715 05:07:55.452596 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:55.452652 kubelet[2346]: I0715 05:07:55.452635 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:55.452652 kubelet[2346]: I0715 05:07:55.452653 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:55.452652 kubelet[2346]: I0715 05:07:55.452672 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:55.452898 kubelet[2346]: I0715 05:07:55.452687 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:55.463592 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 15 05:07:55.485180 kubelet[2346]: E0715 05:07:55.485136 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:55.510189 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 15 05:07:55.511999 kubelet[2346]: E0715 05:07:55.511979 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:55.553409 kubelet[2346]: I0715 05:07:55.553368 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:07:55.553555 kubelet[2346]: I0715 05:07:55.553408 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4329b442739d19012cecd665cdd43150-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4329b442739d19012cecd665cdd43150\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:07:55.553555 kubelet[2346]: I0715 05:07:55.553444 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4329b442739d19012cecd665cdd43150-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4329b442739d19012cecd665cdd43150\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:07:55.553632 kubelet[2346]: I0715 05:07:55.553569 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4329b442739d19012cecd665cdd43150-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4329b442739d19012cecd665cdd43150\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:07:55.583361 systemd[1]: Created slice kubepods-burstable-pod4329b442739d19012cecd665cdd43150.slice - libcontainer container kubepods-burstable-pod4329b442739d19012cecd665cdd43150.slice. 
Jul 15 05:07:55.585352 kubelet[2346]: E0715 05:07:55.585304 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:55.632983 kubelet[2346]: E0715 05:07:55.632931 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 05:07:55.786789 kubelet[2346]: E0715 05:07:55.786565 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:55.787640 containerd[1563]: time="2025-07-15T05:07:55.787579543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 15 05:07:55.791744 kubelet[2346]: E0715 05:07:55.791641 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 05:07:55.813517 kubelet[2346]: E0715 05:07:55.813456 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:55.814114 containerd[1563]: time="2025-07-15T05:07:55.814061798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 15 05:07:55.818857 containerd[1563]: time="2025-07-15T05:07:55.818814374Z" level=info msg="connecting to shim cbe25b9549c5709478eaf684fc227955397ccf17994fa8030dcfcacf717a2781" address="unix:///run/containerd/s/dc5647ee51ff0e2089bccd673c2887605d16bfb35a65156b16ea734561ae8481" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:07:55.889356 kubelet[2346]: E0715 05:07:55.889287 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:55.890070 containerd[1563]: time="2025-07-15T05:07:55.890029544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4329b442739d19012cecd665cdd43150,Namespace:kube-system,Attempt:0,}" Jul 15 05:07:55.900507 containerd[1563]: time="2025-07-15T05:07:55.900399735Z" level=info msg="connecting to shim efe661454022215b6ceb867761290a500de34e0b08f59f85cb73f91f2549ab1e" address="unix:///run/containerd/s/681bdfd9070c9c10d2e42d72bf2a9a1199a6cca98db503958eed0fa20e7b45df" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:07:55.942416 systemd[1]: Started cri-containerd-cbe25b9549c5709478eaf684fc227955397ccf17994fa8030dcfcacf717a2781.scope - libcontainer container cbe25b9549c5709478eaf684fc227955397ccf17994fa8030dcfcacf717a2781. 
Jul 15 05:07:55.943168 containerd[1563]: time="2025-07-15T05:07:55.943084157Z" level=info msg="connecting to shim d6030dc2c30bcbdb093384012cef2ecc9b5348ccb0ca07e27344e3f9b46a5b41" address="unix:///run/containerd/s/b0952d7d0b4d524036517db85f779d88751e82d4361a2e6fa5f44942c50389c5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:07:55.947153 systemd[1]: Started cri-containerd-efe661454022215b6ceb867761290a500de34e0b08f59f85cb73f91f2549ab1e.scope - libcontainer container efe661454022215b6ceb867761290a500de34e0b08f59f85cb73f91f2549ab1e. Jul 15 05:07:56.010410 systemd[1]: Started cri-containerd-d6030dc2c30bcbdb093384012cef2ecc9b5348ccb0ca07e27344e3f9b46a5b41.scope - libcontainer container d6030dc2c30bcbdb093384012cef2ecc9b5348ccb0ca07e27344e3f9b46a5b41. Jul 15 05:07:56.017969 containerd[1563]: time="2025-07-15T05:07:56.017915731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbe25b9549c5709478eaf684fc227955397ccf17994fa8030dcfcacf717a2781\"" Jul 15 05:07:56.019980 kubelet[2346]: E0715 05:07:56.019935 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:56.028269 containerd[1563]: time="2025-07-15T05:07:56.028207225Z" level=info msg="CreateContainer within sandbox \"cbe25b9549c5709478eaf684fc227955397ccf17994fa8030dcfcacf717a2781\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:07:56.039722 containerd[1563]: time="2025-07-15T05:07:56.039509984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"efe661454022215b6ceb867761290a500de34e0b08f59f85cb73f91f2549ab1e\"" Jul 15 05:07:56.040987 kubelet[2346]: E0715 05:07:56.040936 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:56.042573 containerd[1563]: time="2025-07-15T05:07:56.042515410Z" level=info msg="Container a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:07:56.047343 containerd[1563]: time="2025-07-15T05:07:56.047302065Z" level=info msg="CreateContainer within sandbox \"efe661454022215b6ceb867761290a500de34e0b08f59f85cb73f91f2549ab1e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:07:56.058422 containerd[1563]: time="2025-07-15T05:07:56.058358853Z" level=info msg="CreateContainer within sandbox \"cbe25b9549c5709478eaf684fc227955397ccf17994fa8030dcfcacf717a2781\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5\"" Jul 15 05:07:56.059773 containerd[1563]: time="2025-07-15T05:07:56.059627544Z" level=info msg="StartContainer for \"a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5\"" Jul 15 05:07:56.061132 containerd[1563]: time="2025-07-15T05:07:56.061015111Z" level=info msg="connecting to shim a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5" address="unix:///run/containerd/s/dc5647ee51ff0e2089bccd673c2887605d16bfb35a65156b16ea734561ae8481" protocol=ttrpc version=3 Jul 15 05:07:56.063319 containerd[1563]: time="2025-07-15T05:07:56.063280240Z" 
level=info msg="Container 3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:07:56.075563 containerd[1563]: time="2025-07-15T05:07:56.075494673Z" level=info msg="CreateContainer within sandbox \"efe661454022215b6ceb867761290a500de34e0b08f59f85cb73f91f2549ab1e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e\"" Jul 15 05:07:56.076171 containerd[1563]: time="2025-07-15T05:07:56.076126988Z" level=info msg="StartContainer for \"3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e\"" Jul 15 05:07:56.076829 containerd[1563]: time="2025-07-15T05:07:56.076776073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4329b442739d19012cecd665cdd43150,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6030dc2c30bcbdb093384012cef2ecc9b5348ccb0ca07e27344e3f9b46a5b41\"" Jul 15 05:07:56.077632 containerd[1563]: time="2025-07-15T05:07:56.077598168Z" level=info msg="connecting to shim 3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e" address="unix:///run/containerd/s/681bdfd9070c9c10d2e42d72bf2a9a1199a6cca98db503958eed0fa20e7b45df" protocol=ttrpc version=3 Jul 15 05:07:56.078331 kubelet[2346]: E0715 05:07:56.078297 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:56.084680 containerd[1563]: time="2025-07-15T05:07:56.084641012Z" level=info msg="CreateContainer within sandbox \"d6030dc2c30bcbdb093384012cef2ecc9b5348ccb0ca07e27344e3f9b46a5b41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:07:56.088620 systemd[1]: Started cri-containerd-a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5.scope - libcontainer container a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5. Jul 15 05:07:56.096137 containerd[1563]: time="2025-07-15T05:07:56.095458706Z" level=info msg="Container 41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:07:56.108562 containerd[1563]: time="2025-07-15T05:07:56.108523089Z" level=info msg="CreateContainer within sandbox \"d6030dc2c30bcbdb093384012cef2ecc9b5348ccb0ca07e27344e3f9b46a5b41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc\"" Jul 15 05:07:56.109206 containerd[1563]: time="2025-07-15T05:07:56.109186286Z" level=info msg="StartContainer for \"41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc\"" Jul 15 05:07:56.110293 containerd[1563]: time="2025-07-15T05:07:56.110271472Z" level=info msg="connecting to shim 41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc" address="unix:///run/containerd/s/b0952d7d0b4d524036517db85f779d88751e82d4361a2e6fa5f44942c50389c5" protocol=ttrpc version=3 Jul 15 05:07:56.110431 systemd[1]: Started cri-containerd-3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e.scope - libcontainer container 3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e. Jul 15 05:07:56.141436 systemd[1]: Started cri-containerd-41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc.scope - libcontainer container 41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc. 
Jul 15 05:07:56.170206 containerd[1563]: time="2025-07-15T05:07:56.170146936Z" level=info msg="StartContainer for \"a8a786cd3bb1b0113b82d890de49f1a04f9a71a29669028495dcabedd3ceadb5\" returns successfully" Jul 15 05:07:56.223412 containerd[1563]: time="2025-07-15T05:07:56.222669417Z" level=info msg="StartContainer for \"41a2f044fa5f33635b776c4a23de4c24844d69ff343c51b8cb86249e1b1310fc\" returns successfully" Jul 15 05:07:56.223677 containerd[1563]: time="2025-07-15T05:07:56.222802577Z" level=info msg="StartContainer for \"3272d3c4bc641e1c9e5018e35c138707492dba03735b8fc2f477eee6f7ba2c0e\" returns successfully" Jul 15 05:07:56.280983 kubelet[2346]: E0715 05:07:56.280934 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:56.281455 kubelet[2346]: E0715 05:07:56.281204 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:56.283252 kubelet[2346]: E0715 05:07:56.281869 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:56.283252 kubelet[2346]: E0715 05:07:56.281975 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:56.285352 kubelet[2346]: E0715 05:07:56.285321 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:56.285458 kubelet[2346]: E0715 05:07:56.285436 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:56.936688 kubelet[2346]: I0715 05:07:56.936649 2346 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:07:57.288606 kubelet[2346]: E0715 05:07:57.288554 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:57.289379 kubelet[2346]: E0715 05:07:57.289362 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:57.289670 kubelet[2346]: E0715 05:07:57.289573 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:57.289967 kubelet[2346]: E0715 05:07:57.289872 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:57.290656 kubelet[2346]: E0715 05:07:57.290625 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 05:07:57.290910 kubelet[2346]: E0715 05:07:57.290897 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:07:57.666965 kubelet[2346]: I0715 05:07:57.666665 2346 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 05:07:57.666965 kubelet[2346]: E0715 05:07:57.666711 2346 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 05:07:57.737400 kubelet[2346]: I0715 05:07:57.737348 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:07:57.743528 kubelet[2346]: E0715 05:07:57.743492 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 05:07:57.743528 kubelet[2346]: I0715 05:07:57.743519 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:57.744962 kubelet[2346]: E0715 05:07:57.744940 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:07:57.744962 kubelet[2346]: I0715 05:07:57.744960 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:07:57.746107 kubelet[2346]: E0715 05:07:57.746080 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 05:07:58.225055 kubelet[2346]: I0715 05:07:58.224997 2346 apiserver.go:52] "Watching apiserver" Jul 15 05:07:58.237632 kubelet[2346]: I0715 05:07:58.237593 2346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:07:58.289170 kubelet[2346]: I0715 05:07:58.289140 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:07:58.294274 kubelet[2346]: E0715 05:07:58.294217 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 05:07:58.294493 kubelet[2346]: E0715 05:07:58.294418 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:00.513023 systemd[1]: Reload requested from client PID 2636 ('systemctl') (unit session-7.scope)... Jul 15 05:08:00.513038 systemd[1]: Reloading... Jul 15 05:08:00.588338 zram_generator::config[2679]: No configuration found. Jul 15 05:08:00.857730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:08:00.999528 systemd[1]: Reloading finished in 486 ms. Jul 15 05:08:01.030441 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:08:01.049013 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:08:01.049491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:08:01.049565 systemd[1]: kubelet.service: Consumed 1.024s CPU time, 132.1M memory peak. Jul 15 05:08:01.052116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 05:08:01.275409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:08:01.298807 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:08:01.444756 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:08:01.444756 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 05:08:01.444756 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:08:01.445195 kubelet[2724]: I0715 05:08:01.444793 2724 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:08:01.452939 kubelet[2724]: I0715 05:08:01.452886 2724 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 05:08:01.452939 kubelet[2724]: I0715 05:08:01.452921 2724 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:08:01.453211 kubelet[2724]: I0715 05:08:01.453182 2724 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 05:08:01.454725 kubelet[2724]: I0715 05:08:01.454692 2724 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 05:08:01.457056 kubelet[2724]: I0715 05:08:01.457004 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:08:01.486440 kubelet[2724]: I0715 05:08:01.486392 2724 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:08:01.493298 kubelet[2724]: I0715 05:08:01.492473 2724 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:08:01.493298 kubelet[2724]: I0715 05:08:01.492838 2724 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:08:01.493298 kubelet[2724]: I0715 05:08:01.492889 2724 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:08:01.493298 kubelet[2724]: I0715 05:08:01.493210 2724 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:08:01.493610 kubelet[2724]: I0715 05:08:01.493260 2724 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 05:08:01.493610 kubelet[2724]: I0715 05:08:01.493311 2724 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:08:01.494150 kubelet[2724]: I0715 05:08:01.493699 2724 kubelet.go:480] "Attempting to sync node with API server" Jul 15 05:08:01.494150 kubelet[2724]: I0715 05:08:01.493722 2724 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:08:01.494150 kubelet[2724]: I0715 05:08:01.493747 2724 kubelet.go:386] "Adding apiserver pod source" Jul 15 05:08:01.494150 kubelet[2724]: I0715 05:08:01.493787 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:08:01.495451 kubelet[2724]: I0715 05:08:01.495430 2724 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:08:01.495948 kubelet[2724]: I0715 05:08:01.495887 2724 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 05:08:01.499906 kubelet[2724]: I0715 05:08:01.499858 2724 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 05:08:01.500999 kubelet[2724]: I0715 05:08:01.500413 2724 server.go:1289] "Started kubelet" Jul 15 05:08:01.507032 kubelet[2724]: I0715 05:08:01.507001 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:08:01.508662 kubelet[2724]: I0715 05:08:01.508598 
2724 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 05:08:01.509247 kubelet[2724]: E0715 05:08:01.509092 2724 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:08:01.509710 kubelet[2724]: I0715 05:08:01.509543 2724 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 05:08:01.510374 kubelet[2724]: I0715 05:08:01.509123 2724 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:08:01.511709 kubelet[2724]: I0715 05:08:01.510730 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:08:01.513275 kubelet[2724]: I0715 05:08:01.513117 2724 server.go:317] "Adding debug handlers to kubelet server" Jul 15 05:08:01.513513 kubelet[2724]: I0715 05:08:01.513481 2724 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:08:01.513752 kubelet[2724]: I0715 05:08:01.510339 2724 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:08:01.513886 kubelet[2724]: I0715 05:08:01.513859 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:08:01.514620 kubelet[2724]: I0715 05:08:01.514544 2724 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:08:01.516681 kubelet[2724]: I0715 05:08:01.516651 2724 factory.go:223] Registration of the containerd container factory successfully Jul 15 05:08:01.516681 kubelet[2724]: I0715 05:08:01.516674 2724 factory.go:223] Registration of the systemd container factory successfully Jul 15 05:08:01.520529 kubelet[2724]: E0715 05:08:01.520021 2724 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:08:01.536899 kubelet[2724]: I0715 05:08:01.536452 2724 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 05:08:01.540251 kubelet[2724]: I0715 05:08:01.540009 2724 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 05:08:01.540251 kubelet[2724]: I0715 05:08:01.540033 2724 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 05:08:01.540251 kubelet[2724]: I0715 05:08:01.540058 2724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 05:08:01.540251 kubelet[2724]: I0715 05:08:01.540067 2724 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 05:08:01.540251 kubelet[2724]: E0715 05:08:01.540112 2724 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:08:01.571677 kubelet[2724]: I0715 05:08:01.571638 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 05:08:01.571677 kubelet[2724]: I0715 05:08:01.571661 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 05:08:01.571677 kubelet[2724]: I0715 05:08:01.571685 2724 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:08:01.571891 kubelet[2724]: I0715 05:08:01.571839 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:08:01.571891 kubelet[2724]: I0715 05:08:01.571855 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:08:01.571891 kubelet[2724]: I0715 05:08:01.571877 2724 policy_none.go:49] "None policy: Start" Jul 15 05:08:01.571891 kubelet[2724]: I0715 05:08:01.571888 2724 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 05:08:01.571991 kubelet[2724]: I0715 05:08:01.571901 2724 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:08:01.572014 kubelet[2724]: I0715 05:08:01.572006 2724 state_mem.go:75] "Updated machine memory state" Jul 15 05:08:01.578630 kubelet[2724]: E0715 05:08:01.577901 2724 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 05:08:01.578630 kubelet[2724]: I0715 05:08:01.578345 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:08:01.578630 kubelet[2724]: I0715 05:08:01.578361 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:08:01.578630 kubelet[2724]: I0715 05:08:01.578633 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:08:01.580306 kubelet[2724]: E0715 05:08:01.580274 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 05:08:01.641722 kubelet[2724]: I0715 05:08:01.641664 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.641917 kubelet[2724]: I0715 05:08:01.641875 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:01.642041 kubelet[2724]: I0715 05:08:01.641972 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.684141 kubelet[2724]: I0715 05:08:01.684094 2724 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 05:08:01.702954 kubelet[2724]: I0715 05:08:01.702895 2724 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 05:08:01.703130 kubelet[2724]: I0715 05:08:01.703009 2724 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 05:08:01.715457 kubelet[2724]: I0715 05:08:01.715146 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4329b442739d19012cecd665cdd43150-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4329b442739d19012cecd665cdd43150\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.715457 kubelet[2724]: I0715 05:08:01.715290 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.715457 kubelet[2724]: I0715 05:08:01.715327 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.715457 kubelet[2724]: I0715 05:08:01.715345 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.715457 kubelet[2724]: I0715 05:08:01.715363 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.715768 kubelet[2724]: I0715 05:08:01.715382 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:01.715768 kubelet[2724]: I0715 05:08:01.715398 2724 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:01.715768 kubelet[2724]: I0715 05:08:01.715420 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4329b442739d19012cecd665cdd43150-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4329b442739d19012cecd665cdd43150\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.715768 kubelet[2724]: I0715 05:08:01.715453 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4329b442739d19012cecd665cdd43150-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4329b442739d19012cecd665cdd43150\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:01.951883 kubelet[2724]: E0715 05:08:01.951723 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:01.953819 kubelet[2724]: E0715 05:08:01.953730 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:01.953819 kubelet[2724]: E0715 05:08:01.953776 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.494680 kubelet[2724]: I0715 05:08:02.494620 2724 apiserver.go:52] "Watching apiserver" Jul 15 05:08:02.509849 kubelet[2724]: I0715 05:08:02.509776 2724 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 05:08:02.557906 kubelet[2724]: I0715 05:08:02.557862 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:02.558121 kubelet[2724]: I0715 05:08:02.558087 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:02.558279 kubelet[2724]: I0715 05:08:02.558259 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:02.567352 kubelet[2724]: E0715 05:08:02.567286 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 05:08:02.568551 kubelet[2724]: E0715 05:08:02.568460 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.568551 kubelet[2724]: E0715 05:08:02.568530 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:08:02.568852 kubelet[2724]: E0715 05:08:02.568744 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.569025 kubelet[2724]: E0715 05:08:02.568978 2724 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 05:08:02.569331 kubelet[2724]: E0715 05:08:02.569308 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:02.619428 kubelet[2724]: I0715 05:08:02.619352 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6193221850000001 podStartE2EDuration="1.619322185s" podCreationTimestamp="2025-07-15 05:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:02.607767136 +0000 UTC m=+1.303823769" watchObservedRunningTime="2025-07-15 05:08:02.619322185 +0000 UTC m=+1.315378798" Jul 15 05:08:02.629938 kubelet[2724]: I0715 05:08:02.629811 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6297416789999999 podStartE2EDuration="1.629741679s" podCreationTimestamp="2025-07-15 05:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:02.619609515 +0000 UTC m=+1.315666128" watchObservedRunningTime="2025-07-15 05:08:02.629741679 +0000 UTC m=+1.325798292" Jul 15 05:08:02.641656 kubelet[2724]: I0715 05:08:02.641586 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6415662100000001 podStartE2EDuration="1.64156621s" podCreationTimestamp="2025-07-15 05:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:02.630021584 +0000 UTC m=+1.326078197" watchObservedRunningTime="2025-07-15 05:08:02.64156621 +0000 UTC m=+1.337622823" Jul 15 05:08:03.559958 kubelet[2724]: E0715 05:08:03.559900 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:03.560486 kubelet[2724]: E0715 05:08:03.560467 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:03.560709 kubelet[2724]: E0715 05:08:03.560682 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:04.561939 kubelet[2724]: E0715 05:08:04.561892 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:04.561939 kubelet[2724]: E0715 05:08:04.561893 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:04.748879 update_engine[1527]: I20250715 05:08:04.748770 1527 update_attempter.cc:509] Updating boot flags... 
Jul 15 05:08:05.016170 kubelet[2724]: I0715 05:08:05.016118 2724 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:08:05.016658 containerd[1563]: time="2025-07-15T05:08:05.016614075Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:08:05.016988 kubelet[2724]: I0715 05:08:05.016869 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:08:05.080722 systemd[1]: Created slice kubepods-besteffort-pod638af3d9_d0be_4810_a66c_f7f0d6f6cf1a.slice - libcontainer container kubepods-besteffort-pod638af3d9_d0be_4810_a66c_f7f0d6f6cf1a.slice. Jul 15 05:08:05.140516 kubelet[2724]: I0715 05:08:05.140454 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-xtables-lock\") pod \"kube-proxy-llk84\" (UID: \"638af3d9-d0be-4810-a66c-f7f0d6f6cf1a\") " pod="kube-system/kube-proxy-llk84" Jul 15 05:08:05.140673 kubelet[2724]: I0715 05:08:05.140514 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqgsm\" (UniqueName: \"kubernetes.io/projected/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-kube-api-access-jqgsm\") pod \"kube-proxy-llk84\" (UID: \"638af3d9-d0be-4810-a66c-f7f0d6f6cf1a\") " pod="kube-system/kube-proxy-llk84" Jul 15 05:08:05.140673 kubelet[2724]: I0715 05:08:05.140571 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-kube-proxy\") pod \"kube-proxy-llk84\" (UID: \"638af3d9-d0be-4810-a66c-f7f0d6f6cf1a\") " pod="kube-system/kube-proxy-llk84" Jul 15 05:08:05.140673 kubelet[2724]: I0715 05:08:05.140594 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-lib-modules\") pod \"kube-proxy-llk84\" (UID: \"638af3d9-d0be-4810-a66c-f7f0d6f6cf1a\") " pod="kube-system/kube-proxy-llk84" Jul 15 05:08:05.383391 kubelet[2724]: E0715 05:08:05.383143 2724 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 05:08:05.383391 kubelet[2724]: E0715 05:08:05.383270 2724 projected.go:194] Error preparing data for projected volume kube-api-access-jqgsm for pod kube-system/kube-proxy-llk84: configmap "kube-root-ca.crt" not found Jul 15 05:08:05.383391 kubelet[2724]: E0715 05:08:05.383459 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-kube-api-access-jqgsm podName:638af3d9-d0be-4810-a66c-f7f0d6f6cf1a nodeName:}" failed. No retries permitted until 2025-07-15 05:08:05.883433081 +0000 UTC m=+4.579489694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jqgsm" (UniqueName: "kubernetes.io/projected/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-kube-api-access-jqgsm") pod "kube-proxy-llk84" (UID: "638af3d9-d0be-4810-a66c-f7f0d6f6cf1a") : configmap "kube-root-ca.crt" not found Jul 15 05:08:05.563764 kubelet[2724]: E0715 05:08:05.563731 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:05.721480 kubelet[2724]: E0715 05:08:05.721353 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:05.945986 kubelet[2724]: E0715 05:08:05.945941 2724 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 05:08:05.945986 kubelet[2724]: E0715 05:08:05.945969 2724 projected.go:194] Error preparing data for projected volume kube-api-access-jqgsm for pod kube-system/kube-proxy-llk84: configmap "kube-root-ca.crt" not found Jul 15 05:08:05.946300 kubelet[2724]: E0715 05:08:05.946024 2724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-kube-api-access-jqgsm podName:638af3d9-d0be-4810-a66c-f7f0d6f6cf1a nodeName:}" failed. No retries permitted until 2025-07-15 05:08:06.946006912 +0000 UTC m=+5.642063525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jqgsm" (UniqueName: "kubernetes.io/projected/638af3d9-d0be-4810-a66c-f7f0d6f6cf1a-kube-api-access-jqgsm") pod "kube-proxy-llk84" (UID: "638af3d9-d0be-4810-a66c-f7f0d6f6cf1a") : configmap "kube-root-ca.crt" not found Jul 15 05:08:06.300677 systemd[1]: Created slice kubepods-besteffort-podbed0d4ff_88ee_4900_8238_9ed42e6772e2.slice - libcontainer container kubepods-besteffort-podbed0d4ff_88ee_4900_8238_9ed42e6772e2.slice. 
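Aside: the MountVolume.SetUp failures above are expected ordering noise on a fresh cluster; the kube-api-access projected volume needs the kube-root-ca.crt ConfigMap, which the controller-manager has not yet published into the namespace at this point, so the kubelet retries with an increasing delay (durationBeforeRetry 500ms in the first entry, 1s in the second). A rough Go sketch of that doubling backoff, assuming nothing about the kubelet's real nestedpendingoperations code (the cap below is an assumption, not taken from the log):

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the wait between mount attempts, matching the
// 500ms -> 1s progression visible in the log.
func nextRetryDelay(prev time.Duration) time.Duration {
	const maxDelay = 2 * time.Minute // assumed upper bound
	if prev == 0 {
		return 500 * time.Millisecond
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 4; i++ {
		d = nextRetryDelay(d)
		fmt.Println("retry in", d) // 500ms, 1s, 2s, 4s
	}
}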
Jul 15 05:08:06.347662 kubelet[2724]: I0715 05:08:06.347579 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bed0d4ff-88ee-4900-8238-9ed42e6772e2-var-lib-calico\") pod \"tigera-operator-747864d56d-n2r2q\" (UID: \"bed0d4ff-88ee-4900-8238-9ed42e6772e2\") " pod="tigera-operator/tigera-operator-747864d56d-n2r2q" Jul 15 05:08:06.347662 kubelet[2724]: I0715 05:08:06.347671 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp875\" (UniqueName: \"kubernetes.io/projected/bed0d4ff-88ee-4900-8238-9ed42e6772e2-kube-api-access-tp875\") pod \"tigera-operator-747864d56d-n2r2q\" (UID: \"bed0d4ff-88ee-4900-8238-9ed42e6772e2\") " pod="tigera-operator/tigera-operator-747864d56d-n2r2q" Jul 15 05:08:06.607103 containerd[1563]: time="2025-07-15T05:08:06.606968270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-n2r2q,Uid:bed0d4ff-88ee-4900-8238-9ed42e6772e2,Namespace:tigera-operator,Attempt:0,}" Jul 15 05:08:07.192087 kubelet[2724]: E0715 05:08:07.192016 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:07.192787 containerd[1563]: time="2025-07-15T05:08:07.192733422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-llk84,Uid:638af3d9-d0be-4810-a66c-f7f0d6f6cf1a,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:07.575387 containerd[1563]: time="2025-07-15T05:08:07.575313126Z" level=info msg="connecting to shim 14670ca28d3ed6550838be823932b48b562a6e3cc95111873b5a8886784ba788" address="unix:///run/containerd/s/83f996de8802e77122692a95ec350a301587f99c6a73bf9b10d06e7904318519" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:07.611403 systemd[1]: Started cri-containerd-14670ca28d3ed6550838be823932b48b562a6e3cc95111873b5a8886784ba788.scope - libcontainer container 14670ca28d3ed6550838be823932b48b562a6e3cc95111873b5a8886784ba788. Jul 15 05:08:07.774519 containerd[1563]: time="2025-07-15T05:08:07.774459996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-n2r2q,Uid:bed0d4ff-88ee-4900-8238-9ed42e6772e2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"14670ca28d3ed6550838be823932b48b562a6e3cc95111873b5a8886784ba788\"" Jul 15 05:08:07.776162 containerd[1563]: time="2025-07-15T05:08:07.776121415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 15 05:08:07.986250 containerd[1563]: time="2025-07-15T05:08:07.986093268Z" level=info msg="connecting to shim d98abca0774c9670df9addfbfc6d4f55807d16b87ee131a4bb1347163668866a" address="unix:///run/containerd/s/97ad1cfaf890b60d0601a871ed9a2665299a7335954350f525f56bfe88bef039" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:08.016433 systemd[1]: Started cri-containerd-d98abca0774c9670df9addfbfc6d4f55807d16b87ee131a4bb1347163668866a.scope - libcontainer container d98abca0774c9670df9addfbfc6d4f55807d16b87ee131a4bb1347163668866a. 
Jul 15 05:08:08.103302 containerd[1563]: time="2025-07-15T05:08:08.103259533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-llk84,Uid:638af3d9-d0be-4810-a66c-f7f0d6f6cf1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d98abca0774c9670df9addfbfc6d4f55807d16b87ee131a4bb1347163668866a\"" Jul 15 05:08:08.103933 kubelet[2724]: E0715 05:08:08.103886 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:08.205979 containerd[1563]: time="2025-07-15T05:08:08.205914721Z" level=info msg="CreateContainer within sandbox \"d98abca0774c9670df9addfbfc6d4f55807d16b87ee131a4bb1347163668866a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:08:08.312969 containerd[1563]: time="2025-07-15T05:08:08.312906398Z" level=info msg="Container d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:08.322711 containerd[1563]: time="2025-07-15T05:08:08.322638995Z" level=info msg="CreateContainer within sandbox \"d98abca0774c9670df9addfbfc6d4f55807d16b87ee131a4bb1347163668866a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba\"" Jul 15 05:08:08.323372 containerd[1563]: time="2025-07-15T05:08:08.323309492Z" level=info msg="StartContainer for \"d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba\"" Jul 15 05:08:08.325448 containerd[1563]: time="2025-07-15T05:08:08.325415069Z" level=info msg="connecting to shim d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba" address="unix:///run/containerd/s/97ad1cfaf890b60d0601a871ed9a2665299a7335954350f525f56bfe88bef039" protocol=ttrpc version=3 Jul 15 05:08:08.359722 systemd[1]: Started cri-containerd-d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba.scope - libcontainer container d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba. Jul 15 05:08:08.416324 containerd[1563]: time="2025-07-15T05:08:08.416264049Z" level=info msg="StartContainer for \"d18d50c03fbacf51e3ee994203386adddbf3c1a5e2fafd91047106facc48fcba\" returns successfully" Jul 15 05:08:08.571146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169394798.mount: Deactivated successfully. Jul 15 05:08:08.572169 kubelet[2724]: E0715 05:08:08.572139 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:08.581028 kubelet[2724]: I0715 05:08:08.580977 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-llk84" podStartSLOduration=4.580794294 podStartE2EDuration="4.580794294s" podCreationTimestamp="2025-07-15 05:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:08:08.580354144 +0000 UTC m=+7.276410757" watchObservedRunningTime="2025-07-15 05:08:08.580794294 +0000 UTC m=+7.276851007" Jul 15 05:08:09.549667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942650154.mount: Deactivated successfully. 
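Aside: the kube-proxy entries above show the usual CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer then runs it over the same shim socket. A compressed Go sketch of that ordering against a deliberately simplified, hypothetical client interface (the real Go bindings live in k8s.io/cri-api and have different signatures):

package main

import "fmt"

// runtimeClient is a stand-in whose method names mirror the log messages;
// the signatures do not claim to match the real CRI API.
type runtimeClient interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod reproduces the ordering visible in the log: sandbox first,
// then container creation inside it, then start.
func startPod(rc runtimeClient, pod, name string) (string, error) {
	sb, err := rc.RunPodSandbox(pod)
	if err != nil {
		return "", fmt.Errorf("run sandbox: %w", err)
	}
	ctr, err := rc.CreateContainer(sb, name)
	if err != nil {
		return "", fmt.Errorf("create container: %w", err)
	}
	return ctr, rc.StartContainer(ctr)
}

// fakeRuntime is a stub so the sketch runs on its own.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "sandbox-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return sb + "/" + name, nil }
func (fakeRuntime) StartContainer(id string) error                  { fmt.Println("started", id); return nil }

func main() {
	if _, err := startPod(fakeRuntime{}, "kube-proxy-llk84", "kube-proxy"); err != nil {
		fmt.Println("error:", err)
	}
}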
Jul 15 05:08:10.301784 containerd[1563]: time="2025-07-15T05:08:10.301697693Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:10.305390 containerd[1563]: time="2025-07-15T05:08:10.305346974Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 15 05:08:10.305636 containerd[1563]: time="2025-07-15T05:08:10.305486916Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:10.308107 containerd[1563]: time="2025-07-15T05:08:10.308074577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:10.308857 containerd[1563]: time="2025-07-15T05:08:10.308828484Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.532675342s" Jul 15 05:08:10.308899 containerd[1563]: time="2025-07-15T05:08:10.308861653Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 15 05:08:10.314804 containerd[1563]: time="2025-07-15T05:08:10.314746668Z" level=info msg="CreateContainer within sandbox \"14670ca28d3ed6550838be823932b48b562a6e3cc95111873b5a8886784ba788\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 15 05:08:10.325464 containerd[1563]: time="2025-07-15T05:08:10.325401017Z" level=info msg="Container e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:10.335138 containerd[1563]: time="2025-07-15T05:08:10.334971518Z" level=info msg="CreateContainer within sandbox \"14670ca28d3ed6550838be823932b48b562a6e3cc95111873b5a8886784ba788\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da\"" Jul 15 05:08:10.335876 containerd[1563]: time="2025-07-15T05:08:10.335796814Z" level=info msg="StartContainer for \"e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da\"" Jul 15 05:08:10.337153 containerd[1563]: time="2025-07-15T05:08:10.337114680Z" level=info msg="connecting to shim e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da" address="unix:///run/containerd/s/83f996de8802e77122692a95ec350a301587f99c6a73bf9b10d06e7904318519" protocol=ttrpc version=3 Jul 15 05:08:10.400849 systemd[1]: Started cri-containerd-e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da.scope - libcontainer container e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da. 
Jul 15 05:08:10.441387 containerd[1563]: time="2025-07-15T05:08:10.441336176Z" level=info msg="StartContainer for \"e06803670886f9534aeeed49ff88bb27495678ce65f1b968ce07c13eeda470da\" returns successfully" Jul 15 05:08:12.825268 kubelet[2724]: E0715 05:08:12.825166 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:12.840936 kubelet[2724]: I0715 05:08:12.840844 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-n2r2q" podStartSLOduration=4.307091113 podStartE2EDuration="6.840823365s" podCreationTimestamp="2025-07-15 05:08:06 +0000 UTC" firstStartedPulling="2025-07-15 05:08:07.775830116 +0000 UTC m=+6.471886729" lastFinishedPulling="2025-07-15 05:08:10.309562368 +0000 UTC m=+9.005618981" observedRunningTime="2025-07-15 05:08:10.589515878 +0000 UTC m=+9.285572501" watchObservedRunningTime="2025-07-15 05:08:12.840823365 +0000 UTC m=+11.536879978" Jul 15 05:08:13.776673 kubelet[2724]: E0715 05:08:13.776578 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:14.587031 kubelet[2724]: E0715 05:08:14.586980 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:15.893086 kubelet[2724]: E0715 05:08:15.893041 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:18.209248 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 15 05:08:18.211211 sshd[1768]: Connection closed by 10.0.0.1 port 55676 Jul 15 05:08:18.264460 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jul 15 05:08:18.272119 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:55676.service: Deactivated successfully. Jul 15 05:08:18.275199 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:08:18.275525 systemd[1]: session-7.scope: Consumed 6.777s CPU time, 230.3M memory peak. Jul 15 05:08:18.277325 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:08:18.279432 systemd-logind[1524]: Removed session 7. Jul 15 05:08:29.167461 systemd[1]: Created slice kubepods-besteffort-pod39060b32_02e4_4f5f_bbb6_8e3fbaba4896.slice - libcontainer container kubepods-besteffort-pod39060b32_02e4_4f5f_bbb6_8e3fbaba4896.slice. 
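Aside: the tigera-operator startup figures above reconcile exactly. podStartE2EDuration is observedRunningTime minus podCreationTimestamp (05:08:12.840823365 - 05:08:06 = 6.840823365s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 10.309562368 - 07.775830116 = 2.533732252s), giving 6.840823365 - 2.533732252 = 4.307091113s, which matches the logged value; for the static and kube-proxy pods, pull times are zero, so the two durations coincide.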
Jul 15 05:08:29.195753 kubelet[2724]: I0715 05:08:29.195649 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39060b32-02e4-4f5f-bbb6-8e3fbaba4896-tigera-ca-bundle\") pod \"calico-typha-785b946965-fhn9z\" (UID: \"39060b32-02e4-4f5f-bbb6-8e3fbaba4896\") " pod="calico-system/calico-typha-785b946965-fhn9z" Jul 15 05:08:29.195753 kubelet[2724]: I0715 05:08:29.195717 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/39060b32-02e4-4f5f-bbb6-8e3fbaba4896-typha-certs\") pod \"calico-typha-785b946965-fhn9z\" (UID: \"39060b32-02e4-4f5f-bbb6-8e3fbaba4896\") " pod="calico-system/calico-typha-785b946965-fhn9z" Jul 15 05:08:29.196540 kubelet[2724]: I0715 05:08:29.195826 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrbc\" (UniqueName: \"kubernetes.io/projected/39060b32-02e4-4f5f-bbb6-8e3fbaba4896-kube-api-access-9nrbc\") pod \"calico-typha-785b946965-fhn9z\" (UID: \"39060b32-02e4-4f5f-bbb6-8e3fbaba4896\") " pod="calico-system/calico-typha-785b946965-fhn9z" Jul 15 05:08:29.475721 kubelet[2724]: E0715 05:08:29.474590 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:29.475894 containerd[1563]: time="2025-07-15T05:08:29.475778345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785b946965-fhn9z,Uid:39060b32-02e4-4f5f-bbb6-8e3fbaba4896,Namespace:calico-system,Attempt:0,}" Jul 15 05:08:29.923986 systemd[1]: Created slice kubepods-besteffort-pod99e20c6e_4b58_4ce0_a36b_aba4a3706570.slice - libcontainer container kubepods-besteffort-pod99e20c6e_4b58_4ce0_a36b_aba4a3706570.slice. 
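Aside: the wall of driver-call.go / plugins.go errors that follows comes from the kubelet's FlexVolume prober repeatedly invoking the nodeagent~uds driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds; the binary is not installed, so the call produces no output, and decoding an empty string as JSON fails with exactly the logged message. A minimal Go reproduction of that decode error (standard library only):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v map[string]interface{}
	// Empty driver output, as in the log (output: "").
	err := json.Unmarshal([]byte(""), &v)
	fmt.Println(err) // unexpected end of JSON input
}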
Jul 15 05:08:29.991081 containerd[1563]: time="2025-07-15T05:08:29.990977048Z" level=info msg="connecting to shim e50333d8c8a429b096983ac8fc069763dff32ea608884a13d8b268f6ae41730a" address="unix:///run/containerd/s/06bf5581191df6bac753ac4b02e88f23033fea8ce1e9957e9e41c14e02475fb4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:30.000763 kubelet[2724]: I0715 05:08:30.000673 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-policysync\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.000763 kubelet[2724]: I0715 05:08:30.000732 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99e20c6e-4b58-4ce0-a36b-aba4a3706570-tigera-ca-bundle\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.000997 kubelet[2724]: I0715 05:08:30.000781 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vmd\" (UniqueName: \"kubernetes.io/projected/99e20c6e-4b58-4ce0-a36b-aba4a3706570-kube-api-access-84vmd\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.000997 kubelet[2724]: I0715 05:08:30.000805 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-xtables-lock\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.000997 kubelet[2724]: I0715 05:08:30.000887 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-cni-net-dir\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.000997 kubelet[2724]: I0715 05:08:30.000944 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/99e20c6e-4b58-4ce0-a36b-aba4a3706570-node-certs\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.000997 kubelet[2724]: I0715 05:08:30.000968 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-var-run-calico\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.001331 kubelet[2724]: I0715 05:08:30.001006 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-cni-log-dir\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.001331 kubelet[2724]: I0715 05:08:30.001042 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-cni-bin-dir\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.001331 kubelet[2724]: I0715 05:08:30.001079 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-lib-modules\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.001331 kubelet[2724]: I0715 05:08:30.001113 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-flexvol-driver-host\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.001331 kubelet[2724]: I0715 05:08:30.001138 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99e20c6e-4b58-4ce0-a36b-aba4a3706570-var-lib-calico\") pod \"calico-node-vlrz2\" (UID: \"99e20c6e-4b58-4ce0-a36b-aba4a3706570\") " pod="calico-system/calico-node-vlrz2" Jul 15 05:08:30.024589 systemd[1]: Started cri-containerd-e50333d8c8a429b096983ac8fc069763dff32ea608884a13d8b268f6ae41730a.scope - libcontainer container e50333d8c8a429b096983ac8fc069763dff32ea608884a13d8b268f6ae41730a. Jul 15 05:08:30.108969 kubelet[2724]: E0715 05:08:30.108895 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.108969 kubelet[2724]: W0715 05:08:30.108918 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.108969 kubelet[2724]: E0715 05:08:30.108946 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.110488 containerd[1563]: time="2025-07-15T05:08:30.110435046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785b946965-fhn9z,Uid:39060b32-02e4-4f5f-bbb6-8e3fbaba4896,Namespace:calico-system,Attempt:0,} returns sandbox id \"e50333d8c8a429b096983ac8fc069763dff32ea608884a13d8b268f6ae41730a\"" Jul 15 05:08:30.111578 kubelet[2724]: E0715 05:08:30.111204 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:30.112182 containerd[1563]: time="2025-07-15T05:08:30.112143676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 05:08:30.491896 kubelet[2724]: E0715 05:08:30.491820 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:30.571282 kubelet[2724]: E0715 05:08:30.571214 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.571282 kubelet[2724]: W0715 05:08:30.571273 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.571443 kubelet[2724]: E0715 05:08:30.571297 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.580509 kubelet[2724]: E0715 05:08:30.580474 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.580509 kubelet[2724]: W0715 05:08:30.580495 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.580630 kubelet[2724]: E0715 05:08:30.580515 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.580789 kubelet[2724]: E0715 05:08:30.580768 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.580789 kubelet[2724]: W0715 05:08:30.580781 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.580853 kubelet[2724]: E0715 05:08:30.580793 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.581010 kubelet[2724]: E0715 05:08:30.580992 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.581010 kubelet[2724]: W0715 05:08:30.581004 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.581076 kubelet[2724]: E0715 05:08:30.581013 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.581296 kubelet[2724]: E0715 05:08:30.581282 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.581296 kubelet[2724]: W0715 05:08:30.581294 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.581366 kubelet[2724]: E0715 05:08:30.581305 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.581523 kubelet[2724]: E0715 05:08:30.581503 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.581523 kubelet[2724]: W0715 05:08:30.581514 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.581582 kubelet[2724]: E0715 05:08:30.581524 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.581697 kubelet[2724]: E0715 05:08:30.581684 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.581697 kubelet[2724]: W0715 05:08:30.581696 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.581753 kubelet[2724]: E0715 05:08:30.581705 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.581872 kubelet[2724]: E0715 05:08:30.581860 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.581872 kubelet[2724]: W0715 05:08:30.581870 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.581921 kubelet[2724]: E0715 05:08:30.581880 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.582059 kubelet[2724]: E0715 05:08:30.582045 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.582100 kubelet[2724]: W0715 05:08:30.582081 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.582100 kubelet[2724]: E0715 05:08:30.582095 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.582302 kubelet[2724]: E0715 05:08:30.582288 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.582302 kubelet[2724]: W0715 05:08:30.582299 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.582368 kubelet[2724]: E0715 05:08:30.582308 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.582498 kubelet[2724]: E0715 05:08:30.582486 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.582528 kubelet[2724]: W0715 05:08:30.582497 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.582528 kubelet[2724]: E0715 05:08:30.582505 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.582673 kubelet[2724]: E0715 05:08:30.582660 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.582673 kubelet[2724]: W0715 05:08:30.582671 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.582716 kubelet[2724]: E0715 05:08:30.582680 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.582857 kubelet[2724]: E0715 05:08:30.582843 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.582857 kubelet[2724]: W0715 05:08:30.582854 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.582928 kubelet[2724]: E0715 05:08:30.582863 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.583057 kubelet[2724]: E0715 05:08:30.583035 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.583057 kubelet[2724]: W0715 05:08:30.583046 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.583057 kubelet[2724]: E0715 05:08:30.583055 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.583260 kubelet[2724]: E0715 05:08:30.583244 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.583260 kubelet[2724]: W0715 05:08:30.583256 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.583334 kubelet[2724]: E0715 05:08:30.583266 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.583470 kubelet[2724]: E0715 05:08:30.583455 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.583508 kubelet[2724]: W0715 05:08:30.583469 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.583508 kubelet[2724]: E0715 05:08:30.583484 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.583684 kubelet[2724]: E0715 05:08:30.583669 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.583684 kubelet[2724]: W0715 05:08:30.583680 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.583755 kubelet[2724]: E0715 05:08:30.583693 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.583907 kubelet[2724]: E0715 05:08:30.583892 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.583907 kubelet[2724]: W0715 05:08:30.583905 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.583972 kubelet[2724]: E0715 05:08:30.583917 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.584139 kubelet[2724]: E0715 05:08:30.584106 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.584139 kubelet[2724]: W0715 05:08:30.584134 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.584213 kubelet[2724]: E0715 05:08:30.584146 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.584387 kubelet[2724]: E0715 05:08:30.584367 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.584387 kubelet[2724]: W0715 05:08:30.584382 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.584572 kubelet[2724]: E0715 05:08:30.584396 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.584728 kubelet[2724]: E0715 05:08:30.584709 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.584728 kubelet[2724]: W0715 05:08:30.584727 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.584823 kubelet[2724]: E0715 05:08:30.584740 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.606249 kubelet[2724]: E0715 05:08:30.606166 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.606249 kubelet[2724]: W0715 05:08:30.606195 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.606431 kubelet[2724]: E0715 05:08:30.606218 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.606431 kubelet[2724]: I0715 05:08:30.606303 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9ac2d87-9174-4659-a668-21a00a83e356-kubelet-dir\") pod \"csi-node-driver-ccf82\" (UID: \"b9ac2d87-9174-4659-a668-21a00a83e356\") " pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:30.606515 kubelet[2724]: E0715 05:08:30.606498 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.606515 kubelet[2724]: W0715 05:08:30.606509 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.606580 kubelet[2724]: E0715 05:08:30.606522 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.606580 kubelet[2724]: I0715 05:08:30.606545 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tvnk\" (UniqueName: \"kubernetes.io/projected/b9ac2d87-9174-4659-a668-21a00a83e356-kube-api-access-5tvnk\") pod \"csi-node-driver-ccf82\" (UID: \"b9ac2d87-9174-4659-a668-21a00a83e356\") " pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:30.606787 kubelet[2724]: E0715 05:08:30.606763 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.606787 kubelet[2724]: W0715 05:08:30.606776 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.606787 kubelet[2724]: E0715 05:08:30.606786 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.606996 kubelet[2724]: E0715 05:08:30.606974 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.606996 kubelet[2724]: W0715 05:08:30.606987 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.607071 kubelet[2724]: E0715 05:08:30.606997 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.607223 kubelet[2724]: E0715 05:08:30.607200 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.607223 kubelet[2724]: W0715 05:08:30.607212 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.607223 kubelet[2724]: E0715 05:08:30.607221 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.607350 kubelet[2724]: I0715 05:08:30.607262 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9ac2d87-9174-4659-a668-21a00a83e356-socket-dir\") pod \"csi-node-driver-ccf82\" (UID: \"b9ac2d87-9174-4659-a668-21a00a83e356\") " pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:30.607443 kubelet[2724]: E0715 05:08:30.607427 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.607443 kubelet[2724]: W0715 05:08:30.607440 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.607555 kubelet[2724]: E0715 05:08:30.607449 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.607555 kubelet[2724]: I0715 05:08:30.607469 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b9ac2d87-9174-4659-a668-21a00a83e356-varrun\") pod \"csi-node-driver-ccf82\" (UID: \"b9ac2d87-9174-4659-a668-21a00a83e356\") " pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:30.607670 kubelet[2724]: E0715 05:08:30.607652 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.607670 kubelet[2724]: W0715 05:08:30.607665 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.607789 kubelet[2724]: E0715 05:08:30.607689 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.607887 kubelet[2724]: E0715 05:08:30.607873 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.607887 kubelet[2724]: W0715 05:08:30.607884 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.607961 kubelet[2724]: E0715 05:08:30.607894 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.608074 kubelet[2724]: E0715 05:08:30.608060 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.608074 kubelet[2724]: W0715 05:08:30.608069 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.608160 kubelet[2724]: E0715 05:08:30.608077 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.608282 kubelet[2724]: E0715 05:08:30.608267 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.608282 kubelet[2724]: W0715 05:08:30.608278 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.608355 kubelet[2724]: E0715 05:08:30.608288 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.608488 kubelet[2724]: E0715 05:08:30.608473 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.608520 kubelet[2724]: W0715 05:08:30.608489 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.608520 kubelet[2724]: E0715 05:08:30.608498 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.608681 kubelet[2724]: E0715 05:08:30.608658 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.608681 kubelet[2724]: W0715 05:08:30.608670 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.608681 kubelet[2724]: E0715 05:08:30.608680 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.608859 kubelet[2724]: E0715 05:08:30.608845 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.608859 kubelet[2724]: W0715 05:08:30.608856 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.608923 kubelet[2724]: E0715 05:08:30.608865 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.608923 kubelet[2724]: I0715 05:08:30.608885 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9ac2d87-9174-4659-a668-21a00a83e356-registration-dir\") pod \"csi-node-driver-ccf82\" (UID: \"b9ac2d87-9174-4659-a668-21a00a83e356\") " pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:30.609104 kubelet[2724]: E0715 05:08:30.609086 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.609104 kubelet[2724]: W0715 05:08:30.609100 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.609169 kubelet[2724]: E0715 05:08:30.609110 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.609345 kubelet[2724]: E0715 05:08:30.609330 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.609345 kubelet[2724]: W0715 05:08:30.609340 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.609424 kubelet[2724]: E0715 05:08:30.609349 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.709517 kubelet[2724]: E0715 05:08:30.709469 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.709517 kubelet[2724]: W0715 05:08:30.709498 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.709517 kubelet[2724]: E0715 05:08:30.709526 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.709779 kubelet[2724]: E0715 05:08:30.709756 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.709779 kubelet[2724]: W0715 05:08:30.709770 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.709862 kubelet[2724]: E0715 05:08:30.709786 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.710069 kubelet[2724]: E0715 05:08:30.710042 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.710069 kubelet[2724]: W0715 05:08:30.710056 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.710069 kubelet[2724]: E0715 05:08:30.710066 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.710340 kubelet[2724]: E0715 05:08:30.710314 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.710340 kubelet[2724]: W0715 05:08:30.710327 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.710340 kubelet[2724]: E0715 05:08:30.710339 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.710578 kubelet[2724]: E0715 05:08:30.710550 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.710578 kubelet[2724]: W0715 05:08:30.710562 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.710578 kubelet[2724]: E0715 05:08:30.710574 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.775613 kubelet[2724]: E0715 05:08:30.775502 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.775613 kubelet[2724]: W0715 05:08:30.775527 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.775613 kubelet[2724]: E0715 05:08:30.775550 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.775811 kubelet[2724]: E0715 05:08:30.775793 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.775811 kubelet[2724]: W0715 05:08:30.775808 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.775872 kubelet[2724]: E0715 05:08:30.775824 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.776062 kubelet[2724]: E0715 05:08:30.776049 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.776062 kubelet[2724]: W0715 05:08:30.776059 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.776118 kubelet[2724]: E0715 05:08:30.776069 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.776301 kubelet[2724]: E0715 05:08:30.776282 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.776301 kubelet[2724]: W0715 05:08:30.776294 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.776421 kubelet[2724]: E0715 05:08:30.776304 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.776531 kubelet[2724]: E0715 05:08:30.776499 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.776531 kubelet[2724]: W0715 05:08:30.776516 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.776531 kubelet[2724]: E0715 05:08:30.776526 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.776831 kubelet[2724]: E0715 05:08:30.776720 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.776831 kubelet[2724]: W0715 05:08:30.776735 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.776831 kubelet[2724]: E0715 05:08:30.776747 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.777060 kubelet[2724]: E0715 05:08:30.777041 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.777060 kubelet[2724]: W0715 05:08:30.777055 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.777141 kubelet[2724]: E0715 05:08:30.777067 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.777320 kubelet[2724]: E0715 05:08:30.777306 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.777320 kubelet[2724]: W0715 05:08:30.777318 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.777537 kubelet[2724]: E0715 05:08:30.777329 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.777537 kubelet[2724]: E0715 05:08:30.777514 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.777537 kubelet[2724]: W0715 05:08:30.777523 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.777537 kubelet[2724]: E0715 05:08:30.777534 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.777744 kubelet[2724]: E0715 05:08:30.777729 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.777846 kubelet[2724]: W0715 05:08:30.777742 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.777846 kubelet[2724]: E0715 05:08:30.777770 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.778040 kubelet[2724]: E0715 05:08:30.777964 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.778040 kubelet[2724]: W0715 05:08:30.777976 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.778040 kubelet[2724]: E0715 05:08:30.777987 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778246 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.778898 kubelet[2724]: W0715 05:08:30.778259 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778270 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778498 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.778898 kubelet[2724]: W0715 05:08:30.778510 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778522 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778685 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.778898 kubelet[2724]: W0715 05:08:30.778694 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778703 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.778898 kubelet[2724]: E0715 05:08:30.778897 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.779160 kubelet[2724]: W0715 05:08:30.778907 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.779160 kubelet[2724]: E0715 05:08:30.778917 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.779160 kubelet[2724]: E0715 05:08:30.779115 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.779251 kubelet[2724]: W0715 05:08:30.779216 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.779285 kubelet[2724]: E0715 05:08:30.779251 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.781675 kubelet[2724]: E0715 05:08:30.779503 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.781675 kubelet[2724]: W0715 05:08:30.779517 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.781675 kubelet[2724]: E0715 05:08:30.779527 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:30.782560 kubelet[2724]: E0715 05:08:30.782535 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.782646 kubelet[2724]: W0715 05:08:30.782632 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.782714 kubelet[2724]: E0715 05:08:30.782698 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.782985 kubelet[2724]: E0715 05:08:30.782973 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.783043 kubelet[2724]: W0715 05:08:30.783032 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.783134 kubelet[2724]: E0715 05:08:30.783115 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.802113 kubelet[2724]: E0715 05:08:30.802074 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:30.802113 kubelet[2724]: W0715 05:08:30.802099 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:30.802311 kubelet[2724]: E0715 05:08:30.802121 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:30.827476 containerd[1563]: time="2025-07-15T05:08:30.827418544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vlrz2,Uid:99e20c6e-4b58-4ce0-a36b-aba4a3706570,Namespace:calico-system,Attempt:0,}" Jul 15 05:08:31.497038 kubelet[2724]: E0715 05:08:31.496993 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:31.497038 kubelet[2724]: W0715 05:08:31.497019 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:31.497038 kubelet[2724]: E0715 05:08:31.497046 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:32.540498 kubelet[2724]: E0715 05:08:32.540433 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:32.567163 containerd[1563]: time="2025-07-15T05:08:32.567089255Z" level=info msg="connecting to shim d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0" address="unix:///run/containerd/s/ae5a3433fc6dd9d1bd53158a7c7495feb2cf7607b8f0a40e831cccea07c36a3a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:08:32.605615 systemd[1]: Started cri-containerd-d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0.scope - libcontainer container d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0. Jul 15 05:08:32.648586 containerd[1563]: time="2025-07-15T05:08:32.648519711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vlrz2,Uid:99e20c6e-4b58-4ce0-a36b-aba4a3706570,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\"" Jul 15 05:08:34.540751 kubelet[2724]: E0715 05:08:34.540667 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:35.712385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2409250242.mount: Deactivated successfully. 
Jul 15 05:08:36.534767 containerd[1563]: time="2025-07-15T05:08:36.534673228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:36.536908 containerd[1563]: time="2025-07-15T05:08:36.536796070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 05:08:36.538506 containerd[1563]: time="2025-07-15T05:08:36.538421215Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:36.540610 kubelet[2724]: E0715 05:08:36.540545 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:36.541976 containerd[1563]: time="2025-07-15T05:08:36.541837388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:36.542500 containerd[1563]: time="2025-07-15T05:08:36.542446515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 6.430244483s" Jul 15 05:08:36.542500 containerd[1563]: time="2025-07-15T05:08:36.542486624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 05:08:36.543907 containerd[1563]: time="2025-07-15T05:08:36.543863751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 05:08:36.561159 containerd[1563]: time="2025-07-15T05:08:36.561109573Z" level=info msg="CreateContainer within sandbox \"e50333d8c8a429b096983ac8fc069763dff32ea608884a13d8b268f6ae41730a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 05:08:36.574513 containerd[1563]: time="2025-07-15T05:08:36.574439660Z" level=info msg="Container 1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:36.589459 containerd[1563]: time="2025-07-15T05:08:36.589386595Z" level=info msg="CreateContainer within sandbox \"e50333d8c8a429b096983ac8fc069763dff32ea608884a13d8b268f6ae41730a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e\"" Jul 15 05:08:36.590146 containerd[1563]: time="2025-07-15T05:08:36.590089146Z" level=info msg="StartContainer for \"1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e\"" Jul 15 05:08:36.591615 containerd[1563]: time="2025-07-15T05:08:36.591577180Z" level=info msg="connecting to shim 1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e" address="unix:///run/containerd/s/06bf5581191df6bac753ac4b02e88f23033fea8ce1e9957e9e41c14e02475fb4" protocol=ttrpc version=3 Jul 15 05:08:36.628801 systemd[1]: Started 
cri-containerd-1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e.scope - libcontainer container 1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e. Jul 15 05:08:36.700333 containerd[1563]: time="2025-07-15T05:08:36.700282556Z" level=info msg="StartContainer for \"1d55bb9cfd5bd1a8ccff5b675ead2ef80a73f51987965f95451f138c1ffdce6e\" returns successfully" Jul 15 05:08:37.638175 kubelet[2724]: E0715 05:08:37.638130 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:37.735439 kubelet[2724]: E0715 05:08:37.735389 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.735439 kubelet[2724]: W0715 05:08:37.735417 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.735439 kubelet[2724]: E0715 05:08:37.735450 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.735712 kubelet[2724]: E0715 05:08:37.735689 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.735712 kubelet[2724]: W0715 05:08:37.735706 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.735782 kubelet[2724]: E0715 05:08:37.735719 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.736029 kubelet[2724]: E0715 05:08:37.736007 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.736029 kubelet[2724]: W0715 05:08:37.736019 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.736029 kubelet[2724]: E0715 05:08:37.736029 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.736363 kubelet[2724]: E0715 05:08:37.736339 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.736363 kubelet[2724]: W0715 05:08:37.736354 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.736363 kubelet[2724]: E0715 05:08:37.736363 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:37.736551 kubelet[2724]: E0715 05:08:37.736535 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.736551 kubelet[2724]: W0715 05:08:37.736544 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.736619 kubelet[2724]: E0715 05:08:37.736552 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.736732 kubelet[2724]: E0715 05:08:37.736716 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.736732 kubelet[2724]: W0715 05:08:37.736726 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.736786 kubelet[2724]: E0715 05:08:37.736734 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.736930 kubelet[2724]: E0715 05:08:37.736913 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.736930 kubelet[2724]: W0715 05:08:37.736924 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.736976 kubelet[2724]: E0715 05:08:37.736933 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.737147 kubelet[2724]: E0715 05:08:37.737115 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.737147 kubelet[2724]: W0715 05:08:37.737130 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.737147 kubelet[2724]: E0715 05:08:37.737142 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.737474 kubelet[2724]: E0715 05:08:37.737459 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.737474 kubelet[2724]: W0715 05:08:37.737472 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.737541 kubelet[2724]: E0715 05:08:37.737483 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:37.737712 kubelet[2724]: E0715 05:08:37.737681 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.737712 kubelet[2724]: W0715 05:08:37.737694 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.737712 kubelet[2724]: E0715 05:08:37.737704 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.737944 kubelet[2724]: E0715 05:08:37.737881 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.737944 kubelet[2724]: W0715 05:08:37.737891 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.737944 kubelet[2724]: E0715 05:08:37.737901 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.738110 kubelet[2724]: E0715 05:08:37.738093 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.738110 kubelet[2724]: W0715 05:08:37.738105 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.738159 kubelet[2724]: E0715 05:08:37.738113 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.738369 kubelet[2724]: E0715 05:08:37.738349 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.738369 kubelet[2724]: W0715 05:08:37.738365 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.738444 kubelet[2724]: E0715 05:08:37.738377 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.738659 kubelet[2724]: E0715 05:08:37.738626 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.738659 kubelet[2724]: W0715 05:08:37.738651 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.738753 kubelet[2724]: E0715 05:08:37.738682 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:37.739057 kubelet[2724]: E0715 05:08:37.739036 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.739057 kubelet[2724]: W0715 05:08:37.739047 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.739057 kubelet[2724]: E0715 05:08:37.739056 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.827910 kubelet[2724]: E0715 05:08:37.827854 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.827910 kubelet[2724]: W0715 05:08:37.827883 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.827910 kubelet[2724]: E0715 05:08:37.827906 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.828165 kubelet[2724]: E0715 05:08:37.828144 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.828165 kubelet[2724]: W0715 05:08:37.828158 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.828213 kubelet[2724]: E0715 05:08:37.828169 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.828517 kubelet[2724]: E0715 05:08:37.828491 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.828517 kubelet[2724]: W0715 05:08:37.828506 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.828517 kubelet[2724]: E0715 05:08:37.828517 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.828733 kubelet[2724]: E0715 05:08:37.828709 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.828733 kubelet[2724]: W0715 05:08:37.828720 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.828733 kubelet[2724]: E0715 05:08:37.828728 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:37.828899 kubelet[2724]: E0715 05:08:37.828885 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.828899 kubelet[2724]: W0715 05:08:37.828894 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.828950 kubelet[2724]: E0715 05:08:37.828902 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.829128 kubelet[2724]: E0715 05:08:37.829110 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.829128 kubelet[2724]: W0715 05:08:37.829121 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.829187 kubelet[2724]: E0715 05:08:37.829133 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.829457 kubelet[2724]: E0715 05:08:37.829437 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.829457 kubelet[2724]: W0715 05:08:37.829452 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.829521 kubelet[2724]: E0715 05:08:37.829464 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.829683 kubelet[2724]: E0715 05:08:37.829666 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.829683 kubelet[2724]: W0715 05:08:37.829677 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.829724 kubelet[2724]: E0715 05:08:37.829686 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.829855 kubelet[2724]: E0715 05:08:37.829840 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.829855 kubelet[2724]: W0715 05:08:37.829850 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.829855 kubelet[2724]: E0715 05:08:37.829857 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:37.830005 kubelet[2724]: E0715 05:08:37.829989 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.830005 kubelet[2724]: W0715 05:08:37.830001 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.830064 kubelet[2724]: E0715 05:08:37.830011 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.830213 kubelet[2724]: E0715 05:08:37.830195 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.830213 kubelet[2724]: W0715 05:08:37.830206 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.830294 kubelet[2724]: E0715 05:08:37.830214 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.830506 kubelet[2724]: E0715 05:08:37.830489 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.830506 kubelet[2724]: W0715 05:08:37.830503 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.830568 kubelet[2724]: E0715 05:08:37.830515 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.830762 kubelet[2724]: E0715 05:08:37.830732 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.830762 kubelet[2724]: W0715 05:08:37.830746 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.830762 kubelet[2724]: E0715 05:08:37.830756 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.830972 kubelet[2724]: E0715 05:08:37.830959 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.830972 kubelet[2724]: W0715 05:08:37.830970 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.831018 kubelet[2724]: E0715 05:08:37.830988 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:37.831178 kubelet[2724]: E0715 05:08:37.831165 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.831178 kubelet[2724]: W0715 05:08:37.831175 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.831260 kubelet[2724]: E0715 05:08:37.831185 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.831409 kubelet[2724]: E0715 05:08:37.831396 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.831409 kubelet[2724]: W0715 05:08:37.831406 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.831459 kubelet[2724]: E0715 05:08:37.831416 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.831733 kubelet[2724]: E0715 05:08:37.831715 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.831777 kubelet[2724]: W0715 05:08:37.831731 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.831777 kubelet[2724]: E0715 05:08:37.831745 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:37.832108 kubelet[2724]: E0715 05:08:37.832095 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:37.832108 kubelet[2724]: W0715 05:08:37.832105 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:37.832170 kubelet[2724]: E0715 05:08:37.832114 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.011272 kubelet[2724]: I0715 05:08:38.011152 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-785b946965-fhn9z" podStartSLOduration=2.579209577 podStartE2EDuration="9.011133618s" podCreationTimestamp="2025-07-15 05:08:29 +0000 UTC" firstStartedPulling="2025-07-15 05:08:30.111730589 +0000 UTC m=+28.807787202" lastFinishedPulling="2025-07-15 05:08:36.54365462 +0000 UTC m=+35.239711243" observedRunningTime="2025-07-15 05:08:38.011034894 +0000 UTC m=+36.707091537" watchObservedRunningTime="2025-07-15 05:08:38.011133618 +0000 UTC m=+36.707190231" Jul 15 05:08:38.541428 kubelet[2724]: E0715 05:08:38.540929 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:38.640171 kubelet[2724]: I0715 05:08:38.639448 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:08:38.640895 kubelet[2724]: E0715 05:08:38.640849 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:38.647165 kubelet[2724]: E0715 05:08:38.646459 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.647165 kubelet[2724]: W0715 05:08:38.646505 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.647165 kubelet[2724]: E0715 05:08:38.646541 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.647165 kubelet[2724]: E0715 05:08:38.646849 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.647165 kubelet[2724]: W0715 05:08:38.646859 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.647165 kubelet[2724]: E0715 05:08:38.646871 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.647540 kubelet[2724]: E0715 05:08:38.647291 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.647540 kubelet[2724]: W0715 05:08:38.647303 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.647540 kubelet[2724]: E0715 05:08:38.647316 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.647636 kubelet[2724]: E0715 05:08:38.647582 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.647636 kubelet[2724]: W0715 05:08:38.647592 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.647636 kubelet[2724]: E0715 05:08:38.647604 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.647974 kubelet[2724]: E0715 05:08:38.647954 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.648039 kubelet[2724]: W0715 05:08:38.647967 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.648039 kubelet[2724]: E0715 05:08:38.648006 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.648215 kubelet[2724]: E0715 05:08:38.648200 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.648284 kubelet[2724]: W0715 05:08:38.648247 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.648284 kubelet[2724]: E0715 05:08:38.648261 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.648513 kubelet[2724]: E0715 05:08:38.648462 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.648513 kubelet[2724]: W0715 05:08:38.648476 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.648513 kubelet[2724]: E0715 05:08:38.648488 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.648797 kubelet[2724]: E0715 05:08:38.648688 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.648797 kubelet[2724]: W0715 05:08:38.648698 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.648797 kubelet[2724]: E0715 05:08:38.648708 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.648913 kubelet[2724]: E0715 05:08:38.648891 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.648913 kubelet[2724]: W0715 05:08:38.648907 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.648994 kubelet[2724]: E0715 05:08:38.648918 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.649120 kubelet[2724]: E0715 05:08:38.649091 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.649120 kubelet[2724]: W0715 05:08:38.649105 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.649120 kubelet[2724]: E0715 05:08:38.649116 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.649323 kubelet[2724]: E0715 05:08:38.649305 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.649323 kubelet[2724]: W0715 05:08:38.649317 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.649406 kubelet[2724]: E0715 05:08:38.649327 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.649525 kubelet[2724]: E0715 05:08:38.649506 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.649525 kubelet[2724]: W0715 05:08:38.649519 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.649596 kubelet[2724]: E0715 05:08:38.649528 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.649743 kubelet[2724]: E0715 05:08:38.649711 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.649743 kubelet[2724]: W0715 05:08:38.649727 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.649743 kubelet[2724]: E0715 05:08:38.649737 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.649944 kubelet[2724]: E0715 05:08:38.649927 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.649944 kubelet[2724]: W0715 05:08:38.649939 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.650006 kubelet[2724]: E0715 05:08:38.649948 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.650158 kubelet[2724]: E0715 05:08:38.650136 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.650158 kubelet[2724]: W0715 05:08:38.650150 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.650158 kubelet[2724]: E0715 05:08:38.650160 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.736481 kubelet[2724]: E0715 05:08:38.736420 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.736481 kubelet[2724]: W0715 05:08:38.736452 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.736481 kubelet[2724]: E0715 05:08:38.736480 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.737092 kubelet[2724]: E0715 05:08:38.736995 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.737092 kubelet[2724]: W0715 05:08:38.737017 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.737092 kubelet[2724]: E0715 05:08:38.737028 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.737501 kubelet[2724]: E0715 05:08:38.737428 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.737501 kubelet[2724]: W0715 05:08:38.737475 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.737501 kubelet[2724]: E0715 05:08:38.737506 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.738128 kubelet[2724]: E0715 05:08:38.738064 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.738128 kubelet[2724]: W0715 05:08:38.738105 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.738243 kubelet[2724]: E0715 05:08:38.738136 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.738504 kubelet[2724]: E0715 05:08:38.738457 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.738504 kubelet[2724]: W0715 05:08:38.738473 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.738504 kubelet[2724]: E0715 05:08:38.738485 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.738825 kubelet[2724]: E0715 05:08:38.738755 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.738825 kubelet[2724]: W0715 05:08:38.738767 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.738825 kubelet[2724]: E0715 05:08:38.738776 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.739021 kubelet[2724]: E0715 05:08:38.738977 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.739021 kubelet[2724]: W0715 05:08:38.738987 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.739021 kubelet[2724]: E0715 05:08:38.738996 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.739243 kubelet[2724]: E0715 05:08:38.739205 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.739243 kubelet[2724]: W0715 05:08:38.739220 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.739325 kubelet[2724]: E0715 05:08:38.739252 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.739540 kubelet[2724]: E0715 05:08:38.739496 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.739540 kubelet[2724]: W0715 05:08:38.739510 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.739540 kubelet[2724]: E0715 05:08:38.739519 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.740707 kubelet[2724]: E0715 05:08:38.740683 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.740707 kubelet[2724]: W0715 05:08:38.740700 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.740798 kubelet[2724]: E0715 05:08:38.740714 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.742533 kubelet[2724]: E0715 05:08:38.742366 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.742533 kubelet[2724]: W0715 05:08:38.742395 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.742533 kubelet[2724]: E0715 05:08:38.742411 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.742820 kubelet[2724]: E0715 05:08:38.742671 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.742820 kubelet[2724]: W0715 05:08:38.742679 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.742820 kubelet[2724]: E0715 05:08:38.742688 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.742918 kubelet[2724]: E0715 05:08:38.742902 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.742918 kubelet[2724]: W0715 05:08:38.742912 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.742992 kubelet[2724]: E0715 05:08:38.742923 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:38.743407 kubelet[2724]: E0715 05:08:38.743359 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.743407 kubelet[2724]: W0715 05:08:38.743396 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.743512 kubelet[2724]: E0715 05:08:38.743429 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.743989 kubelet[2724]: E0715 05:08:38.743953 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.743989 kubelet[2724]: W0715 05:08:38.743970 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.743989 kubelet[2724]: E0715 05:08:38.743982 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.744306 kubelet[2724]: E0715 05:08:38.744288 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.744306 kubelet[2724]: W0715 05:08:38.744301 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.744426 kubelet[2724]: E0715 05:08:38.744312 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.744698 kubelet[2724]: E0715 05:08:38.744596 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.744698 kubelet[2724]: W0715 05:08:38.744616 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.744698 kubelet[2724]: E0715 05:08:38.744627 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 05:08:38.744859 kubelet[2724]: E0715 05:08:38.744842 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 05:08:38.744859 kubelet[2724]: W0715 05:08:38.744854 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 05:08:38.744938 kubelet[2724]: E0715 05:08:38.744865 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 05:08:39.355697 containerd[1563]: time="2025-07-15T05:08:39.355605881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:39.356952 containerd[1563]: time="2025-07-15T05:08:39.356904025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 05:08:39.359643 containerd[1563]: time="2025-07-15T05:08:39.359528550Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:39.363015 containerd[1563]: time="2025-07-15T05:08:39.362965507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:39.363611 containerd[1563]: time="2025-07-15T05:08:39.363575352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.819674719s" Jul 15 05:08:39.363702 containerd[1563]: time="2025-07-15T05:08:39.363611423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 05:08:39.373370 containerd[1563]: time="2025-07-15T05:08:39.373313080Z" level=info msg="CreateContainer within sandbox \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 05:08:39.398268 containerd[1563]: time="2025-07-15T05:08:39.395466214Z" level=info msg="Container 8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:39.419706 containerd[1563]: time="2025-07-15T05:08:39.419619119Z" level=info msg="CreateContainer within sandbox \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\"" Jul 15 05:08:39.420632 containerd[1563]: time="2025-07-15T05:08:39.420569101Z" level=info msg="StartContainer for \"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\"" Jul 15 05:08:39.422639 containerd[1563]: time="2025-07-15T05:08:39.422545384Z" level=info msg="connecting to shim 8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89" address="unix:///run/containerd/s/ae5a3433fc6dd9d1bd53158a7c7495feb2cf7607b8f0a40e831cccea07c36a3a" protocol=ttrpc version=3 Jul 15 05:08:39.443468 systemd[1]: Started cri-containerd-8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89.scope - libcontainer container 8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89. Jul 15 05:08:39.512848 systemd[1]: cri-containerd-8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89.scope: Deactivated successfully. 
Jul 15 05:08:39.515121 containerd[1563]: time="2025-07-15T05:08:39.515016376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\" id:\"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\" pid:3476 exited_at:{seconds:1752556119 nanos:514012558}" Jul 15 05:08:39.585574 containerd[1563]: time="2025-07-15T05:08:39.585433487Z" level=info msg="received exit event container_id:\"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\" id:\"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\" pid:3476 exited_at:{seconds:1752556119 nanos:514012558}" Jul 15 05:08:39.589905 containerd[1563]: time="2025-07-15T05:08:39.589845866Z" level=info msg="StartContainer for \"8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89\" returns successfully" Jul 15 05:08:39.616760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ce089ca9799cd32a685e76176b10822382b3f03525f13bc52aebdde0936df89-rootfs.mount: Deactivated successfully. Jul 15 05:08:40.541395 kubelet[2724]: E0715 05:08:40.541354 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:40.658197 containerd[1563]: time="2025-07-15T05:08:40.657830145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 05:08:42.541465 kubelet[2724]: E0715 05:08:42.541405 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:44.542103 kubelet[2724]: E0715 05:08:44.540951 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:45.216414 containerd[1563]: time="2025-07-15T05:08:45.216341976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:45.217858 containerd[1563]: time="2025-07-15T05:08:45.217806696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 05:08:45.219262 containerd[1563]: time="2025-07-15T05:08:45.219206950Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:45.222988 containerd[1563]: time="2025-07-15T05:08:45.222939377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:08:45.223742 containerd[1563]: time="2025-07-15T05:08:45.223668621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.565789652s" Jul 15 05:08:45.223742 containerd[1563]: time="2025-07-15T05:08:45.223702478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 05:08:45.493509 containerd[1563]: time="2025-07-15T05:08:45.493459443Z" level=info msg="CreateContainer within sandbox \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 05:08:46.063058 containerd[1563]: time="2025-07-15T05:08:46.062979640Z" level=info msg="Container 917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:08:46.540679 kubelet[2724]: E0715 05:08:46.540587 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:46.557473 containerd[1563]: time="2025-07-15T05:08:46.557396146Z" level=info msg="CreateContainer within sandbox \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\"" Jul 15 05:08:46.558050 containerd[1563]: time="2025-07-15T05:08:46.558010264Z" level=info msg="StartContainer for \"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\"" Jul 15 05:08:46.559624 containerd[1563]: time="2025-07-15T05:08:46.559588234Z" level=info msg="connecting to shim 917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5" address="unix:///run/containerd/s/ae5a3433fc6dd9d1bd53158a7c7495feb2cf7607b8f0a40e831cccea07c36a3a" protocol=ttrpc version=3 Jul 15 05:08:46.582422 systemd[1]: Started cri-containerd-917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5.scope - libcontainer container 917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5. 
Jul 15 05:08:47.006338 containerd[1563]: time="2025-07-15T05:08:47.006202988Z" level=info msg="StartContainer for \"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\" returns successfully" Jul 15 05:08:47.715548 kubelet[2724]: I0715 05:08:47.715496 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:08:47.716053 kubelet[2724]: E0715 05:08:47.715881 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:48.007924 kubelet[2724]: E0715 05:08:48.007892 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:48.540561 kubelet[2724]: E0715 05:08:48.540507 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:50.541495 kubelet[2724]: E0715 05:08:50.541394 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:51.384214 systemd[1]: cri-containerd-917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5.scope: Deactivated successfully. Jul 15 05:08:51.384643 systemd[1]: cri-containerd-917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5.scope: Consumed 631ms CPU time, 178.2M memory peak, 3.7M read from disk, 171.2M written to disk. Jul 15 05:08:51.385444 containerd[1563]: time="2025-07-15T05:08:51.385393261Z" level=info msg="received exit event container_id:\"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\" id:\"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\" pid:3533 exited_at:{seconds:1752556131 nanos:385115089}" Jul 15 05:08:51.385781 containerd[1563]: time="2025-07-15T05:08:51.385501381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\" id:\"917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5\" pid:3533 exited_at:{seconds:1752556131 nanos:385115089}" Jul 15 05:08:51.410535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917d2bdaae0d5614f3263cc772ea684be1261938135a7536646d7f06ba7897a5-rootfs.mount: Deactivated successfully. Jul 15 05:08:51.413449 kubelet[2724]: I0715 05:08:51.413421 2724 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 05:08:51.909789 systemd[1]: Created slice kubepods-burstable-pod80af8562_af92_4803_83b3_1cf40e3ede5c.slice - libcontainer container kubepods-burstable-pod80af8562_af92_4803_83b3_1cf40e3ede5c.slice. 
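The recurring "Nameserver limits exceeded" warning means the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line in the message keeps only 1.1.1.1, 1.0.0.1 and 8.8.8.8. A quick check along those lines, assuming the conventional glibc/kubelet limit of three nameservers and the default /etc/resolv.conf path:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the limit the kubelet warning above refers to
// (assumption: the usual glibc/kubelet value of 3).
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only %v would be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("ok: %v\n", servers)
	}
}

The warning repeats here because the node's resolv.conf still lists extra servers; the kubelet simply truncates the list it hands to pods and logs each time it does so.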
Jul 15 05:08:52.027583 kubelet[2724]: I0715 05:08:52.027530 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80af8562-af92-4803-83b3-1cf40e3ede5c-config-volume\") pod \"coredns-674b8bbfcf-lt9vr\" (UID: \"80af8562-af92-4803-83b3-1cf40e3ede5c\") " pod="kube-system/coredns-674b8bbfcf-lt9vr" Jul 15 05:08:52.027583 kubelet[2724]: I0715 05:08:52.027570 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cfmg\" (UniqueName: \"kubernetes.io/projected/80af8562-af92-4803-83b3-1cf40e3ede5c-kube-api-access-6cfmg\") pod \"coredns-674b8bbfcf-lt9vr\" (UID: \"80af8562-af92-4803-83b3-1cf40e3ede5c\") " pod="kube-system/coredns-674b8bbfcf-lt9vr" Jul 15 05:08:52.035048 systemd[1]: Created slice kubepods-besteffort-pode057ae83_e632_47e8_840e_986d27f7a220.slice - libcontainer container kubepods-besteffort-pode057ae83_e632_47e8_840e_986d27f7a220.slice. Jul 15 05:08:52.128948 kubelet[2724]: I0715 05:08:52.128880 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e057ae83-e632-47e8-840e-986d27f7a220-whisker-backend-key-pair\") pod \"whisker-54d77d9754-rj5ss\" (UID: \"e057ae83-e632-47e8-840e-986d27f7a220\") " pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:08:52.128948 kubelet[2724]: I0715 05:08:52.128941 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e057ae83-e632-47e8-840e-986d27f7a220-whisker-ca-bundle\") pod \"whisker-54d77d9754-rj5ss\" (UID: \"e057ae83-e632-47e8-840e-986d27f7a220\") " pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:08:52.129284 kubelet[2724]: I0715 05:08:52.129206 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95wnr\" (UniqueName: \"kubernetes.io/projected/e057ae83-e632-47e8-840e-986d27f7a220-kube-api-access-95wnr\") pod \"whisker-54d77d9754-rj5ss\" (UID: \"e057ae83-e632-47e8-840e-986d27f7a220\") " pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:08:52.182691 systemd[1]: Created slice kubepods-besteffort-pod3b55c554_7db1_4ffb_9809_1daea65d2564.slice - libcontainer container kubepods-besteffort-pod3b55c554_7db1_4ffb_9809_1daea65d2564.slice. Jul 15 05:08:52.193638 containerd[1563]: time="2025-07-15T05:08:52.193560589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 05:08:52.199812 systemd[1]: Created slice kubepods-besteffort-podf4760452_f2f4_4f60_96c0_98723b588bdd.slice - libcontainer container kubepods-besteffort-podf4760452_f2f4_4f60_96c0_98723b588bdd.slice. Jul 15 05:08:52.208403 systemd[1]: Created slice kubepods-burstable-podc2b8bd3f_5cc6_4606_9465_ac2a98d9d525.slice - libcontainer container kubepods-burstable-podc2b8bd3f_5cc6_4606_9465_ac2a98d9d525.slice. 
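The "Created slice kubepods-burstable-pod80af8562_af92_4803_83b3_1cf40e3ede5c.slice" entries show how the kubelet's systemd cgroup driver derives slice names: the pod's QoS class plus its UID with dashes replaced by underscores (compare the UID 80af8562-af92-4803-83b3-1cf40e3ede5c in the volume messages above). A small illustration of that mapping; the helper name is made up, and the QoS prefixes are taken from the slices visible in this log:

package main

import (
	"fmt"
	"strings"
)

// sliceName builds the systemd slice name the log shows for a pod, e.g.
// kubepods-burstable-pod80af8562_af92_4803_83b3_1cf40e3ede5c.slice.
// qos is "burstable" or "besteffort" as seen above; guaranteed pods sit
// directly under kubepods and take no QoS segment.
func sliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(sliceName("burstable", "80af8562-af92-4803-83b3-1cf40e3ede5c"))
	fmt.Println(sliceName("besteffort", "e057ae83-e632-47e8-840e-986d27f7a220"))
}

Reversing the mapping is handy when correlating systemd's slice messages with the pod UIDs the kubelet prints elsewhere in the log.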
Jul 15 05:08:52.213308 kubelet[2724]: E0715 05:08:52.213256 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:52.214185 containerd[1563]: time="2025-07-15T05:08:52.214109120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lt9vr,Uid:80af8562-af92-4803-83b3-1cf40e3ede5c,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:52.217285 systemd[1]: Created slice kubepods-besteffort-pod7586dcae_21ff_4670_8ba2_42aabb2605ad.slice - libcontainer container kubepods-besteffort-pod7586dcae_21ff_4670_8ba2_42aabb2605ad.slice. Jul 15 05:08:52.255624 systemd[1]: Created slice kubepods-besteffort-pod159a15a3_515e_4594_b1a8_5a98faa45752.slice - libcontainer container kubepods-besteffort-pod159a15a3_515e_4594_b1a8_5a98faa45752.slice. Jul 15 05:08:52.330076 kubelet[2724]: I0715 05:08:52.329991 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9gw\" (UniqueName: \"kubernetes.io/projected/159a15a3-515e-4594-b1a8-5a98faa45752-kube-api-access-2j9gw\") pod \"goldmane-768f4c5c69-zthqg\" (UID: \"159a15a3-515e-4594-b1a8-5a98faa45752\") " pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:08:52.330612 kubelet[2724]: I0715 05:08:52.330402 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7586dcae-21ff-4670-8ba2-42aabb2605ad-calico-apiserver-certs\") pod \"calico-apiserver-5dd97f6948-c8knc\" (UID: \"7586dcae-21ff-4670-8ba2-42aabb2605ad\") " pod="calico-apiserver/calico-apiserver-5dd97f6948-c8knc" Jul 15 05:08:52.330612 kubelet[2724]: I0715 05:08:52.330486 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/159a15a3-515e-4594-b1a8-5a98faa45752-config\") pod \"goldmane-768f4c5c69-zthqg\" (UID: \"159a15a3-515e-4594-b1a8-5a98faa45752\") " pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:08:52.330612 kubelet[2724]: I0715 05:08:52.330513 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2b8bd3f-5cc6-4606-9465-ac2a98d9d525-config-volume\") pod \"coredns-674b8bbfcf-wl9ww\" (UID: \"c2b8bd3f-5cc6-4606-9465-ac2a98d9d525\") " pod="kube-system/coredns-674b8bbfcf-wl9ww" Jul 15 05:08:52.331298 kubelet[2724]: I0715 05:08:52.331272 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159a15a3-515e-4594-b1a8-5a98faa45752-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-zthqg\" (UID: \"159a15a3-515e-4594-b1a8-5a98faa45752\") " pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:08:52.331639 kubelet[2724]: I0715 05:08:52.331395 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/159a15a3-515e-4594-b1a8-5a98faa45752-goldmane-key-pair\") pod \"goldmane-768f4c5c69-zthqg\" (UID: \"159a15a3-515e-4594-b1a8-5a98faa45752\") " pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:08:52.331639 kubelet[2724]: I0715 05:08:52.331426 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48vs\" (UniqueName: 
\"kubernetes.io/projected/7586dcae-21ff-4670-8ba2-42aabb2605ad-kube-api-access-n48vs\") pod \"calico-apiserver-5dd97f6948-c8knc\" (UID: \"7586dcae-21ff-4670-8ba2-42aabb2605ad\") " pod="calico-apiserver/calico-apiserver-5dd97f6948-c8knc" Jul 15 05:08:52.331639 kubelet[2724]: I0715 05:08:52.331454 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4760452-f2f4-4f60-96c0-98723b588bdd-tigera-ca-bundle\") pod \"calico-kube-controllers-649b8d6d45-t5xtk\" (UID: \"f4760452-f2f4-4f60-96c0-98723b588bdd\") " pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" Jul 15 05:08:52.331639 kubelet[2724]: I0715 05:08:52.331478 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptp8v\" (UniqueName: \"kubernetes.io/projected/f4760452-f2f4-4f60-96c0-98723b588bdd-kube-api-access-ptp8v\") pod \"calico-kube-controllers-649b8d6d45-t5xtk\" (UID: \"f4760452-f2f4-4f60-96c0-98723b588bdd\") " pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" Jul 15 05:08:52.331639 kubelet[2724]: I0715 05:08:52.331505 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b55c554-7db1-4ffb-9809-1daea65d2564-calico-apiserver-certs\") pod \"calico-apiserver-5dd97f6948-9pczz\" (UID: \"3b55c554-7db1-4ffb-9809-1daea65d2564\") " pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" Jul 15 05:08:52.331845 kubelet[2724]: I0715 05:08:52.331525 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpnp\" (UniqueName: \"kubernetes.io/projected/c2b8bd3f-5cc6-4606-9465-ac2a98d9d525-kube-api-access-mlpnp\") pod \"coredns-674b8bbfcf-wl9ww\" (UID: \"c2b8bd3f-5cc6-4606-9465-ac2a98d9d525\") " pod="kube-system/coredns-674b8bbfcf-wl9ww" Jul 15 05:08:52.331845 kubelet[2724]: I0715 05:08:52.331550 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwmmb\" (UniqueName: \"kubernetes.io/projected/3b55c554-7db1-4ffb-9809-1daea65d2564-kube-api-access-hwmmb\") pod \"calico-apiserver-5dd97f6948-9pczz\" (UID: \"3b55c554-7db1-4ffb-9809-1daea65d2564\") " pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" Jul 15 05:08:52.339162 containerd[1563]: time="2025-07-15T05:08:52.339092823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d77d9754-rj5ss,Uid:e057ae83-e632-47e8-840e-986d27f7a220,Namespace:calico-system,Attempt:0,}" Jul 15 05:08:52.340837 containerd[1563]: time="2025-07-15T05:08:52.340758065Z" level=error msg="Failed to destroy network for sandbox \"deee71126643e79845ac69cfc6be7a9840ea842f217d134dc793b24ea01e84aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.345114 containerd[1563]: time="2025-07-15T05:08:52.345038508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lt9vr,Uid:80af8562-af92-4803-83b3-1cf40e3ede5c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"deee71126643e79845ac69cfc6be7a9840ea842f217d134dc793b24ea01e84aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.345393 kubelet[2724]: E0715 05:08:52.345342 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deee71126643e79845ac69cfc6be7a9840ea842f217d134dc793b24ea01e84aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.345567 kubelet[2724]: E0715 05:08:52.345432 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deee71126643e79845ac69cfc6be7a9840ea842f217d134dc793b24ea01e84aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lt9vr" Jul 15 05:08:52.345567 kubelet[2724]: E0715 05:08:52.345457 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deee71126643e79845ac69cfc6be7a9840ea842f217d134dc793b24ea01e84aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lt9vr" Jul 15 05:08:52.345567 kubelet[2724]: E0715 05:08:52.345526 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lt9vr_kube-system(80af8562-af92-4803-83b3-1cf40e3ede5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lt9vr_kube-system(80af8562-af92-4803-83b3-1cf40e3ede5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deee71126643e79845ac69cfc6be7a9840ea842f217d134dc793b24ea01e84aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lt9vr" podUID="80af8562-af92-4803-83b3-1cf40e3ede5c" Jul 15 05:08:52.403003 containerd[1563]: time="2025-07-15T05:08:52.402919033Z" level=error msg="Failed to destroy network for sandbox \"ab4c7615b66acac2d7ff21480e9046fd022f0892a0ed56a56d17a00f857e4a65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.404682 containerd[1563]: time="2025-07-15T05:08:52.404599183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d77d9754-rj5ss,Uid:e057ae83-e632-47e8-840e-986d27f7a220,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4c7615b66acac2d7ff21480e9046fd022f0892a0ed56a56d17a00f857e4a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.405020 kubelet[2724]: E0715 05:08:52.404956 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4c7615b66acac2d7ff21480e9046fd022f0892a0ed56a56d17a00f857e4a65\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.405085 kubelet[2724]: E0715 05:08:52.405047 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4c7615b66acac2d7ff21480e9046fd022f0892a0ed56a56d17a00f857e4a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:08:52.405085 kubelet[2724]: E0715 05:08:52.405073 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4c7615b66acac2d7ff21480e9046fd022f0892a0ed56a56d17a00f857e4a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:08:52.405182 kubelet[2724]: E0715 05:08:52.405146 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54d77d9754-rj5ss_calico-system(e057ae83-e632-47e8-840e-986d27f7a220)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54d77d9754-rj5ss_calico-system(e057ae83-e632-47e8-840e-986d27f7a220)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab4c7615b66acac2d7ff21480e9046fd022f0892a0ed56a56d17a00f857e4a65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54d77d9754-rj5ss" podUID="e057ae83-e632-47e8-840e-986d27f7a220" Jul 15 05:08:52.409697 systemd[1]: run-netns-cni\x2de00ec263\x2dc366\x2ddf9a\x2d7559\x2d18e440ff1f13.mount: Deactivated successfully. Jul 15 05:08:52.495055 containerd[1563]: time="2025-07-15T05:08:52.494995687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-9pczz,Uid:3b55c554-7db1-4ffb-9809-1daea65d2564,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:08:52.506120 containerd[1563]: time="2025-07-15T05:08:52.506064545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649b8d6d45-t5xtk,Uid:f4760452-f2f4-4f60-96c0-98723b588bdd,Namespace:calico-system,Attempt:0,}" Jul 15 05:08:52.514035 kubelet[2724]: E0715 05:08:52.513986 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:08:52.514842 containerd[1563]: time="2025-07-15T05:08:52.514805000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wl9ww,Uid:c2b8bd3f-5cc6-4606-9465-ac2a98d9d525,Namespace:kube-system,Attempt:0,}" Jul 15 05:08:52.532620 containerd[1563]: time="2025-07-15T05:08:52.532163630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-c8knc,Uid:7586dcae-21ff-4670-8ba2-42aabb2605ad,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:08:52.551055 systemd[1]: Created slice kubepods-besteffort-podb9ac2d87_9174_4659_a668_21a00a83e356.slice - libcontainer container kubepods-besteffort-podb9ac2d87_9174_4659_a668_21a00a83e356.slice. 
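Every sandbox failure in this stretch carries the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only exists once the calico/node container has started and written it, and the node image (ghcr.io/flatcar/calico/node:v3.30.2) is only being pulled at this point. A trivial diagnostic mirroring what the error message asks you to check (path taken verbatim from the log; this is not the plugin's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	const nodename = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodename)
	if err != nil {
		if os.IsNotExist(err) {
			// Same condition the CNI plugin reports:
			// "check that the calico/node container is running and has mounted /var/lib/calico/".
			fmt.Printf("%s missing: calico/node has not initialised this node yet\n", nodename)
			os.Exit(1)
		}
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("calico node name: %s\n", data)
}

The RunPodSandbox retries visible further down (05:09:03 to 05:09:05) still fail for the same reason, because calico-node has not finished starting by then.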
Jul 15 05:08:52.563642 containerd[1563]: time="2025-07-15T05:08:52.563578326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zthqg,Uid:159a15a3-515e-4594-b1a8-5a98faa45752,Namespace:calico-system,Attempt:0,}" Jul 15 05:08:52.564081 containerd[1563]: time="2025-07-15T05:08:52.564027459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ccf82,Uid:b9ac2d87-9174-4659-a668-21a00a83e356,Namespace:calico-system,Attempt:0,}" Jul 15 05:08:52.588960 containerd[1563]: time="2025-07-15T05:08:52.588739857Z" level=error msg="Failed to destroy network for sandbox \"b2e4bb54736b4adc247ea0054f1e2b2b2c87e49f09b48d8462d15dac519ad522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.614940 containerd[1563]: time="2025-07-15T05:08:52.614737937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-9pczz,Uid:3b55c554-7db1-4ffb-9809-1daea65d2564,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4bb54736b4adc247ea0054f1e2b2b2c87e49f09b48d8462d15dac519ad522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.615313 kubelet[2724]: E0715 05:08:52.615093 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4bb54736b4adc247ea0054f1e2b2b2c87e49f09b48d8462d15dac519ad522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.615313 kubelet[2724]: E0715 05:08:52.615176 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4bb54736b4adc247ea0054f1e2b2b2c87e49f09b48d8462d15dac519ad522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" Jul 15 05:08:52.615313 kubelet[2724]: E0715 05:08:52.615212 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4bb54736b4adc247ea0054f1e2b2b2c87e49f09b48d8462d15dac519ad522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" Jul 15 05:08:52.615443 kubelet[2724]: E0715 05:08:52.615305 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd97f6948-9pczz_calico-apiserver(3b55c554-7db1-4ffb-9809-1daea65d2564)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd97f6948-9pczz_calico-apiserver(3b55c554-7db1-4ffb-9809-1daea65d2564)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e4bb54736b4adc247ea0054f1e2b2b2c87e49f09b48d8462d15dac519ad522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" podUID="3b55c554-7db1-4ffb-9809-1daea65d2564" Jul 15 05:08:52.637672 containerd[1563]: time="2025-07-15T05:08:52.637599030Z" level=error msg="Failed to destroy network for sandbox \"f97809fd2a172a1fe6667d7e4e9ef72c7bae4c2682fd8bbfde2ca754777c7676\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.645494 containerd[1563]: time="2025-07-15T05:08:52.645295162Z" level=error msg="Failed to destroy network for sandbox \"58e34ccf9d2c008a5c4b7605dfeb88c4f7c1e08f56e88337a2724ba03b124d93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.646252 containerd[1563]: time="2025-07-15T05:08:52.646150066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wl9ww,Uid:c2b8bd3f-5cc6-4606-9465-ac2a98d9d525,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97809fd2a172a1fe6667d7e4e9ef72c7bae4c2682fd8bbfde2ca754777c7676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.646745 kubelet[2724]: E0715 05:08:52.646691 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97809fd2a172a1fe6667d7e4e9ef72c7bae4c2682fd8bbfde2ca754777c7676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.646818 kubelet[2724]: E0715 05:08:52.646779 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97809fd2a172a1fe6667d7e4e9ef72c7bae4c2682fd8bbfde2ca754777c7676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wl9ww" Jul 15 05:08:52.646866 kubelet[2724]: E0715 05:08:52.646810 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97809fd2a172a1fe6667d7e4e9ef72c7bae4c2682fd8bbfde2ca754777c7676\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wl9ww" Jul 15 05:08:52.647247 kubelet[2724]: E0715 05:08:52.646882 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wl9ww_kube-system(c2b8bd3f-5cc6-4606-9465-ac2a98d9d525)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wl9ww_kube-system(c2b8bd3f-5cc6-4606-9465-ac2a98d9d525)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f97809fd2a172a1fe6667d7e4e9ef72c7bae4c2682fd8bbfde2ca754777c7676\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wl9ww" podUID="c2b8bd3f-5cc6-4606-9465-ac2a98d9d525" Jul 15 05:08:52.653269 containerd[1563]: time="2025-07-15T05:08:52.653182667Z" level=error msg="Failed to destroy network for sandbox \"128a5893a701df9ae5b726e4fd2eef3ec18aa4f24aa946eaffc4cd766749d6b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.655468 containerd[1563]: time="2025-07-15T05:08:52.655386877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649b8d6d45-t5xtk,Uid:f4760452-f2f4-4f60-96c0-98723b588bdd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e34ccf9d2c008a5c4b7605dfeb88c4f7c1e08f56e88337a2724ba03b124d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.655813 kubelet[2724]: E0715 05:08:52.655755 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e34ccf9d2c008a5c4b7605dfeb88c4f7c1e08f56e88337a2724ba03b124d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.655932 kubelet[2724]: E0715 05:08:52.655844 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e34ccf9d2c008a5c4b7605dfeb88c4f7c1e08f56e88337a2724ba03b124d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" Jul 15 05:08:52.655932 kubelet[2724]: E0715 05:08:52.655873 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e34ccf9d2c008a5c4b7605dfeb88c4f7c1e08f56e88337a2724ba03b124d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" Jul 15 05:08:52.656076 kubelet[2724]: E0715 05:08:52.655941 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-649b8d6d45-t5xtk_calico-system(f4760452-f2f4-4f60-96c0-98723b588bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-649b8d6d45-t5xtk_calico-system(f4760452-f2f4-4f60-96c0-98723b588bdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58e34ccf9d2c008a5c4b7605dfeb88c4f7c1e08f56e88337a2724ba03b124d93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" podUID="f4760452-f2f4-4f60-96c0-98723b588bdd" Jul 15 05:08:52.657297 
containerd[1563]: time="2025-07-15T05:08:52.657161251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-c8knc,Uid:7586dcae-21ff-4670-8ba2-42aabb2605ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"128a5893a701df9ae5b726e4fd2eef3ec18aa4f24aa946eaffc4cd766749d6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.657623 kubelet[2724]: E0715 05:08:52.657500 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128a5893a701df9ae5b726e4fd2eef3ec18aa4f24aa946eaffc4cd766749d6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.657672 kubelet[2724]: E0715 05:08:52.657625 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128a5893a701df9ae5b726e4fd2eef3ec18aa4f24aa946eaffc4cd766749d6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd97f6948-c8knc" Jul 15 05:08:52.657825 kubelet[2724]: E0715 05:08:52.657680 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"128a5893a701df9ae5b726e4fd2eef3ec18aa4f24aa946eaffc4cd766749d6b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd97f6948-c8knc" Jul 15 05:08:52.657825 kubelet[2724]: E0715 05:08:52.657754 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd97f6948-c8knc_calico-apiserver(7586dcae-21ff-4670-8ba2-42aabb2605ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd97f6948-c8knc_calico-apiserver(7586dcae-21ff-4670-8ba2-42aabb2605ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"128a5893a701df9ae5b726e4fd2eef3ec18aa4f24aa946eaffc4cd766749d6b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd97f6948-c8knc" podUID="7586dcae-21ff-4670-8ba2-42aabb2605ad" Jul 15 05:08:52.693963 containerd[1563]: time="2025-07-15T05:08:52.693877276Z" level=error msg="Failed to destroy network for sandbox \"dcfdc3e5012cbcfca3e87a3bd0c84cda619ae1d02181ac50caf89beb6188b22f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.695986 containerd[1563]: time="2025-07-15T05:08:52.695931314Z" level=error msg="Failed to destroy network for sandbox \"e6e2f8223c37c3ba9caece66412686b0a2087a119ff9c65f23bef6e123ab2b2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.825977 containerd[1563]: time="2025-07-15T05:08:52.825764318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ccf82,Uid:b9ac2d87-9174-4659-a668-21a00a83e356,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcfdc3e5012cbcfca3e87a3bd0c84cda619ae1d02181ac50caf89beb6188b22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.826400 kubelet[2724]: E0715 05:08:52.826215 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcfdc3e5012cbcfca3e87a3bd0c84cda619ae1d02181ac50caf89beb6188b22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.826400 kubelet[2724]: E0715 05:08:52.826366 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcfdc3e5012cbcfca3e87a3bd0c84cda619ae1d02181ac50caf89beb6188b22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:52.826400 kubelet[2724]: E0715 05:08:52.826397 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcfdc3e5012cbcfca3e87a3bd0c84cda619ae1d02181ac50caf89beb6188b22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ccf82" Jul 15 05:08:52.826728 kubelet[2724]: E0715 05:08:52.826483 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ccf82_calico-system(b9ac2d87-9174-4659-a668-21a00a83e356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ccf82_calico-system(b9ac2d87-9174-4659-a668-21a00a83e356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcfdc3e5012cbcfca3e87a3bd0c84cda619ae1d02181ac50caf89beb6188b22f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ccf82" podUID="b9ac2d87-9174-4659-a668-21a00a83e356" Jul 15 05:08:52.957676 containerd[1563]: time="2025-07-15T05:08:52.957579500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zthqg,Uid:159a15a3-515e-4594-b1a8-5a98faa45752,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e2f8223c37c3ba9caece66412686b0a2087a119ff9c65f23bef6e123ab2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.957958 kubelet[2724]: E0715 05:08:52.957889 2724 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e2f8223c37c3ba9caece66412686b0a2087a119ff9c65f23bef6e123ab2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:08:52.958060 kubelet[2724]: E0715 05:08:52.957977 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e2f8223c37c3ba9caece66412686b0a2087a119ff9c65f23bef6e123ab2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:08:52.958060 kubelet[2724]: E0715 05:08:52.958002 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e2f8223c37c3ba9caece66412686b0a2087a119ff9c65f23bef6e123ab2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:08:52.958122 kubelet[2724]: E0715 05:08:52.958060 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-zthqg_calico-system(159a15a3-515e-4594-b1a8-5a98faa45752)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-zthqg_calico-system(159a15a3-515e-4594-b1a8-5a98faa45752)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6e2f8223c37c3ba9caece66412686b0a2087a119ff9c65f23bef6e123ab2b2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-zthqg" podUID="159a15a3-515e-4594-b1a8-5a98faa45752" Jul 15 05:08:59.800168 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:45614.service - OpenSSH per-connection server daemon (10.0.0.1:45614). Jul 15 05:08:59.884485 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 45614 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:08:59.887178 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:08:59.895909 systemd-logind[1524]: New session 8 of user core. Jul 15 05:08:59.901480 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 05:09:00.114189 sshd[3852]: Connection closed by 10.0.0.1 port 45614 Jul 15 05:09:00.114479 sshd-session[3849]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:00.120928 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:45614.service: Deactivated successfully. Jul 15 05:09:00.123026 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:09:00.123972 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:09:00.125748 systemd-logind[1524]: Removed session 8. 
Jul 15 05:09:03.542012 containerd[1563]: time="2025-07-15T05:09:03.541953753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d77d9754-rj5ss,Uid:e057ae83-e632-47e8-840e-986d27f7a220,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:04.541598 kubelet[2724]: E0715 05:09:04.541528 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:04.542343 containerd[1563]: time="2025-07-15T05:09:04.541936822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wl9ww,Uid:c2b8bd3f-5cc6-4606-9465-ac2a98d9d525,Namespace:kube-system,Attempt:0,}" Jul 15 05:09:04.542343 containerd[1563]: time="2025-07-15T05:09:04.541939286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649b8d6d45-t5xtk,Uid:f4760452-f2f4-4f60-96c0-98723b588bdd,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:04.818747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442602956.mount: Deactivated successfully. Jul 15 05:09:04.932994 containerd[1563]: time="2025-07-15T05:09:04.932903771Z" level=error msg="Failed to destroy network for sandbox \"99b15cae64c796393faecd6f7f19a942d17fd4953c317bff36193365c2beba5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:04.938604 systemd[1]: run-netns-cni\x2ddef910b7\x2d096e\x2df2a5\x2d366a\x2d6830ec7ae407.mount: Deactivated successfully. Jul 15 05:09:04.962451 containerd[1563]: time="2025-07-15T05:09:04.962377951Z" level=error msg="Failed to destroy network for sandbox \"315852ef2a91c0c1c899dce82fa7af3b2d8dd32def21f6f81349c22b8e060a4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:04.971165 containerd[1563]: time="2025-07-15T05:09:04.970884228Z" level=error msg="Failed to destroy network for sandbox \"058d31d1764a344d586c675b31ba2fa992f506f2dcf9fbedce0ae83d9f65831c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.137949 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:45628.service - OpenSSH per-connection server daemon (10.0.0.1:45628). Jul 15 05:09:05.426319 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 45628 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:05.428059 sshd-session[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:05.435009 systemd-logind[1524]: New session 9 of user core. Jul 15 05:09:05.443397 systemd[1]: Started session-9.scope - Session 9 of User core. 
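Every sandbox failure in these entries, on both the add and the delete path, reduces to the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running with /var/lib/calico mounted from the host. Until calico-node comes up (it does later in this log, once its image finishes pulling), every RunPodSandbox attempt fails with that stat error and the kubelet keeps retrying. Below is a minimal sketch of that readiness condition using only the Go standard library; the polling loop is illustrative, the real plugin returns the error immediately and relies on kubelet retries.

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// nodenameFile is the path the CNI plugin stats, per the
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
const nodenameFile = "/var/lib/calico/nodename"

// waitForNodename polls until calico/node has written its node name, the
// condition the plugin checks before it will set up (or tear down) a pod
// network. The polling is illustrative only.
func waitForNodename(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		data, err := os.ReadFile(nodenameFile)
		if err == nil {
			return strings.TrimSpace(string(data)), nil
		}
		if !os.IsNotExist(err) {
			return "", err // something other than "not written yet"
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("%s still missing: is calico/node running with /var/lib/calico mounted?", nodenameFile)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	name, err := waitForNodename(30 * time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}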
Jul 15 05:09:05.542059 containerd[1563]: time="2025-07-15T05:09:05.541998691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zthqg,Uid:159a15a3-515e-4594-b1a8-5a98faa45752,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:05.542373 containerd[1563]: time="2025-07-15T05:09:05.542320555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-9pczz,Uid:3b55c554-7db1-4ffb-9809-1daea65d2564,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:09:05.753431 systemd[1]: run-netns-cni\x2dffa8723b\x2d3102\x2d9e23\x2d53f5\x2d68c98c774152.mount: Deactivated successfully. Jul 15 05:09:05.753557 systemd[1]: run-netns-cni\x2dbb6a9aea\x2d5baf\x2db0fc\x2dc605\x2dc080f55d6e1e.mount: Deactivated successfully. Jul 15 05:09:05.789764 containerd[1563]: time="2025-07-15T05:09:05.789691892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d77d9754-rj5ss,Uid:e057ae83-e632-47e8-840e-986d27f7a220,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b15cae64c796393faecd6f7f19a942d17fd4953c317bff36193365c2beba5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.790119 kubelet[2724]: E0715 05:09:05.790044 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b15cae64c796393faecd6f7f19a942d17fd4953c317bff36193365c2beba5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.790545 kubelet[2724]: E0715 05:09:05.790138 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b15cae64c796393faecd6f7f19a942d17fd4953c317bff36193365c2beba5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:09:05.790545 kubelet[2724]: E0715 05:09:05.790177 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b15cae64c796393faecd6f7f19a942d17fd4953c317bff36193365c2beba5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d77d9754-rj5ss" Jul 15 05:09:05.790545 kubelet[2724]: E0715 05:09:05.790266 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54d77d9754-rj5ss_calico-system(e057ae83-e632-47e8-840e-986d27f7a220)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54d77d9754-rj5ss_calico-system(e057ae83-e632-47e8-840e-986d27f7a220)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99b15cae64c796393faecd6f7f19a942d17fd4953c317bff36193365c2beba5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54d77d9754-rj5ss" 
podUID="e057ae83-e632-47e8-840e-986d27f7a220" Jul 15 05:09:05.792198 containerd[1563]: time="2025-07-15T05:09:05.792110593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wl9ww,Uid:c2b8bd3f-5cc6-4606-9465-ac2a98d9d525,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"315852ef2a91c0c1c899dce82fa7af3b2d8dd32def21f6f81349c22b8e060a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.792452 kubelet[2724]: E0715 05:09:05.792380 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"315852ef2a91c0c1c899dce82fa7af3b2d8dd32def21f6f81349c22b8e060a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.792521 kubelet[2724]: E0715 05:09:05.792449 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"315852ef2a91c0c1c899dce82fa7af3b2d8dd32def21f6f81349c22b8e060a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wl9ww" Jul 15 05:09:05.792521 kubelet[2724]: E0715 05:09:05.792470 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"315852ef2a91c0c1c899dce82fa7af3b2d8dd32def21f6f81349c22b8e060a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wl9ww" Jul 15 05:09:05.792598 kubelet[2724]: E0715 05:09:05.792519 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wl9ww_kube-system(c2b8bd3f-5cc6-4606-9465-ac2a98d9d525)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wl9ww_kube-system(c2b8bd3f-5cc6-4606-9465-ac2a98d9d525)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"315852ef2a91c0c1c899dce82fa7af3b2d8dd32def21f6f81349c22b8e060a4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wl9ww" podUID="c2b8bd3f-5cc6-4606-9465-ac2a98d9d525" Jul 15 05:09:05.794380 containerd[1563]: time="2025-07-15T05:09:05.794218891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649b8d6d45-t5xtk,Uid:f4760452-f2f4-4f60-96c0-98723b588bdd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"058d31d1764a344d586c675b31ba2fa992f506f2dcf9fbedce0ae83d9f65831c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.794612 kubelet[2724]: E0715 05:09:05.794544 2724 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"058d31d1764a344d586c675b31ba2fa992f506f2dcf9fbedce0ae83d9f65831c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.794667 kubelet[2724]: E0715 05:09:05.794632 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"058d31d1764a344d586c675b31ba2fa992f506f2dcf9fbedce0ae83d9f65831c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" Jul 15 05:09:05.794667 kubelet[2724]: E0715 05:09:05.794661 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"058d31d1764a344d586c675b31ba2fa992f506f2dcf9fbedce0ae83d9f65831c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" Jul 15 05:09:05.794800 kubelet[2724]: E0715 05:09:05.794753 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-649b8d6d45-t5xtk_calico-system(f4760452-f2f4-4f60-96c0-98723b588bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-649b8d6d45-t5xtk_calico-system(f4760452-f2f4-4f60-96c0-98723b588bdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"058d31d1764a344d586c675b31ba2fa992f506f2dcf9fbedce0ae83d9f65831c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" podUID="f4760452-f2f4-4f60-96c0-98723b588bdd" Jul 15 05:09:05.812876 containerd[1563]: time="2025-07-15T05:09:05.812408422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 05:09:05.813127 containerd[1563]: time="2025-07-15T05:09:05.813100832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:05.815916 containerd[1563]: time="2025-07-15T05:09:05.815867286Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:05.816306 containerd[1563]: time="2025-07-15T05:09:05.816278638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.622661501s" Jul 15 05:09:05.816385 containerd[1563]: time="2025-07-15T05:09:05.816313764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" 
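For scale, the calico/node image pull that just completed reports 158500163 bytes read in 13.622661501s, roughly 151 MiB at about 11 MiB/s. A quick check of that arithmetic, with the two values copied from the entries above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both values are taken from the containerd entries above.
	const bytesRead = 158500163
	elapsed, err := time.ParseDuration("13.622661501s")
	if err != nil {
		panic(err)
	}

	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
}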
Jul 15 05:09:05.816629 containerd[1563]: time="2025-07-15T05:09:05.816604851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:05.853770 containerd[1563]: time="2025-07-15T05:09:05.853711268Z" level=info msg="CreateContainer within sandbox \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 05:09:05.887293 containerd[1563]: time="2025-07-15T05:09:05.886967516Z" level=info msg="Container d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:05.906131 containerd[1563]: time="2025-07-15T05:09:05.906072786Z" level=info msg="CreateContainer within sandbox \"d6be8084b05dbf59c5aa267c2ba1632c4ce12993a7e06d0114d36c2422bae3b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e\"" Jul 15 05:09:05.907592 containerd[1563]: time="2025-07-15T05:09:05.907552124Z" level=info msg="StartContainer for \"d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e\"" Jul 15 05:09:05.908415 containerd[1563]: time="2025-07-15T05:09:05.908314024Z" level=error msg="Failed to destroy network for sandbox \"dfd3edafc197e8bb9b76bcad8a883a5bf70346a6153da116d25aad530b4d6ce1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.910077 containerd[1563]: time="2025-07-15T05:09:05.909807307Z" level=error msg="Failed to destroy network for sandbox \"ec24ae6e0bd29e616a6036651f9013b9f944c59e539f10a622ab49bf8e8935c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.910077 containerd[1563]: time="2025-07-15T05:09:05.909922303Z" level=info msg="connecting to shim d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e" address="unix:///run/containerd/s/ae5a3433fc6dd9d1bd53158a7c7495feb2cf7607b8f0a40e831cccea07c36a3a" protocol=ttrpc version=3 Jul 15 05:09:05.910348 containerd[1563]: time="2025-07-15T05:09:05.910289542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zthqg,Uid:159a15a3-515e-4594-b1a8-5a98faa45752,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd3edafc197e8bb9b76bcad8a883a5bf70346a6153da116d25aad530b4d6ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.911333 kubelet[2724]: E0715 05:09:05.911273 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd3edafc197e8bb9b76bcad8a883a5bf70346a6153da116d25aad530b4d6ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.911333 kubelet[2724]: E0715 05:09:05.911335 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"dfd3edafc197e8bb9b76bcad8a883a5bf70346a6153da116d25aad530b4d6ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:09:05.911333 kubelet[2724]: E0715 05:09:05.911358 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd3edafc197e8bb9b76bcad8a883a5bf70346a6153da116d25aad530b4d6ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-zthqg" Jul 15 05:09:05.911626 kubelet[2724]: E0715 05:09:05.911424 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-zthqg_calico-system(159a15a3-515e-4594-b1a8-5a98faa45752)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-zthqg_calico-system(159a15a3-515e-4594-b1a8-5a98faa45752)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfd3edafc197e8bb9b76bcad8a883a5bf70346a6153da116d25aad530b4d6ce1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-zthqg" podUID="159a15a3-515e-4594-b1a8-5a98faa45752" Jul 15 05:09:05.911912 containerd[1563]: time="2025-07-15T05:09:05.911428752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-9pczz,Uid:3b55c554-7db1-4ffb-9809-1daea65d2564,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24ae6e0bd29e616a6036651f9013b9f944c59e539f10a622ab49bf8e8935c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.912086 kubelet[2724]: E0715 05:09:05.912051 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24ae6e0bd29e616a6036651f9013b9f944c59e539f10a622ab49bf8e8935c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 05:09:05.912165 kubelet[2724]: E0715 05:09:05.912110 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24ae6e0bd29e616a6036651f9013b9f944c59e539f10a622ab49bf8e8935c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" Jul 15 05:09:05.912165 kubelet[2724]: E0715 05:09:05.912132 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24ae6e0bd29e616a6036651f9013b9f944c59e539f10a622ab49bf8e8935c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" Jul 15 05:09:05.912301 kubelet[2724]: E0715 05:09:05.912198 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd97f6948-9pczz_calico-apiserver(3b55c554-7db1-4ffb-9809-1daea65d2564)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd97f6948-9pczz_calico-apiserver(3b55c554-7db1-4ffb-9809-1daea65d2564)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec24ae6e0bd29e616a6036651f9013b9f944c59e539f10a622ab49bf8e8935c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" podUID="3b55c554-7db1-4ffb-9809-1daea65d2564" Jul 15 05:09:05.920367 sshd[3970]: Connection closed by 10.0.0.1 port 45628 Jul 15 05:09:05.920944 sshd-session[3967]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:05.928199 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:45628.service: Deactivated successfully. Jul 15 05:09:05.930546 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 05:09:05.931775 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:09:05.933464 systemd-logind[1524]: Removed session 9. Jul 15 05:09:06.023537 systemd[1]: Started cri-containerd-d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e.scope - libcontainer container d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e. Jul 15 05:09:06.129252 containerd[1563]: time="2025-07-15T05:09:06.129168107Z" level=info msg="StartContainer for \"d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e\" returns successfully" Jul 15 05:09:06.221705 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 05:09:06.223474 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
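The sshd lines bracketing this stretch of the log (the Accepted publickey entries with RSA SHA256:xQteBGu1K6..., and the session teardown just above) use OpenSSH's SHA256 fingerprint form: the unpadded base64 of the SHA-256 digest of the public key's wire-format blob, which is the base64 field of an authorized_keys entry. A short sketch that reproduces the format; the key string in it is a placeholder, not the key from this host.

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"log"
	"strings"
)

// fingerprintSHA256 reproduces the "SHA256:..." form sshd logs: unpadded
// base64 of the SHA-256 digest of the raw public-key blob (the second,
// base64-encoded field of an authorized_keys entry).
func fingerprintSHA256(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("not an authorized_keys entry")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", fmt.Errorf("decoding key blob: %w", err)
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder key material for illustration only, not the host's key.
	const line = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlaceholder core@example"
	fp, err := fingerprintSHA256(line)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(fp)
}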
Jul 15 05:09:06.534395 kubelet[2724]: I0715 05:09:06.534343 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e057ae83-e632-47e8-840e-986d27f7a220-whisker-ca-bundle\") pod \"e057ae83-e632-47e8-840e-986d27f7a220\" (UID: \"e057ae83-e632-47e8-840e-986d27f7a220\") " Jul 15 05:09:06.534395 kubelet[2724]: I0715 05:09:06.534397 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95wnr\" (UniqueName: \"kubernetes.io/projected/e057ae83-e632-47e8-840e-986d27f7a220-kube-api-access-95wnr\") pod \"e057ae83-e632-47e8-840e-986d27f7a220\" (UID: \"e057ae83-e632-47e8-840e-986d27f7a220\") " Jul 15 05:09:06.534613 kubelet[2724]: I0715 05:09:06.534429 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e057ae83-e632-47e8-840e-986d27f7a220-whisker-backend-key-pair\") pod \"e057ae83-e632-47e8-840e-986d27f7a220\" (UID: \"e057ae83-e632-47e8-840e-986d27f7a220\") " Jul 15 05:09:06.535738 kubelet[2724]: I0715 05:09:06.535681 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e057ae83-e632-47e8-840e-986d27f7a220-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e057ae83-e632-47e8-840e-986d27f7a220" (UID: "e057ae83-e632-47e8-840e-986d27f7a220"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 05:09:06.539027 kubelet[2724]: I0715 05:09:06.538992 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e057ae83-e632-47e8-840e-986d27f7a220-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e057ae83-e632-47e8-840e-986d27f7a220" (UID: "e057ae83-e632-47e8-840e-986d27f7a220"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 05:09:06.539100 kubelet[2724]: I0715 05:09:06.539075 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e057ae83-e632-47e8-840e-986d27f7a220-kube-api-access-95wnr" (OuterVolumeSpecName: "kube-api-access-95wnr") pod "e057ae83-e632-47e8-840e-986d27f7a220" (UID: "e057ae83-e632-47e8-840e-986d27f7a220"). InnerVolumeSpecName "kube-api-access-95wnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 05:09:06.635508 kubelet[2724]: I0715 05:09:06.635418 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e057ae83-e632-47e8-840e-986d27f7a220-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 15 05:09:06.635508 kubelet[2724]: I0715 05:09:06.635471 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e057ae83-e632-47e8-840e-986d27f7a220-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 15 05:09:06.635508 kubelet[2724]: I0715 05:09:06.635483 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-95wnr\" (UniqueName: \"kubernetes.io/projected/e057ae83-e632-47e8-840e-986d27f7a220-kube-api-access-95wnr\") on node \"localhost\" DevicePath \"\"" Jul 15 05:09:06.754183 systemd[1]: run-netns-cni\x2d106a416c\x2dc838\x2d2a90\x2d8a41\x2d8af3865cfcd8.mount: Deactivated successfully. 
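The mount unit names in these cleanup entries show systemd's path escaping: the leading slash is dropped, the remaining slashes become dashes, and other special bytes are hex-escaped, so a literal dash becomes \x2d and the tilde in kubernetes.io~projected becomes \x7e. Below is a sketch approximating systemd-escape --path; the set of characters left unescaped (ASCII alphanumerics, underscore, dot) is an assumption that happens to reproduce the names in this log, and the real escaper's corner cases (empty paths, a leading dot) are ignored.

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: drop the leading "/",
// turn the remaining "/" separators into "-", and hex-escape every byte that
// is not an ASCII alphanumeric, "_" or "." as \xNN. The unescaped character
// set is an assumption of this sketch.
func escapePath(path string) string {
	trimmed := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(trimmed); i++ {
		c := trimmed[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the shape of the mount unit names seen above.
	fmt.Println(escapePath("/run/netns/cni-106a416c-c838-2a90-8a41-8af3865cfcd8") + ".mount")
	fmt.Println(escapePath("/var/lib/kubelet/pods/e057ae83-e632-47e8-840e-986d27f7a220/volumes/kubernetes.io~projected/kube-api-access-95wnr") + ".mount")
}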
Jul 15 05:09:06.754319 systemd[1]: run-netns-cni\x2d983e41ca\x2dfe57\x2d5a07\x2d425e\x2dacc0772e9c94.mount: Deactivated successfully. Jul 15 05:09:06.754394 systemd[1]: var-lib-kubelet-pods-e057ae83\x2de632\x2d47e8\x2d840e\x2d986d27f7a220-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d95wnr.mount: Deactivated successfully. Jul 15 05:09:06.754474 systemd[1]: var-lib-kubelet-pods-e057ae83\x2de632\x2d47e8\x2d840e\x2d986d27f7a220-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 05:09:07.095556 systemd[1]: Removed slice kubepods-besteffort-pode057ae83_e632_47e8_840e_986d27f7a220.slice - libcontainer container kubepods-besteffort-pode057ae83_e632_47e8_840e_986d27f7a220.slice. Jul 15 05:09:07.113985 kubelet[2724]: I0715 05:09:07.113795 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vlrz2" podStartSLOduration=4.944690748 podStartE2EDuration="38.113770656s" podCreationTimestamp="2025-07-15 05:08:29 +0000 UTC" firstStartedPulling="2025-07-15 05:08:32.650787317 +0000 UTC m=+31.346843930" lastFinishedPulling="2025-07-15 05:09:05.819867225 +0000 UTC m=+64.515923838" observedRunningTime="2025-07-15 05:09:07.110817733 +0000 UTC m=+65.806874356" watchObservedRunningTime="2025-07-15 05:09:07.113770656 +0000 UTC m=+65.809827269" Jul 15 05:09:07.185981 systemd[1]: Created slice kubepods-besteffort-podb3024758_bf49_4329_97db_e401827f9c09.slice - libcontainer container kubepods-besteffort-podb3024758_bf49_4329_97db_e401827f9c09.slice. Jul 15 05:09:07.261480 containerd[1563]: time="2025-07-15T05:09:07.261421788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e\" id:\"ad8aa2fc0324407a6443afad0144a28e71ffa740cef7f43af601dfb570a1f11e\" pid:4127 exit_status:1 exited_at:{seconds:1752556147 nanos:260992662}" Jul 15 05:09:07.341319 kubelet[2724]: I0715 05:09:07.341198 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3024758-bf49-4329-97db-e401827f9c09-whisker-ca-bundle\") pod \"whisker-7945779b74-nrtz2\" (UID: \"b3024758-bf49-4329-97db-e401827f9c09\") " pod="calico-system/whisker-7945779b74-nrtz2" Jul 15 05:09:07.341319 kubelet[2724]: I0715 05:09:07.341310 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3024758-bf49-4329-97db-e401827f9c09-whisker-backend-key-pair\") pod \"whisker-7945779b74-nrtz2\" (UID: \"b3024758-bf49-4329-97db-e401827f9c09\") " pod="calico-system/whisker-7945779b74-nrtz2" Jul 15 05:09:07.341319 kubelet[2724]: I0715 05:09:07.341335 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cdw6\" (UniqueName: \"kubernetes.io/projected/b3024758-bf49-4329-97db-e401827f9c09-kube-api-access-8cdw6\") pod \"whisker-7945779b74-nrtz2\" (UID: \"b3024758-bf49-4329-97db-e401827f9c09\") " pod="calico-system/whisker-7945779b74-nrtz2" Jul 15 05:09:07.493102 containerd[1563]: time="2025-07-15T05:09:07.493018569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7945779b74-nrtz2,Uid:b3024758-bf49-4329-97db-e401827f9c09,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:07.541950 containerd[1563]: time="2025-07-15T05:09:07.541868626Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-c8knc,Uid:7586dcae-21ff-4670-8ba2-42aabb2605ad,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:09:07.542793 containerd[1563]: time="2025-07-15T05:09:07.542743271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ccf82,Uid:b9ac2d87-9174-4659-a668-21a00a83e356,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:07.543207 kubelet[2724]: E0715 05:09:07.542858 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:07.544040 containerd[1563]: time="2025-07-15T05:09:07.544009983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lt9vr,Uid:80af8562-af92-4803-83b3-1cf40e3ede5c,Namespace:kube-system,Attempt:0,}" Jul 15 05:09:07.545888 kubelet[2724]: I0715 05:09:07.545841 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e057ae83-e632-47e8-840e-986d27f7a220" path="/var/lib/kubelet/pods/e057ae83-e632-47e8-840e-986d27f7a220/volumes" Jul 15 05:09:07.861728 systemd-networkd[1481]: calib439eb94313: Link UP Jul 15 05:09:07.864115 systemd-networkd[1481]: calib439eb94313: Gained carrier Jul 15 05:09:07.900423 containerd[1563]: 2025-07-15 05:09:07.603 [INFO][4178] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:09:07.900423 containerd[1563]: 2025-07-15 05:09:07.615 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0 coredns-674b8bbfcf- kube-system 80af8562-af92-4803-83b3-1cf40e3ede5c 936 0 2025-07-15 05:08:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lt9vr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib439eb94313 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-" Jul 15 05:09:07.900423 containerd[1563]: 2025-07-15 05:09:07.615 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.900423 containerd[1563]: 2025-07-15 05:09:07.766 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" HandleID="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Workload="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" HandleID="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Workload="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034e3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lt9vr", 
"timestamp":"2025-07-15 05:09:07.766870023 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.787 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" host="localhost" Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.802 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.811 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.814 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.817 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:07.900801 containerd[1563]: 2025-07-15 05:09:07.818 [INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" host="localhost" Jul 15 05:09:07.901115 containerd[1563]: 2025-07-15 05:09:07.820 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde Jul 15 05:09:07.901115 containerd[1563]: 2025-07-15 05:09:07.826 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" host="localhost" Jul 15 05:09:07.901115 containerd[1563]: 2025-07-15 05:09:07.841 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" host="localhost" Jul 15 05:09:07.901115 containerd[1563]: 2025-07-15 05:09:07.841 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" host="localhost" Jul 15 05:09:07.901115 containerd[1563]: 2025-07-15 05:09:07.841 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:09:07.901115 containerd[1563]: 2025-07-15 05:09:07.841 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" HandleID="k8s-pod-network.4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Workload="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.901305 containerd[1563]: 2025-07-15 05:09:07.846 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"80af8562-af92-4803-83b3-1cf40e3ede5c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lt9vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib439eb94313", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:07.901414 containerd[1563]: 2025-07-15 05:09:07.846 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.901414 containerd[1563]: 2025-07-15 05:09:07.846 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib439eb94313 ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.901414 containerd[1563]: 2025-07-15 05:09:07.863 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.901514 
containerd[1563]: 2025-07-15 05:09:07.863 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"80af8562-af92-4803-83b3-1cf40e3ede5c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde", Pod:"coredns-674b8bbfcf-lt9vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib439eb94313", MAC:"ba:35:f3:11:43:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:07.901514 containerd[1563]: 2025-07-15 05:09:07.892 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" Namespace="kube-system" Pod="coredns-674b8bbfcf-lt9vr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lt9vr-eth0" Jul 15 05:09:07.984590 systemd-networkd[1481]: cali578cae8e7d4: Link UP Jul 15 05:09:07.995248 systemd-networkd[1481]: cali578cae8e7d4: Gained carrier Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.592 [INFO][4167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.605 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ccf82-eth0 csi-node-driver- calico-system b9ac2d87-9174-4659-a668-21a00a83e356 804 0 2025-07-15 05:08:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ccf82 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
cali578cae8e7d4 [] [] }} ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.605 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.766 [INFO][4205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" HandleID="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Workload="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" HandleID="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Workload="localhost-k8s-csi--node--driver--ccf82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000477d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ccf82", "timestamp":"2025-07-15 05:09:07.766778502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.842 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
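The three CNI ADDs in flight here all funnel through the same host-wide IPAM lock before touching block 192.168.88.128/26, which is why the acquire/assign/release sequences in these entries interleave strictly one at a time. The sketch below only illustrates that serialization; the real lock has to coordinate separate plugin processes, so it is not literally an in-process mutex, and Calico assigns from per-block allocation state rather than a simple counter.

package main

import (
	"fmt"
	"sync"
)

// hostWideLock stands in for the "host-wide IPAM lock" in the entries above.
// The real lock must serialize separate CNI plugin invocations, so it cannot
// literally be a sync.Mutex; this only shows why the acquire/assign/release
// sequences never overlap.
var hostWideLock sync.Mutex

func assignAddress(pod string, next *int, results chan<- string) {
	fmt.Println(pod, ": about to acquire host-wide IPAM lock")
	hostWideLock.Lock()
	fmt.Println(pod, ": acquired host-wide IPAM lock")

	// Hand out the next free host address; the real IPAM works from
	// per-block allocation state, not a counter.
	ip := fmt.Sprintf("192.168.88.%d/26", *next)
	*next++

	hostWideLock.Unlock()
	fmt.Println(pod, ": released host-wide IPAM lock")
	results <- pod + " -> " + ip
}

func main() {
	next := 129 // assignments start at .129, as in the log
	results := make(chan string, 3)
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-674b8bbfcf-lt9vr", "csi-node-driver-ccf82", "whisker-7945779b74-nrtz2"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			assignAddress(p, &next, results)
		}(pod)
	}
	wg.Wait()
	close(results)
	for r := range results {
		fmt.Println(r) // which pod gets which address depends on lock order
	}
}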
Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.842 [INFO][4205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.889 [INFO][4205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.905 [INFO][4205] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.920 [INFO][4205] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.925 [INFO][4205] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.931 [INFO][4205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.931 [INFO][4205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.936 [INFO][4205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.944 [INFO][4205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.953 [INFO][4205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.954 [INFO][4205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" host="localhost" Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.954 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
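The allocation flow that just finished (look up affinities, try the affinity for 192.168.88.128/26, load the block, assign one address) hands csi-node-driver-ccf82 the address 192.168.88.130, the next free one after coredns took .129. A sketch of picking the next free address from a /26 with net/netip; starting at .129 rather than .128 simply mirrors this log, and the real IPAM records per-address handles and affinity metadata that the sketch omits.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block and returns the first address not yet allocated.
// Skipping the block's first address (.128) mirrors this log, where
// assignment starts at .129.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	addr := block.Addr().Next()
	for block.Contains(addr) {
		if !allocated[addr] {
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}

	for _, pod := range []string{"coredns-674b8bbfcf-lt9vr", "csi-node-driver-ccf82", "whisker-7945779b74-nrtz2"} {
		addr, ok := nextFree(block, allocated)
		if !ok {
			fmt.Println("block exhausted")
			return
		}
		allocated[addr] = true
		fmt.Printf("%s -> %s/26\n", pod, addr) // .129, .130, .131 as in the log
	}
}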
Jul 15 05:09:08.024845 containerd[1563]: 2025-07-15 05:09:07.954 [INFO][4205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" HandleID="k8s-pod-network.8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Workload="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.027439 containerd[1563]: 2025-07-15 05:09:07.967 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ccf82-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9ac2d87-9174-4659-a668-21a00a83e356", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ccf82", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali578cae8e7d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:08.027439 containerd[1563]: 2025-07-15 05:09:07.969 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.027439 containerd[1563]: 2025-07-15 05:09:07.969 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali578cae8e7d4 ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.027439 containerd[1563]: 2025-07-15 05:09:07.992 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.027439 containerd[1563]: 2025-07-15 05:09:08.003 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ccf82-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9ac2d87-9174-4659-a668-21a00a83e356", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa", Pod:"csi-node-driver-ccf82", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali578cae8e7d4", MAC:"ee:73:ca:cb:2f:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:08.027439 containerd[1563]: 2025-07-15 05:09:08.020 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" Namespace="calico-system" Pod="csi-node-driver-ccf82" WorkloadEndpoint="localhost-k8s-csi--node--driver--ccf82-eth0" Jul 15 05:09:08.080779 systemd-networkd[1481]: cali3b479b39622: Link UP Jul 15 05:09:08.081916 systemd-networkd[1481]: cali3b479b39622: Gained carrier Jul 15 05:09:08.130949 containerd[1563]: time="2025-07-15T05:09:08.130769093Z" level=info msg="connecting to shim 8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa" address="unix:///run/containerd/s/fd3ac80a8a47b4e344aea38fd6f0da7d1270c755c5335f1f53c4dcf6c04f2d59" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:08.131926 containerd[1563]: time="2025-07-15T05:09:08.131893019Z" level=info msg="connecting to shim 4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde" address="unix:///run/containerd/s/94bfeda44896430eb3e7093f66d0a1771bf28597d0b654aab3520b1bb4c78df1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.530 [INFO][4142] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.605 [INFO][4142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7945779b74--nrtz2-eth0 whisker-7945779b74- calico-system b3024758-bf49-4329-97db-e401827f9c09 1074 0 2025-07-15 05:09:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7945779b74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7945779b74-nrtz2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] 
cali3b479b39622 [] [] }} ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.605 [INFO][4142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.766 [INFO][4204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" HandleID="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Workload="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" HandleID="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Workload="localhost-k8s-whisker--7945779b74--nrtz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f5260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7945779b74-nrtz2", "timestamp":"2025-07-15 05:09:07.766783211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.954 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
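The "Added Mac, interface name, and active container ID to endpoint" entries assign per-endpoint MACs such as ba:35:f3:11:43:68 and ee:73:ca:cb:2f:9e; both have the locally administered bit set and the multicast bit clear, the shape of a randomly generated address. Whatever Calico's exact scheme is, the usual way to mint such an address looks like the sketch below; it is not a claim about Calico's implementation.

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random unicast, locally administered MAC: the
// multicast bit (0x01) of the first octet is cleared and the local-admin bit
// (0x02) is set, the same shape as the ba:35:... and ee:73:... addresses in
// the endpoint entries above.
func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] &^ 0x01) | 0x02
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}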
Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.957 [INFO][4204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:07.992 [INFO][4204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.015 [INFO][4204] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.029 [INFO][4204] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.037 [INFO][4204] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.044 [INFO][4204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.044 [INFO][4204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.048 [INFO][4204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.056 [INFO][4204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.067 [INFO][4204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.067 [INFO][4204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" host="localhost" Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.070 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:09:08.134670 containerd[1563]: 2025-07-15 05:09:08.070 [INFO][4204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" HandleID="k8s-pod-network.8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Workload="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.135332 containerd[1563]: 2025-07-15 05:09:08.076 [INFO][4142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7945779b74--nrtz2-eth0", GenerateName:"whisker-7945779b74-", Namespace:"calico-system", SelfLink:"", UID:"b3024758-bf49-4329-97db-e401827f9c09", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7945779b74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7945779b74-nrtz2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b479b39622", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:08.135332 containerd[1563]: 2025-07-15 05:09:08.076 [INFO][4142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.135332 containerd[1563]: 2025-07-15 05:09:08.077 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b479b39622 ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.135332 containerd[1563]: 2025-07-15 05:09:08.082 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.135332 containerd[1563]: 2025-07-15 05:09:08.097 [INFO][4142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7945779b74--nrtz2-eth0", GenerateName:"whisker-7945779b74-", Namespace:"calico-system", SelfLink:"", UID:"b3024758-bf49-4329-97db-e401827f9c09", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7945779b74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b", Pod:"whisker-7945779b74-nrtz2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b479b39622", MAC:"a6:30:55:02:66:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:08.135332 containerd[1563]: 2025-07-15 05:09:08.124 [INFO][4142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" Namespace="calico-system" Pod="whisker-7945779b74-nrtz2" WorkloadEndpoint="localhost-k8s-whisker--7945779b74--nrtz2-eth0" Jul 15 05:09:08.179703 containerd[1563]: time="2025-07-15T05:09:08.178645771Z" level=info msg="connecting to shim 8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b" address="unix:///run/containerd/s/b3160c02f821b5b3a996702d20a806fc6b7cd244edfb7b278fd30494a17960e5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:08.221367 systemd[1]: Started cri-containerd-4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde.scope - libcontainer container 4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde. Jul 15 05:09:08.224491 systemd[1]: Started cri-containerd-8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa.scope - libcontainer container 8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa. Jul 15 05:09:08.234658 systemd-networkd[1481]: calie5fbe02abf9: Link UP Jul 15 05:09:08.235503 systemd-networkd[1481]: calie5fbe02abf9: Gained carrier Jul 15 05:09:08.259051 systemd[1]: Started cri-containerd-8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b.scope - libcontainer container 8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b. 
Jul 15 05:09:08.267638 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:07.593 [INFO][4155] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:07.610 [INFO][4155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0 calico-apiserver-5dd97f6948- calico-apiserver 7586dcae-21ff-4670-8ba2-42aabb2605ad 945 0 2025-07-15 05:08:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd97f6948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5dd97f6948-c8knc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5fbe02abf9 [] [] }} ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:07.610 [INFO][4155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:07.767 [INFO][4201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" HandleID="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Workload="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" HandleID="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Workload="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a88a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5dd97f6948-c8knc", "timestamp":"2025-07-15 05:09:07.76738931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:07.768 [INFO][4201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.068 [INFO][4201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.068 [INFO][4201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.088 [INFO][4201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.140 [INFO][4201] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.161 [INFO][4201] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.168 [INFO][4201] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.184 [INFO][4201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.184 [INFO][4201] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.190 [INFO][4201] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.202 [INFO][4201] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.216 [INFO][4201] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.216 [INFO][4201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" host="localhost" Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.216 [INFO][4201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:09:08.278822 containerd[1563]: 2025-07-15 05:09:08.216 [INFO][4201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" HandleID="k8s-pod-network.6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Workload="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.279820 containerd[1563]: 2025-07-15 05:09:08.228 [INFO][4155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0", GenerateName:"calico-apiserver-5dd97f6948-", Namespace:"calico-apiserver", SelfLink:"", UID:"7586dcae-21ff-4670-8ba2-42aabb2605ad", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd97f6948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5dd97f6948-c8knc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5fbe02abf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:08.279820 containerd[1563]: 2025-07-15 05:09:08.229 [INFO][4155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.279820 containerd[1563]: 2025-07-15 05:09:08.229 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5fbe02abf9 ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.279820 containerd[1563]: 2025-07-15 05:09:08.245 [INFO][4155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.279820 containerd[1563]: 2025-07-15 05:09:08.246 [INFO][4155] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0", GenerateName:"calico-apiserver-5dd97f6948-", Namespace:"calico-apiserver", SelfLink:"", UID:"7586dcae-21ff-4670-8ba2-42aabb2605ad", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd97f6948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b", Pod:"calico-apiserver-5dd97f6948-c8knc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5fbe02abf9", MAC:"da:0a:ba:57:81:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:08.279820 containerd[1563]: 2025-07-15 05:09:08.267 [INFO][4155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-c8knc" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--c8knc-eth0" Jul 15 05:09:08.290493 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:08.314282 containerd[1563]: time="2025-07-15T05:09:08.314157725Z" level=info msg="connecting to shim 6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b" address="unix:///run/containerd/s/74d009c28a07939dc01bece56d07ea17390c7448c137398608dce8034f8ec065" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:08.348601 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:08.363331 containerd[1563]: time="2025-07-15T05:09:08.360949621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e\" id:\"8f8fc094424ced051bab7ec46aa176519a8dbb30f1978257a44fe58ed73256d5\" pid:4421 exit_status:1 exited_at:{seconds:1752556148 nanos:360446515}" Jul 15 05:09:08.363506 containerd[1563]: time="2025-07-15T05:09:08.363444837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ccf82,Uid:b9ac2d87-9174-4659-a668-21a00a83e356,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa\"" Jul 15 05:09:08.365251 containerd[1563]: time="2025-07-15T05:09:08.363844991Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lt9vr,Uid:80af8562-af92-4803-83b3-1cf40e3ede5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde\"" Jul 15 05:09:08.366906 kubelet[2724]: E0715 05:09:08.366857 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:08.371268 containerd[1563]: time="2025-07-15T05:09:08.371205594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 05:09:08.378118 containerd[1563]: time="2025-07-15T05:09:08.378054885Z" level=info msg="CreateContainer within sandbox \"4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:09:08.389715 systemd[1]: Started cri-containerd-6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b.scope - libcontainer container 6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b. Jul 15 05:09:08.403096 containerd[1563]: time="2025-07-15T05:09:08.403035350Z" level=info msg="Container 350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:08.416705 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:08.661586 systemd-networkd[1481]: vxlan.calico: Link UP Jul 15 05:09:08.661601 systemd-networkd[1481]: vxlan.calico: Gained carrier Jul 15 05:09:08.744534 containerd[1563]: time="2025-07-15T05:09:08.744462173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7945779b74-nrtz2,Uid:b3024758-bf49-4329-97db-e401827f9c09,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b\"" Jul 15 05:09:08.844380 containerd[1563]: time="2025-07-15T05:09:08.844223351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-c8knc,Uid:7586dcae-21ff-4670-8ba2-42aabb2605ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b\"" Jul 15 05:09:09.033525 systemd-networkd[1481]: calib439eb94313: Gained IPv6LL Jul 15 05:09:09.130280 containerd[1563]: time="2025-07-15T05:09:09.130187834Z" level=info msg="CreateContainer within sandbox \"4b2f044fa5026cb6b0ca2a6159ac1128d0f5dc1f07305e23b54f40633f426dde\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3\"" Jul 15 05:09:09.137463 containerd[1563]: time="2025-07-15T05:09:09.137407352Z" level=info msg="StartContainer for \"350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3\"" Jul 15 05:09:09.138387 containerd[1563]: time="2025-07-15T05:09:09.138297659Z" level=info msg="connecting to shim 350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3" address="unix:///run/containerd/s/94bfeda44896430eb3e7093f66d0a1771bf28597d0b654aab3520b1bb4c78df1" protocol=ttrpc version=3 Jul 15 05:09:09.166499 systemd[1]: Started cri-containerd-350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3.scope - libcontainer container 350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3. 
Jul 15 05:09:09.215628 containerd[1563]: time="2025-07-15T05:09:09.215579380Z" level=info msg="StartContainer for \"350a82564074fb492ba30c04bf798f3391bbbbcfbcb3aedb96e1cb2d68ecc7a3\" returns successfully" Jul 15 05:09:09.225465 systemd-networkd[1481]: cali3b479b39622: Gained IPv6LL Jul 15 05:09:09.543765 kubelet[2724]: E0715 05:09:09.543688 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:09.673439 systemd-networkd[1481]: cali578cae8e7d4: Gained IPv6LL Jul 15 05:09:09.801401 systemd-networkd[1481]: calie5fbe02abf9: Gained IPv6LL Jul 15 05:09:10.142972 kubelet[2724]: E0715 05:09:10.142828 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:10.160480 kubelet[2724]: I0715 05:09:10.160400 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lt9vr" podStartSLOduration=64.160373554 podStartE2EDuration="1m4.160373554s" podCreationTimestamp="2025-07-15 05:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:09:10.15972894 +0000 UTC m=+68.855785563" watchObservedRunningTime="2025-07-15 05:09:10.160373554 +0000 UTC m=+68.856430167" Jul 15 05:09:10.360665 containerd[1563]: time="2025-07-15T05:09:10.360580436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:10.361333 containerd[1563]: time="2025-07-15T05:09:10.361256550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 05:09:10.362877 containerd[1563]: time="2025-07-15T05:09:10.362815799Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:10.365630 containerd[1563]: time="2025-07-15T05:09:10.365568808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:10.366489 containerd[1563]: time="2025-07-15T05:09:10.366448256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.995175967s" Jul 15 05:09:10.366489 containerd[1563]: time="2025-07-15T05:09:10.366480997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 05:09:10.367653 containerd[1563]: time="2025-07-15T05:09:10.367608463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 05:09:10.373364 containerd[1563]: time="2025-07-15T05:09:10.373304128Z" level=info msg="CreateContainer within sandbox \"8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 05:09:10.377455 systemd-networkd[1481]: 
vxlan.calico: Gained IPv6LL Jul 15 05:09:10.428374 containerd[1563]: time="2025-07-15T05:09:10.427869098Z" level=info msg="Container 6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:10.453668 containerd[1563]: time="2025-07-15T05:09:10.453600860Z" level=info msg="CreateContainer within sandbox \"8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb\"" Jul 15 05:09:10.456486 containerd[1563]: time="2025-07-15T05:09:10.454293856Z" level=info msg="StartContainer for \"6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb\"" Jul 15 05:09:10.456486 containerd[1563]: time="2025-07-15T05:09:10.455784556Z" level=info msg="connecting to shim 6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb" address="unix:///run/containerd/s/fd3ac80a8a47b4e344aea38fd6f0da7d1270c755c5335f1f53c4dcf6c04f2d59" protocol=ttrpc version=3 Jul 15 05:09:10.539569 systemd[1]: Started cri-containerd-6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb.scope - libcontainer container 6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb. Jul 15 05:09:10.592503 containerd[1563]: time="2025-07-15T05:09:10.592447864Z" level=info msg="StartContainer for \"6ea88a343d82b3b77dd5dc3c580e76b21c4faf26de76e6a68480b5700b1b47bb\" returns successfully" Jul 15 05:09:10.935988 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:40116.service - OpenSSH per-connection server daemon (10.0.0.1:40116). Jul 15 05:09:11.010825 sshd[4752]: Accepted publickey for core from 10.0.0.1 port 40116 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:11.012829 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:11.018320 systemd-logind[1524]: New session 10 of user core. Jul 15 05:09:11.024372 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:09:11.167417 kubelet[2724]: E0715 05:09:11.167372 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:11.200732 sshd[4755]: Connection closed by 10.0.0.1 port 40116 Jul 15 05:09:11.201050 sshd-session[4752]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:11.205713 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:40116.service: Deactivated successfully. Jul 15 05:09:11.207845 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:09:11.208657 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:09:11.210008 systemd-logind[1524]: Removed session 10. 
Jul 15 05:09:12.152336 kubelet[2724]: E0715 05:09:12.152293 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:13.731003 containerd[1563]: time="2025-07-15T05:09:13.730821103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:13.740949 containerd[1563]: time="2025-07-15T05:09:13.740851529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 05:09:13.765838 containerd[1563]: time="2025-07-15T05:09:13.765734127Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:13.779279 containerd[1563]: time="2025-07-15T05:09:13.779160177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:13.780175 containerd[1563]: time="2025-07-15T05:09:13.780122675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.412474938s" Jul 15 05:09:13.780262 containerd[1563]: time="2025-07-15T05:09:13.780176526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 05:09:13.781492 containerd[1563]: time="2025-07-15T05:09:13.781458859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:09:13.813453 containerd[1563]: time="2025-07-15T05:09:13.813408335Z" level=info msg="CreateContainer within sandbox \"8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 05:09:13.997936 containerd[1563]: time="2025-07-15T05:09:13.997881262Z" level=info msg="Container c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:14.174940 containerd[1563]: time="2025-07-15T05:09:14.174872800Z" level=info msg="CreateContainer within sandbox \"8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d\"" Jul 15 05:09:14.175596 containerd[1563]: time="2025-07-15T05:09:14.175566630Z" level=info msg="StartContainer for \"c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d\"" Jul 15 05:09:14.176995 containerd[1563]: time="2025-07-15T05:09:14.176947501Z" level=info msg="connecting to shim c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d" address="unix:///run/containerd/s/b3160c02f821b5b3a996702d20a806fc6b7cd244edfb7b278fd30494a17960e5" protocol=ttrpc version=3 Jul 15 05:09:14.201540 systemd[1]: Started cri-containerd-c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d.scope - libcontainer container c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d. 
Jul 15 05:09:14.632250 containerd[1563]: time="2025-07-15T05:09:14.631931961Z" level=info msg="StartContainer for \"c776958dcc09de792eb0dc1fdf5ebdb53c11f7a4aa802752bcdc2390e95f794d\" returns successfully" Jul 15 05:09:16.217093 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:40128.service - OpenSSH per-connection server daemon (10.0.0.1:40128). Jul 15 05:09:16.279583 sshd[4816]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:16.282595 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:16.287421 systemd-logind[1524]: New session 11 of user core. Jul 15 05:09:16.303534 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:09:16.463300 sshd[4819]: Connection closed by 10.0.0.1 port 40128 Jul 15 05:09:16.463636 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:16.476360 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:40128.service: Deactivated successfully. Jul 15 05:09:16.478402 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:09:16.479353 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:09:16.481931 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Jul 15 05:09:16.482772 systemd-logind[1524]: Removed session 11. Jul 15 05:09:16.541939 kubelet[2724]: E0715 05:09:16.541876 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:16.544473 sshd[4833]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:16.546842 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:16.552516 systemd-logind[1524]: New session 12 of user core. Jul 15 05:09:16.565489 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:09:16.941989 sshd[4836]: Connection closed by 10.0.0.1 port 40130 Jul 15 05:09:16.942560 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:16.953493 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:40130.service: Deactivated successfully. Jul 15 05:09:16.955920 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:09:16.956821 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:09:16.960171 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:40138.service - OpenSSH per-connection server daemon (10.0.0.1:40138). Jul 15 05:09:16.961515 systemd-logind[1524]: Removed session 12. Jul 15 05:09:17.024412 sshd[4848]: Accepted publickey for core from 10.0.0.1 port 40138 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:17.026659 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:17.033463 systemd-logind[1524]: New session 13 of user core. Jul 15 05:09:17.040433 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:09:17.290563 sshd[4851]: Connection closed by 10.0.0.1 port 40138 Jul 15 05:09:17.290963 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:17.295295 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:40138.service: Deactivated successfully. 
Jul 15 05:09:17.297187 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:09:17.298047 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:09:17.299155 systemd-logind[1524]: Removed session 13. Jul 15 05:09:17.541842 containerd[1563]: time="2025-07-15T05:09:17.541704056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zthqg,Uid:159a15a3-515e-4594-b1a8-5a98faa45752,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:17.542638 containerd[1563]: time="2025-07-15T05:09:17.541741027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-9pczz,Uid:3b55c554-7db1-4ffb-9809-1daea65d2564,Namespace:calico-apiserver,Attempt:0,}" Jul 15 05:09:17.769631 systemd-networkd[1481]: calib4a3fc1f020: Link UP Jul 15 05:09:17.770830 systemd-networkd[1481]: calib4a3fc1f020: Gained carrier Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.665 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--zthqg-eth0 goldmane-768f4c5c69- calico-system 159a15a3-515e-4594-b1a8-5a98faa45752 944 0 2025-07-15 05:08:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-zthqg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib4a3fc1f020 [] [] }} ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.665 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.711 [INFO][4898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" HandleID="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Workload="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.711 [INFO][4898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" HandleID="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Workload="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001313b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-zthqg", "timestamp":"2025-07-15 05:09:17.711214894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.711 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.711 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.711 [INFO][4898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.723 [INFO][4898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.730 [INFO][4898] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.737 [INFO][4898] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.738 [INFO][4898] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.741 [INFO][4898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.741 [INFO][4898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.743 [INFO][4898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80 Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.748 [INFO][4898] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.755 [INFO][4898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.755 [INFO][4898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" host="localhost" Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.755 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:09:17.797281 containerd[1563]: 2025-07-15 05:09:17.755 [INFO][4898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" HandleID="k8s-pod-network.7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Workload="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.798098 containerd[1563]: 2025-07-15 05:09:17.760 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--zthqg-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"159a15a3-515e-4594-b1a8-5a98faa45752", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-zthqg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib4a3fc1f020", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:17.798098 containerd[1563]: 2025-07-15 05:09:17.760 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.798098 containerd[1563]: 2025-07-15 05:09:17.760 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4a3fc1f020 ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.798098 containerd[1563]: 2025-07-15 05:09:17.772 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.798098 containerd[1563]: 2025-07-15 05:09:17.774 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--zthqg-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"159a15a3-515e-4594-b1a8-5a98faa45752", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80", Pod:"goldmane-768f4c5c69-zthqg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib4a3fc1f020", MAC:"c6:ba:43:e1:cc:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:17.798098 containerd[1563]: 2025-07-15 05:09:17.789 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" Namespace="calico-system" Pod="goldmane-768f4c5c69-zthqg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zthqg-eth0" Jul 15 05:09:17.830519 containerd[1563]: time="2025-07-15T05:09:17.830452069Z" level=info msg="connecting to shim 7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80" address="unix:///run/containerd/s/2a7d22128523dede4b801880d89cc924fb10f34daad014b121036677e50a096e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:17.867507 systemd[1]: Started cri-containerd-7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80.scope - libcontainer container 7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80. 
Jul 15 05:09:17.885806 systemd-networkd[1481]: cali9fa5054e61c: Link UP Jul 15 05:09:17.887614 systemd-networkd[1481]: cali9fa5054e61c: Gained carrier Jul 15 05:09:17.903255 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.675 [INFO][4875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0 calico-apiserver-5dd97f6948- calico-apiserver 3b55c554-7db1-4ffb-9809-1daea65d2564 939 0 2025-07-15 05:08:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd97f6948 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5dd97f6948-9pczz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9fa5054e61c [] [] }} ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.675 [INFO][4875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.729 [INFO][4904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" HandleID="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Workload="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.729 [INFO][4904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" HandleID="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Workload="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5dd97f6948-9pczz", "timestamp":"2025-07-15 05:09:17.729129185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.729 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.755 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.756 [INFO][4904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.830 [INFO][4904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.841 [INFO][4904] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.848 [INFO][4904] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.851 [INFO][4904] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.853 [INFO][4904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.853 [INFO][4904] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.856 [INFO][4904] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.861 [INFO][4904] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.876 [INFO][4904] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.876 [INFO][4904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" host="localhost" Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.877 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:09:17.910540 containerd[1563]: 2025-07-15 05:09:17.877 [INFO][4904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" HandleID="k8s-pod-network.7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Workload="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.911373 containerd[1563]: 2025-07-15 05:09:17.881 [INFO][4875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0", GenerateName:"calico-apiserver-5dd97f6948-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b55c554-7db1-4ffb-9809-1daea65d2564", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd97f6948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5dd97f6948-9pczz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fa5054e61c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:17.911373 containerd[1563]: 2025-07-15 05:09:17.882 [INFO][4875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.911373 containerd[1563]: 2025-07-15 05:09:17.882 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fa5054e61c ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.911373 containerd[1563]: 2025-07-15 05:09:17.886 [INFO][4875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.911373 containerd[1563]: 2025-07-15 05:09:17.888 [INFO][4875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0", GenerateName:"calico-apiserver-5dd97f6948-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b55c554-7db1-4ffb-9809-1daea65d2564", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd97f6948", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd", Pod:"calico-apiserver-5dd97f6948-9pczz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fa5054e61c", MAC:"72:81:90:d7:18:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:17.911373 containerd[1563]: 2025-07-15 05:09:17.904 [INFO][4875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" Namespace="calico-apiserver" Pod="calico-apiserver-5dd97f6948-9pczz" WorkloadEndpoint="localhost-k8s-calico--apiserver--5dd97f6948--9pczz-eth0" Jul 15 05:09:17.951242 containerd[1563]: time="2025-07-15T05:09:17.951157674Z" level=info msg="connecting to shim 7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd" address="unix:///run/containerd/s/9796f891148a49f7b6bc3f031a44761f95763d1bc60dd252e331da92fb8eca0a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:17.963725 containerd[1563]: time="2025-07-15T05:09:17.952534892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zthqg,Uid:159a15a3-515e-4594-b1a8-5a98faa45752,Namespace:calico-system,Attempt:0,} returns sandbox id \"7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80\"" Jul 15 05:09:17.987522 systemd[1]: Started cri-containerd-7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd.scope - libcontainer container 7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd. 
Jul 15 05:09:18.008552 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:18.048292 containerd[1563]: time="2025-07-15T05:09:18.048112178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd97f6948-9pczz,Uid:3b55c554-7db1-4ffb-9809-1daea65d2564,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd\"" Jul 15 05:09:18.446524 containerd[1563]: time="2025-07-15T05:09:18.446371137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 05:09:18.454614 containerd[1563]: time="2025-07-15T05:09:18.454534243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:18.456699 containerd[1563]: time="2025-07-15T05:09:18.456672414Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:18.457649 containerd[1563]: time="2025-07-15T05:09:18.457604811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:18.458412 containerd[1563]: time="2025-07-15T05:09:18.458364640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.676879141s" Jul 15 05:09:18.458488 containerd[1563]: time="2025-07-15T05:09:18.458413703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:09:18.460548 containerd[1563]: time="2025-07-15T05:09:18.460487482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 05:09:18.466445 containerd[1563]: time="2025-07-15T05:09:18.466403600Z" level=info msg="CreateContainer within sandbox \"6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:09:18.476438 containerd[1563]: time="2025-07-15T05:09:18.476400200Z" level=info msg="Container 4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:18.488875 containerd[1563]: time="2025-07-15T05:09:18.488827965Z" level=info msg="CreateContainer within sandbox \"6f15d004b8ae20451714cbfa689d4aa84e8042dbc9bcf835cf80a8d3a015ca0b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d\"" Jul 15 05:09:18.489552 containerd[1563]: time="2025-07-15T05:09:18.489523763Z" level=info msg="StartContainer for \"4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d\"" Jul 15 05:09:18.490807 containerd[1563]: time="2025-07-15T05:09:18.490777809Z" level=info msg="connecting to shim 4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d" 
address="unix:///run/containerd/s/74d009c28a07939dc01bece56d07ea17390c7448c137398608dce8034f8ec065" protocol=ttrpc version=3 Jul 15 05:09:18.515433 systemd[1]: Started cri-containerd-4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d.scope - libcontainer container 4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d. Jul 15 05:09:18.540748 kubelet[2724]: E0715 05:09:18.540701 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:18.542109 containerd[1563]: time="2025-07-15T05:09:18.542010924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wl9ww,Uid:c2b8bd3f-5cc6-4606-9465-ac2a98d9d525,Namespace:kube-system,Attempt:0,}" Jul 15 05:09:18.572985 containerd[1563]: time="2025-07-15T05:09:18.572883714Z" level=info msg="StartContainer for \"4a9822aec04ae164a74641535332b6537990b079682f4187ff3f80ce1875523d\" returns successfully" Jul 15 05:09:18.677563 systemd-networkd[1481]: cali13bd3823ff0: Link UP Jul 15 05:09:18.678196 systemd-networkd[1481]: cali13bd3823ff0: Gained carrier Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.590 [INFO][5053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0 coredns-674b8bbfcf- kube-system c2b8bd3f-5cc6-4606-9465-ac2a98d9d525 943 0 2025-07-15 05:08:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wl9ww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali13bd3823ff0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.590 [INFO][5053] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.628 [INFO][5079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" HandleID="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Workload="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.629 [INFO][5079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" HandleID="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Workload="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wl9ww", "timestamp":"2025-07-15 05:09:18.628833906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.629 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.629 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.629 [INFO][5079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.638 [INFO][5079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.644 [INFO][5079] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.649 [INFO][5079] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.650 [INFO][5079] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.652 [INFO][5079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.652 [INFO][5079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.655 [INFO][5079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3 Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.662 [INFO][5079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.669 [INFO][5079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.669 [INFO][5079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" host="localhost" Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.669 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 05:09:18.696721 containerd[1563]: 2025-07-15 05:09:18.669 [INFO][5079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" HandleID="k8s-pod-network.2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Workload="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.697552 containerd[1563]: 2025-07-15 05:09:18.674 [INFO][5053] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c2b8bd3f-5cc6-4606-9465-ac2a98d9d525", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wl9ww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13bd3823ff0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:18.697552 containerd[1563]: 2025-07-15 05:09:18.674 [INFO][5053] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.697552 containerd[1563]: 2025-07-15 05:09:18.674 [INFO][5053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13bd3823ff0 ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.697552 containerd[1563]: 2025-07-15 05:09:18.679 [INFO][5053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.697552 
containerd[1563]: 2025-07-15 05:09:18.680 [INFO][5053] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c2b8bd3f-5cc6-4606-9465-ac2a98d9d525", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3", Pod:"coredns-674b8bbfcf-wl9ww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali13bd3823ff0", MAC:"56:c6:c9:42:a0:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:18.697552 containerd[1563]: 2025-07-15 05:09:18.691 [INFO][5053] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" Namespace="kube-system" Pod="coredns-674b8bbfcf-wl9ww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wl9ww-eth0" Jul 15 05:09:18.737056 containerd[1563]: time="2025-07-15T05:09:18.736997159Z" level=info msg="connecting to shim 2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3" address="unix:///run/containerd/s/e36bad0d360a3dc0163ea984bdc3397c4f8d858a553e3ad8e889ef361b67837c" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:18.768650 systemd[1]: Started cri-containerd-2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3.scope - libcontainer container 2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3. 
Jul 15 05:09:18.786821 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:18.817007 containerd[1563]: time="2025-07-15T05:09:18.816947310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wl9ww,Uid:c2b8bd3f-5cc6-4606-9465-ac2a98d9d525,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3\"" Jul 15 05:09:18.818294 kubelet[2724]: E0715 05:09:18.818012 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:18.827469 containerd[1563]: time="2025-07-15T05:09:18.827411225Z" level=info msg="CreateContainer within sandbox \"2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:09:18.841171 containerd[1563]: time="2025-07-15T05:09:18.841109117Z" level=info msg="Container a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:19.017446 systemd-networkd[1481]: calib4a3fc1f020: Gained IPv6LL Jul 15 05:09:19.070605 containerd[1563]: time="2025-07-15T05:09:19.070546745Z" level=info msg="CreateContainer within sandbox \"2cae1040d0e909bb7a621a73181b9b165c16f9e5851b8f2f013977d385e88bb3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f\"" Jul 15 05:09:19.071199 containerd[1563]: time="2025-07-15T05:09:19.071154327Z" level=info msg="StartContainer for \"a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f\"" Jul 15 05:09:19.072022 containerd[1563]: time="2025-07-15T05:09:19.071991645Z" level=info msg="connecting to shim a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f" address="unix:///run/containerd/s/e36bad0d360a3dc0163ea984bdc3397c4f8d858a553e3ad8e889ef361b67837c" protocol=ttrpc version=3 Jul 15 05:09:19.105457 systemd[1]: Started cri-containerd-a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f.scope - libcontainer container a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f. 
Jul 15 05:09:19.285163 containerd[1563]: time="2025-07-15T05:09:19.285017880Z" level=info msg="StartContainer for \"a55eef315cb188615e21476d24a74e73cd9535223eda24e9129c4f320d36e48f\" returns successfully" Jul 15 05:09:19.288691 kubelet[2724]: E0715 05:09:19.288650 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:19.348277 kubelet[2724]: I0715 05:09:19.348162 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dd97f6948-c8knc" podStartSLOduration=49.734479059 podStartE2EDuration="59.348139885s" podCreationTimestamp="2025-07-15 05:08:20 +0000 UTC" firstStartedPulling="2025-07-15 05:09:08.845602256 +0000 UTC m=+67.541658869" lastFinishedPulling="2025-07-15 05:09:18.459263062 +0000 UTC m=+77.155319695" observedRunningTime="2025-07-15 05:09:19.348003487 +0000 UTC m=+78.044060110" watchObservedRunningTime="2025-07-15 05:09:19.348139885 +0000 UTC m=+78.044196498" Jul 15 05:09:19.465577 systemd-networkd[1481]: cali9fa5054e61c: Gained IPv6LL Jul 15 05:09:19.541701 containerd[1563]: time="2025-07-15T05:09:19.541556980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649b8d6d45-t5xtk,Uid:f4760452-f2f4-4f60-96c0-98723b588bdd,Namespace:calico-system,Attempt:0,}" Jul 15 05:09:19.785389 systemd-networkd[1481]: cali13bd3823ff0: Gained IPv6LL Jul 15 05:09:19.942789 systemd-networkd[1481]: cali9aabf8bd49b: Link UP Jul 15 05:09:19.943218 systemd-networkd[1481]: cali9aabf8bd49b: Gained carrier Jul 15 05:09:19.964164 kubelet[2724]: I0715 05:09:19.963462 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wl9ww" podStartSLOduration=73.963435661 podStartE2EDuration="1m13.963435661s" podCreationTimestamp="2025-07-15 05:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:09:19.648620933 +0000 UTC m=+78.344677546" watchObservedRunningTime="2025-07-15 05:09:19.963435661 +0000 UTC m=+78.659492284" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.844 [INFO][5194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0 calico-kube-controllers-649b8d6d45- calico-system f4760452-f2f4-4f60-96c0-98723b588bdd 941 0 2025-07-15 05:08:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:649b8d6d45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-649b8d6d45-t5xtk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9aabf8bd49b [] [] }} ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.844 [INFO][5194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.883 [INFO][5206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" HandleID="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Workload="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.883 [INFO][5206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" HandleID="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Workload="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-649b8d6d45-t5xtk", "timestamp":"2025-07-15 05:09:19.883514491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.883 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.883 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.884 [INFO][5206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.893 [INFO][5206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.904 [INFO][5206] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.911 [INFO][5206] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.913 [INFO][5206] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.916 [INFO][5206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.917 [INFO][5206] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.918 [INFO][5206] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.926 [INFO][5206] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.935 [INFO][5206] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.935 [INFO][5206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" host="localhost" Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.935 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 05:09:19.968636 containerd[1563]: 2025-07-15 05:09:19.935 [INFO][5206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" HandleID="k8s-pod-network.09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Workload="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:19.969959 containerd[1563]: 2025-07-15 05:09:19.939 [INFO][5194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0", GenerateName:"calico-kube-controllers-649b8d6d45-", Namespace:"calico-system", SelfLink:"", UID:"f4760452-f2f4-4f60-96c0-98723b588bdd", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649b8d6d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-649b8d6d45-t5xtk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9aabf8bd49b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:19.969959 containerd[1563]: 2025-07-15 05:09:19.940 [INFO][5194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:19.969959 containerd[1563]: 2025-07-15 05:09:19.940 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aabf8bd49b ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:19.969959 containerd[1563]: 2025-07-15 05:09:19.942 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:19.969959 containerd[1563]: 2025-07-15 05:09:19.943 [INFO][5194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0", GenerateName:"calico-kube-controllers-649b8d6d45-", Namespace:"calico-system", SelfLink:"", UID:"f4760452-f2f4-4f60-96c0-98723b588bdd", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 5, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649b8d6d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be", Pod:"calico-kube-controllers-649b8d6d45-t5xtk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9aabf8bd49b", MAC:"76:e3:e6:d9:af:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 05:09:19.969959 containerd[1563]: 2025-07-15 05:09:19.964 [INFO][5194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" Namespace="calico-system" Pod="calico-kube-controllers-649b8d6d45-t5xtk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--649b8d6d45--t5xtk-eth0" Jul 15 05:09:20.290312 kubelet[2724]: I0715 05:09:20.290223 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:09:20.290841 kubelet[2724]: E0715 05:09:20.290784 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:20.558839 containerd[1563]: time="2025-07-15T05:09:20.558692553Z" level=info msg="connecting to shim 09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be" address="unix:///run/containerd/s/09ae382b6ad785141b0e487d8791d0ab1621e16136899c9c241cfc0542fab38e" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:09:20.589460 
systemd[1]: Started cri-containerd-09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be.scope - libcontainer container 09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be. Jul 15 05:09:20.604450 systemd-resolved[1401]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:09:20.839649 containerd[1563]: time="2025-07-15T05:09:20.839460356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649b8d6d45-t5xtk,Uid:f4760452-f2f4-4f60-96c0-98723b588bdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be\"" Jul 15 05:09:21.294930 kubelet[2724]: E0715 05:09:21.294481 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:21.444380 containerd[1563]: time="2025-07-15T05:09:21.444287032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:21.446097 containerd[1563]: time="2025-07-15T05:09:21.446036231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 05:09:21.449491 systemd-networkd[1481]: cali9aabf8bd49b: Gained IPv6LL Jul 15 05:09:21.547017 containerd[1563]: time="2025-07-15T05:09:21.546840388Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:21.577331 containerd[1563]: time="2025-07-15T05:09:21.577256506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:21.577772 containerd[1563]: time="2025-07-15T05:09:21.577709957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.11717294s" Jul 15 05:09:21.577772 containerd[1563]: time="2025-07-15T05:09:21.577750563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 05:09:21.581073 containerd[1563]: time="2025-07-15T05:09:21.580036170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 05:09:21.586520 containerd[1563]: time="2025-07-15T05:09:21.586461503Z" level=info msg="CreateContainer within sandbox \"8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 05:09:21.600013 containerd[1563]: time="2025-07-15T05:09:21.599957554Z" level=info msg="Container 9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:21.616813 containerd[1563]: time="2025-07-15T05:09:21.616742375Z" level=info msg="CreateContainer within sandbox 
\"8ab272ddb7d8c8429b1a912a72ff8b5ab2ec92e4104ed4c8395a93cbea719cfa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a\"" Jul 15 05:09:21.617393 containerd[1563]: time="2025-07-15T05:09:21.617371047Z" level=info msg="StartContainer for \"9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a\"" Jul 15 05:09:21.618901 containerd[1563]: time="2025-07-15T05:09:21.618863940Z" level=info msg="connecting to shim 9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a" address="unix:///run/containerd/s/fd3ac80a8a47b4e344aea38fd6f0da7d1270c755c5335f1f53c4dcf6c04f2d59" protocol=ttrpc version=3 Jul 15 05:09:21.650508 systemd[1]: Started cri-containerd-9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a.scope - libcontainer container 9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a. Jul 15 05:09:21.770454 containerd[1563]: time="2025-07-15T05:09:21.770373727Z" level=info msg="StartContainer for \"9fbb0e7bdc898f8f29adddeebe531bf2e3a826614a2a92418de9e4d3dc686c1a\" returns successfully" Jul 15 05:09:22.311428 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:53792.service - OpenSSH per-connection server daemon (10.0.0.1:53792). Jul 15 05:09:22.342450 kubelet[2724]: I0715 05:09:22.341809 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ccf82" podStartSLOduration=40.128093089 podStartE2EDuration="53.341786137s" podCreationTimestamp="2025-07-15 05:08:29 +0000 UTC" firstStartedPulling="2025-07-15 05:09:08.365707306 +0000 UTC m=+67.061763919" lastFinishedPulling="2025-07-15 05:09:21.579400333 +0000 UTC m=+80.275456967" observedRunningTime="2025-07-15 05:09:22.341574716 +0000 UTC m=+81.037631359" watchObservedRunningTime="2025-07-15 05:09:22.341786137 +0000 UTC m=+81.037842760" Jul 15 05:09:22.380425 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 53792 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:22.382246 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:22.388548 systemd-logind[1524]: New session 14 of user core. Jul 15 05:09:22.393472 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:09:22.633377 kubelet[2724]: I0715 05:09:22.633138 2724 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 05:09:22.647564 kubelet[2724]: I0715 05:09:22.647460 2724 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 05:09:22.679791 sshd[5316]: Connection closed by 10.0.0.1 port 53792 Jul 15 05:09:22.680726 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:22.686338 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:53792.service: Deactivated successfully. Jul 15 05:09:22.688971 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:09:22.691998 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:09:22.694125 systemd-logind[1524]: Removed session 14. Jul 15 05:09:24.240032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028223159.mount: Deactivated successfully. 
Jul 15 05:09:24.279432 containerd[1563]: time="2025-07-15T05:09:24.279335032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:24.281599 containerd[1563]: time="2025-07-15T05:09:24.281538569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 05:09:24.283960 containerd[1563]: time="2025-07-15T05:09:24.283452506Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:24.287081 containerd[1563]: time="2025-07-15T05:09:24.287017882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:24.288307 containerd[1563]: time="2025-07-15T05:09:24.288173809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.708098382s" Jul 15 05:09:24.288307 containerd[1563]: time="2025-07-15T05:09:24.288263920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 05:09:24.289688 containerd[1563]: time="2025-07-15T05:09:24.289636798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 05:09:24.296855 containerd[1563]: time="2025-07-15T05:09:24.296794000Z" level=info msg="CreateContainer within sandbox \"8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 05:09:24.310068 containerd[1563]: time="2025-07-15T05:09:24.310008883Z" level=info msg="Container c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:24.330306 containerd[1563]: time="2025-07-15T05:09:24.330218538Z" level=info msg="CreateContainer within sandbox \"8a676dc169ddfcfd279365aa7ccf68c79b7f2dbfe504e4480a1e86e2cf87fa7b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c\"" Jul 15 05:09:24.331171 containerd[1563]: time="2025-07-15T05:09:24.330847934Z" level=info msg="StartContainer for \"c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c\"" Jul 15 05:09:24.332698 containerd[1563]: time="2025-07-15T05:09:24.332644569Z" level=info msg="connecting to shim c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c" address="unix:///run/containerd/s/b3160c02f821b5b3a996702d20a806fc6b7cd244edfb7b278fd30494a17960e5" protocol=ttrpc version=3 Jul 15 05:09:24.377585 systemd[1]: Started cri-containerd-c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c.scope - libcontainer container c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c. 
Jul 15 05:09:24.451034 containerd[1563]: time="2025-07-15T05:09:24.450967727Z" level=info msg="StartContainer for \"c2aabba38b195833dac5e90d0e977f32c3bf4d709a4c8462a33ef3895998b95c\" returns successfully" Jul 15 05:09:25.541104 kubelet[2724]: E0715 05:09:25.541052 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:26.279051 kubelet[2724]: I0715 05:09:26.278975 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7945779b74-nrtz2" podStartSLOduration=3.735276768 podStartE2EDuration="19.278957569s" podCreationTimestamp="2025-07-15 05:09:07 +0000 UTC" firstStartedPulling="2025-07-15 05:09:08.745804098 +0000 UTC m=+67.441860711" lastFinishedPulling="2025-07-15 05:09:24.289484889 +0000 UTC m=+82.985541512" observedRunningTime="2025-07-15 05:09:26.277292173 +0000 UTC m=+84.973348806" watchObservedRunningTime="2025-07-15 05:09:26.278957569 +0000 UTC m=+84.975014183" Jul 15 05:09:27.696430 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:53808.service - OpenSSH per-connection server daemon (10.0.0.1:53808). Jul 15 05:09:27.766999 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:27.768743 sshd-session[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:27.773646 systemd-logind[1524]: New session 15 of user core. Jul 15 05:09:27.779370 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:09:27.902410 sshd[5383]: Connection closed by 10.0.0.1 port 53808 Jul 15 05:09:27.902737 sshd-session[5380]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:27.906749 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:53808.service: Deactivated successfully. Jul 15 05:09:27.908774 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:09:27.909696 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:09:27.911393 systemd-logind[1524]: Removed session 15. Jul 15 05:09:30.139607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863948448.mount: Deactivated successfully. Jul 15 05:09:32.541087 kubelet[2724]: E0715 05:09:32.541015 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 05:09:32.920470 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:51594.service - OpenSSH per-connection server daemon (10.0.0.1:51594). Jul 15 05:09:33.011173 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 51594 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:33.021199 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:33.026282 systemd-logind[1524]: New session 16 of user core. Jul 15 05:09:33.035389 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 05:09:33.270912 sshd[5416]: Connection closed by 10.0.0.1 port 51594 Jul 15 05:09:33.271523 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Jul 15 05:09:33.277521 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:51594.service: Deactivated successfully. Jul 15 05:09:33.280667 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 05:09:33.283527 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit. 
Jul 15 05:09:33.285424 systemd-logind[1524]: Removed session 16. Jul 15 05:09:34.188110 containerd[1563]: time="2025-07-15T05:09:34.188018498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:34.250605 containerd[1563]: time="2025-07-15T05:09:34.250522298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 05:09:34.305902 containerd[1563]: time="2025-07-15T05:09:34.305750738Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:34.342584 containerd[1563]: time="2025-07-15T05:09:34.342504550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:34.343633 containerd[1563]: time="2025-07-15T05:09:34.343598547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 10.053926922s" Jul 15 05:09:34.343726 containerd[1563]: time="2025-07-15T05:09:34.343637372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 05:09:34.344835 containerd[1563]: time="2025-07-15T05:09:34.344761356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 05:09:34.402761 containerd[1563]: time="2025-07-15T05:09:34.402676982Z" level=info msg="CreateContainer within sandbox \"7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 05:09:34.444326 containerd[1563]: time="2025-07-15T05:09:34.444141692Z" level=info msg="Container e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:34.461812 containerd[1563]: time="2025-07-15T05:09:34.461730500Z" level=info msg="CreateContainer within sandbox \"7aea09cc6f31ee7f41c49d537498085179f36da3c749b10e5af2753c9c81da80\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\"" Jul 15 05:09:34.463095 containerd[1563]: time="2025-07-15T05:09:34.463052812Z" level=info msg="StartContainer for \"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\"" Jul 15 05:09:34.464546 containerd[1563]: time="2025-07-15T05:09:34.464488462Z" level=info msg="connecting to shim e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236" address="unix:///run/containerd/s/2a7d22128523dede4b801880d89cc924fb10f34daad014b121036677e50a096e" protocol=ttrpc version=3 Jul 15 05:09:34.521610 systemd[1]: Started cri-containerd-e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236.scope - libcontainer container e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236. 
Jul 15 05:09:34.601939 containerd[1563]: time="2025-07-15T05:09:34.601872816Z" level=info msg="StartContainer for \"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\" returns successfully" Jul 15 05:09:34.928027 containerd[1563]: time="2025-07-15T05:09:34.927941237Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:09:34.929475 containerd[1563]: time="2025-07-15T05:09:34.929399429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 05:09:34.931942 containerd[1563]: time="2025-07-15T05:09:34.931861065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 587.059202ms" Jul 15 05:09:34.931942 containerd[1563]: time="2025-07-15T05:09:34.931931128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 05:09:34.933457 containerd[1563]: time="2025-07-15T05:09:34.933426852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 05:09:34.940266 containerd[1563]: time="2025-07-15T05:09:34.940173324Z" level=info msg="CreateContainer within sandbox \"7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 05:09:34.952546 containerd[1563]: time="2025-07-15T05:09:34.951371419Z" level=info msg="Container 3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:09:35.000147 containerd[1563]: time="2025-07-15T05:09:35.000092664Z" level=info msg="CreateContainer within sandbox \"7f450e64c88bb9dbe77141093f7ae6d70cfad6ccc8c1bce27e2f386034ef30fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5\"" Jul 15 05:09:35.000819 containerd[1563]: time="2025-07-15T05:09:35.000772762Z" level=info msg="StartContainer for \"3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5\"" Jul 15 05:09:35.002379 containerd[1563]: time="2025-07-15T05:09:35.002331305Z" level=info msg="connecting to shim 3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5" address="unix:///run/containerd/s/9796f891148a49f7b6bc3f031a44761f95763d1bc60dd252e331da92fb8eca0a" protocol=ttrpc version=3 Jul 15 05:09:35.029550 systemd[1]: Started cri-containerd-3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5.scope - libcontainer container 3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5. 
Jul 15 05:09:35.266421 containerd[1563]: time="2025-07-15T05:09:35.266366201Z" level=info msg="StartContainer for \"3d5cd941a24a27500a9067ce0582343762b0773ea629e044a400910ec755b5f5\" returns successfully" Jul 15 05:09:35.411946 kubelet[2724]: I0715 05:09:35.411858 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-zthqg" podStartSLOduration=51.03358767 podStartE2EDuration="1m7.411836663s" podCreationTimestamp="2025-07-15 05:08:28 +0000 UTC" firstStartedPulling="2025-07-15 05:09:17.966279628 +0000 UTC m=+76.662336241" lastFinishedPulling="2025-07-15 05:09:34.344528621 +0000 UTC m=+93.040585234" observedRunningTime="2025-07-15 05:09:35.372415908 +0000 UTC m=+94.068472521" watchObservedRunningTime="2025-07-15 05:09:35.411836663 +0000 UTC m=+94.107893277" Jul 15 05:09:35.531586 containerd[1563]: time="2025-07-15T05:09:35.531169131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\" id:\"aea2ace6f734dafa05bfab1b6cbc8b8eeacd74469fe5e3707f5e32f3c5fc2199\" pid:5515 exit_status:1 exited_at:{seconds:1752556175 nanos:530512037}" Jul 15 05:09:36.134529 kubelet[2724]: I0715 05:09:36.134371 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 05:09:36.429573 containerd[1563]: time="2025-07-15T05:09:36.429435046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\" id:\"b8f4b2bc5f3595c7b654f0586ea29b45ab98b318939223880febbf5d58465c0a\" pid:5543 exit_status:1 exited_at:{seconds:1752556176 nanos:429052084}" Jul 15 05:09:36.921080 kubelet[2724]: I0715 05:09:36.920998 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dd97f6948-9pczz" podStartSLOduration=60.037489192 podStartE2EDuration="1m16.920973523s" podCreationTimestamp="2025-07-15 05:08:20 +0000 UTC" firstStartedPulling="2025-07-15 05:09:18.04944941 +0000 UTC m=+76.745506034" lastFinishedPulling="2025-07-15 05:09:34.932933752 +0000 UTC m=+93.628990365" observedRunningTime="2025-07-15 05:09:35.413292401 +0000 UTC m=+94.109349034" watchObservedRunningTime="2025-07-15 05:09:36.920973523 +0000 UTC m=+95.617030136" Jul 15 05:09:37.427731 containerd[1563]: time="2025-07-15T05:09:37.427678147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\" id:\"3cc1cc3834e0a0cd6ec7e649433df65344847b1b6ae3e10d5f55de41a5941ab1\" pid:5568 exit_status:1 exited_at:{seconds:1752556177 nanos:427323821}" Jul 15 05:09:38.247958 containerd[1563]: time="2025-07-15T05:09:38.247812081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d19ec1ae1c172bc58126d3be76ab65b9a9610ab761ce5b463456b5b3662a6f1e\" id:\"4e7d8ae7f3676287b434cc26873f560a95610a65533c017bd3c371b3dd988bad\" pid:5592 exited_at:{seconds:1752556178 nanos:247389594}" Jul 15 05:09:38.290581 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902). Jul 15 05:09:38.401212 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo Jul 15 05:09:38.416523 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:09:38.424086 systemd-logind[1524]: New session 17 of user core. 
Jul 15 05:09:38.436588 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 05:09:38.607334 sshd[5609]: Connection closed by 10.0.0.1 port 33902
Jul 15 05:09:38.605760 sshd-session[5605]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:38.612551 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:33902.service: Deactivated successfully.
Jul 15 05:09:38.615147 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 05:09:38.616952 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit.
Jul 15 05:09:38.618538 systemd-logind[1524]: Removed session 17.
Jul 15 05:09:41.883987 containerd[1563]: time="2025-07-15T05:09:41.883902165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:09:41.888051 containerd[1563]: time="2025-07-15T05:09:41.887139845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 15 05:09:41.889423 containerd[1563]: time="2025-07-15T05:09:41.889375751Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:09:41.898461 containerd[1563]: time="2025-07-15T05:09:41.898348045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 05:09:41.899589 containerd[1563]: time="2025-07-15T05:09:41.899441114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.96596625s"
Jul 15 05:09:41.899589 containerd[1563]: time="2025-07-15T05:09:41.899508353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 15 05:09:41.921022 containerd[1563]: time="2025-07-15T05:09:41.920928966Z" level=info msg="CreateContainer within sandbox \"09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 15 05:09:41.963802 containerd[1563]: time="2025-07-15T05:09:41.962819076Z" level=info msg="Container 25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198: CDI devices from CRI Config.CDIDevices: []"
Jul 15 05:09:41.980745 containerd[1563]: time="2025-07-15T05:09:41.980678982Z" level=info msg="CreateContainer within sandbox \"09eae8b994871cc75f66d321c6bfdbd507341f3286b6219b898aa318dd1129be\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198\""
Jul 15 05:09:41.981555 containerd[1563]: time="2025-07-15T05:09:41.981515653Z" level=info msg="StartContainer for \"25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198\""
Jul 15 05:09:41.983125 containerd[1563]: time="2025-07-15T05:09:41.983081676Z" level=info msg="connecting to shim 25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198" address="unix:///run/containerd/s/09ae382b6ad785141b0e487d8791d0ab1621e16136899c9c241cfc0542fab38e" protocol=ttrpc version=3
Jul 15 05:09:42.031586 systemd[1]: Started cri-containerd-25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198.scope - libcontainer container 25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198.
Jul 15 05:09:42.223383 containerd[1563]: time="2025-07-15T05:09:42.221860848Z" level=info msg="StartContainer for \"25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198\" returns successfully"
Jul 15 05:09:42.421926 containerd[1563]: time="2025-07-15T05:09:42.421754534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198\" id:\"0111595d7c773f097cd10781e449a5eecbbf606dbe85274e7b4846ffbe6655ba\" pid:5693 exit_status:1 exited_at:{seconds:1752556182 nanos:420934125}"
Jul 15 05:09:43.419204 containerd[1563]: time="2025-07-15T05:09:43.419134365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25ff0323d61a42d458c05ce4c857c42fcc237f03331dc93510144df883fe8198\" id:\"ac1d53d2173b8d907d5763c77764ca48918ad8296eb6e6642f9a62d53977850c\" pid:5715 exited_at:{seconds:1752556183 nanos:418853127}"
Jul 15 05:09:43.435745 kubelet[2724]: I0715 05:09:43.435652 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-649b8d6d45-t5xtk" podStartSLOduration=51.375904057 podStartE2EDuration="1m12.435631292s" podCreationTimestamp="2025-07-15 05:08:31 +0000 UTC" firstStartedPulling="2025-07-15 05:09:20.840795297 +0000 UTC m=+79.536851910" lastFinishedPulling="2025-07-15 05:09:41.900522532 +0000 UTC m=+100.596579145" observedRunningTime="2025-07-15 05:09:42.38905537 +0000 UTC m=+101.085111993" watchObservedRunningTime="2025-07-15 05:09:43.435631292 +0000 UTC m=+102.131687915"
Jul 15 05:09:43.623727 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:33918.service - OpenSSH per-connection server daemon (10.0.0.1:33918).
Jul 15 05:09:43.702938 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 33918 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:43.705295 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:43.711424 systemd-logind[1524]: New session 18 of user core.
Jul 15 05:09:43.719427 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 05:09:43.934676 sshd[5730]: Connection closed by 10.0.0.1 port 33918
Jul 15 05:09:43.935120 sshd-session[5726]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:43.948260 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:33918.service: Deactivated successfully.
Jul 15 05:09:43.950785 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 05:09:43.951745 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit.
Jul 15 05:09:43.955124 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:33928.service - OpenSSH per-connection server daemon (10.0.0.1:33928).
Jul 15 05:09:43.956111 systemd-logind[1524]: Removed session 18.
Jul 15 05:09:44.017190 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 33928 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:44.019585 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:44.025056 systemd-logind[1524]: New session 19 of user core.
Jul 15 05:09:44.033710 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 05:09:44.798924 sshd[5747]: Connection closed by 10.0.0.1 port 33928
Jul 15 05:09:44.799416 sshd-session[5744]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:44.812543 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:33928.service: Deactivated successfully.
Jul 15 05:09:44.814863 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 05:09:44.815752 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit.
Jul 15 05:09:44.818819 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942).
Jul 15 05:09:44.820211 systemd-logind[1524]: Removed session 19.
Jul 15 05:09:44.898852 sshd[5758]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:44.901132 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:44.906342 systemd-logind[1524]: New session 20 of user core.
Jul 15 05:09:44.913418 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 05:09:45.832274 sshd[5761]: Connection closed by 10.0.0.1 port 33942
Jul 15 05:09:45.832740 sshd-session[5758]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:45.846543 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:33942.service: Deactivated successfully.
Jul 15 05:09:45.850217 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 05:09:45.852754 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit.
Jul 15 05:09:45.858355 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:33954.service - OpenSSH per-connection server daemon (10.0.0.1:33954).
Jul 15 05:09:45.859342 systemd-logind[1524]: Removed session 20.
Jul 15 05:09:45.931125 sshd[5786]: Accepted publickey for core from 10.0.0.1 port 33954 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:45.933482 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:45.940305 systemd-logind[1524]: New session 21 of user core.
Jul 15 05:09:45.950584 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 05:09:46.370257 sshd[5789]: Connection closed by 10.0.0.1 port 33954
Jul 15 05:09:46.372331 sshd-session[5786]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:46.387317 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:33954.service: Deactivated successfully.
Jul 15 05:09:46.391634 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 05:09:46.396913 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit.
Jul 15 05:09:46.403390 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:33958.service - OpenSSH per-connection server daemon (10.0.0.1:33958).
Jul 15 05:09:46.406037 systemd-logind[1524]: Removed session 21.
Jul 15 05:09:46.475304 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 33958 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:46.477187 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:46.482765 systemd-logind[1524]: New session 22 of user core.
Jul 15 05:09:46.490427 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 05:09:46.610764 sshd[5804]: Connection closed by 10.0.0.1 port 33958
Jul 15 05:09:46.611263 sshd-session[5801]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:46.616585 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:33958.service: Deactivated successfully.
Jul 15 05:09:46.619839 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 05:09:46.621629 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit.
Jul 15 05:09:46.623829 systemd-logind[1524]: Removed session 22.
Jul 15 05:09:47.081621 containerd[1563]: time="2025-07-15T05:09:47.081460217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3713953dd023ae21000aa06aeaae9cce906691e724bd888b9eb3f7095aa0236\" id:\"05827a6950771d77ed9d4f2f8e634928dd0a1ff28e8592f4bcb8167d87e07e8b\" pid:5828 exited_at:{seconds:1752556187 nanos:80869867}"
Jul 15 05:09:51.629292 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:43396.service - OpenSSH per-connection server daemon (10.0.0.1:43396).
Jul 15 05:09:51.684800 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 43396 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:51.686751 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:51.691708 systemd-logind[1524]: New session 23 of user core.
Jul 15 05:09:51.698370 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 05:09:51.820938 sshd[5851]: Connection closed by 10.0.0.1 port 43396
Jul 15 05:09:51.821383 sshd-session[5848]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:51.826319 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:43396.service: Deactivated successfully.
Jul 15 05:09:51.828416 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 05:09:51.829328 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit.
Jul 15 05:09:51.830763 systemd-logind[1524]: Removed session 23.
Jul 15 05:09:53.544424 kubelet[2724]: E0715 05:09:53.544377 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 05:09:56.851372 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:43412.service - OpenSSH per-connection server daemon (10.0.0.1:43412).
Jul 15 05:09:56.938803 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 43412 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:09:56.940811 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:09:56.947365 systemd-logind[1524]: New session 24 of user core.
Jul 15 05:09:56.957539 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 05:09:57.094655 sshd[5869]: Connection closed by 10.0.0.1 port 43412
Jul 15 05:09:57.095049 sshd-session[5866]: pam_unix(sshd:session): session closed for user core
Jul 15 05:09:57.102841 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:43412.service: Deactivated successfully.
Jul 15 05:09:57.105957 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 05:09:57.107014 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit.
Jul 15 05:09:57.109696 systemd-logind[1524]: Removed session 24.
Jul 15 05:10:02.116451 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:40950.service - OpenSSH per-connection server daemon (10.0.0.1:40950).
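The kubelet "Nameserver limits exceeded" error above (from dns.go:153) is the standard warning emitted when the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three are applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of a local check for that condition; the /etc/resolv.conf path is the conventional location and may differ on nodes using systemd-resolved or a custom kubelet --resolv-conf:

```python
# Count "nameserver" entries in resolv.conf; only the first three are honoured,
# which is what the kubelet warning above is reporting.
RESOLV_CONF = "/etc/resolv.conf"   # conventional path; adjust if needed
MAX_NAMESERVERS = 3                # resolver / kubelet nameserver limit

def nameservers(path: str = RESOLV_CONF) -> list[str]:
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "nameserver" and len(parts) > 1:
                servers.append(parts[1])
    return servers

ns = nameservers()
if len(ns) > MAX_NAMESERVERS:
    print(f"{len(ns)} nameservers configured; only the first "
          f"{MAX_NAMESERVERS} are applied: {ns[:MAX_NAMESERVERS]}")
```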
Jul 15 05:10:02.221675 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 40950 ssh2: RSA SHA256:xQteBGu1K6SjT/ucc5Duk9MfMFesvWUUvdc6KRmollo
Jul 15 05:10:02.223684 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 05:10:02.232110 systemd-logind[1524]: New session 25 of user core.
Jul 15 05:10:02.239564 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 05:10:02.963132 sshd[5891]: Connection closed by 10.0.0.1 port 40950
Jul 15 05:10:02.964650 sshd-session[5888]: pam_unix(sshd:session): session closed for user core
Jul 15 05:10:02.970000 systemd-logind[1524]: Session 25 logged out. Waiting for processes to exit.
Jul 15 05:10:02.971136 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:40950.service: Deactivated successfully.
Jul 15 05:10:02.973856 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 05:10:02.975999 systemd-logind[1524]: Removed session 25.
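The remainder of the section is routine SSH session churn: sessions 17 through 25 are each opened for user core from 10.0.0.1 and closed shortly afterwards, with the matching scope units deactivated cleanly. A short sketch that pairs the systemd-logind "New session"/"Removed session" lines from a saved copy of the journal (the `journal.txt` path is again hypothetical):

```python
import re

NEW = re.compile(r"systemd-logind\[\d+\]: New session (\d+) of user (\w+)")
REMOVED = re.compile(r"systemd-logind\[\d+\]: Removed session (\d+)")

opened, closed = {}, set()
with open("journal.txt") as f:          # hypothetical path to the saved journal
    for line in f:
        if m := NEW.search(line):
            opened[m.group(1)] = m.group(2)
        elif m := REMOVED.search(line):
            closed.add(m.group(1))

for sid, user in sorted(opened.items(), key=lambda kv: int(kv[0])):
    state = "closed" if sid in closed else "still open"
    print(f"session {sid} ({user}): {state}")
# Over this section, that yields sessions 17-25 for core, all closed.
```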