Apr 30 03:33:15.028862 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:33:15.028893 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:33:15.028904 kernel: BIOS-provided physical RAM map: Apr 30 03:33:15.028911 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 03:33:15.028916 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 03:33:15.028923 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 03:33:15.028930 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 30 03:33:15.028936 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 30 03:33:15.028943 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 03:33:15.028951 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 03:33:15.028957 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 03:33:15.028963 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 03:33:15.028970 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 30 03:33:15.028976 kernel: NX (Execute Disable) protection: active Apr 30 03:33:15.028984 kernel: APIC: Static calls initialized Apr 30 03:33:15.028993 kernel: SMBIOS 2.8 present. 
Apr 30 03:33:15.029000 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 30 03:33:15.029006 kernel: Hypervisor detected: KVM Apr 30 03:33:15.029013 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 03:33:15.029020 kernel: kvm-clock: using sched offset of 2745456121 cycles Apr 30 03:33:15.029027 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 03:33:15.029034 kernel: tsc: Detected 2794.748 MHz processor Apr 30 03:33:15.029041 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:33:15.029049 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:33:15.029058 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Apr 30 03:33:15.029065 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 03:33:15.029072 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:33:15.029078 kernel: Using GB pages for direct mapping Apr 30 03:33:15.029085 kernel: ACPI: Early table checksum verification disabled Apr 30 03:33:15.029092 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 30 03:33:15.029099 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029106 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029113 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029122 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 30 03:33:15.029129 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029136 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029143 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029150 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:33:15.029156 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Apr 30 03:33:15.029163 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Apr 30 03:33:15.029174 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 30 03:33:15.029183 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Apr 30 03:33:15.029190 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Apr 30 03:33:15.029197 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Apr 30 03:33:15.029204 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Apr 30 03:33:15.029211 kernel: No NUMA configuration found Apr 30 03:33:15.029218 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 30 03:33:15.029228 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 30 03:33:15.029235 kernel: Zone ranges: Apr 30 03:33:15.029242 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:33:15.029249 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 30 03:33:15.029256 kernel: Normal empty Apr 30 03:33:15.029263 kernel: Movable zone start for each node Apr 30 03:33:15.029270 kernel: Early memory node ranges Apr 30 03:33:15.029277 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 03:33:15.029284 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 30 03:33:15.029292 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Apr 30 03:33:15.029301 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:33:15.029308 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 03:33:15.029315 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 30 03:33:15.029322 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 03:33:15.029329 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 03:33:15.029336 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 03:33:15.029344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 03:33:15.029351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 03:33:15.029358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:33:15.029367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 03:33:15.029375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 03:33:15.029382 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:33:15.029389 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 03:33:15.029396 kernel: TSC deadline timer available Apr 30 03:33:15.029403 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 30 03:33:15.029410 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 03:33:15.029417 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 30 03:33:15.029424 kernel: kvm-guest: setup PV sched yield Apr 30 03:33:15.029434 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 03:33:15.029441 kernel: Booting paravirtualized kernel on KVM Apr 30 03:33:15.029448 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:33:15.029455 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 30 03:33:15.029462 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Apr 30 03:33:15.029470 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Apr 30 03:33:15.029476 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 30 03:33:15.029483 kernel: kvm-guest: PV spinlocks enabled Apr 30 03:33:15.029491 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:33:15.029501 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:33:15.029509 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:33:15.029516 kernel: random: crng init done Apr 30 03:33:15.029523 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:33:15.029531 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:33:15.029538 kernel: Fallback order for Node 0: 0 Apr 30 03:33:15.029545 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Apr 30 03:33:15.029552 kernel: Policy zone: DMA32 Apr 30 03:33:15.029562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:33:15.029569 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 136900K reserved, 0K cma-reserved) Apr 30 03:33:15.029576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 30 03:33:15.029584 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:33:15.029591 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:33:15.029598 kernel: Dynamic Preempt: voluntary Apr 30 03:33:15.029637 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:33:15.029654 kernel: rcu: RCU event tracing is enabled. Apr 30 03:33:15.029662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 30 03:33:15.029674 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:33:15.029681 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:33:15.029688 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:33:15.029696 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:33:15.029703 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 30 03:33:15.029710 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 30 03:33:15.029718 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 03:33:15.029725 kernel: Console: colour VGA+ 80x25 Apr 30 03:33:15.029732 kernel: printk: console [ttyS0] enabled Apr 30 03:33:15.029742 kernel: ACPI: Core revision 20230628 Apr 30 03:33:15.029759 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 30 03:33:15.029766 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:33:15.029775 kernel: x2apic enabled Apr 30 03:33:15.029785 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 03:33:15.029795 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 30 03:33:15.029805 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 30 03:33:15.029816 kernel: kvm-guest: setup PV IPIs Apr 30 03:33:15.029838 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 03:33:15.029847 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 30 03:33:15.029854 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Apr 30 03:33:15.029862 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 03:33:15.029871 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 30 03:33:15.029879 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 30 03:33:15.029886 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:33:15.029894 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:33:15.029901 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:33:15.029912 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:33:15.029919 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Apr 30 03:33:15.029927 kernel: RETBleed: Mitigation: untrained return thunk Apr 30 03:33:15.029934 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 03:33:15.029942 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 03:33:15.029949 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 30 03:33:15.029957 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Apr 30 03:33:15.029965 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 30 03:33:15.029975 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:33:15.029982 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:33:15.029990 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:33:15.029997 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:33:15.030005 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Apr 30 03:33:15.030012 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:33:15.030020 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:33:15.030027 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:33:15.030034 kernel: landlock: Up and running. Apr 30 03:33:15.030044 kernel: SELinux: Initializing. Apr 30 03:33:15.030052 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 03:33:15.030059 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 03:33:15.030067 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Apr 30 03:33:15.030074 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 03:33:15.030082 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 03:33:15.030090 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 03:33:15.030098 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 30 03:33:15.030105 kernel: ... version: 0 Apr 30 03:33:15.030116 kernel: ... bit width: 48 Apr 30 03:33:15.030123 kernel: ... generic registers: 6 Apr 30 03:33:15.030130 kernel: ... value mask: 0000ffffffffffff Apr 30 03:33:15.030138 kernel: ... max period: 00007fffffffffff Apr 30 03:33:15.030145 kernel: ... fixed-purpose events: 0 Apr 30 03:33:15.030153 kernel: ... 
event mask: 000000000000003f Apr 30 03:33:15.030160 kernel: signal: max sigframe size: 1776 Apr 30 03:33:15.030168 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:33:15.030175 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:33:15.030185 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:33:15.030193 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:33:15.030200 kernel: .... node #0, CPUs: #1 #2 #3 Apr 30 03:33:15.030207 kernel: smp: Brought up 1 node, 4 CPUs Apr 30 03:33:15.030215 kernel: smpboot: Max logical packages: 1 Apr 30 03:33:15.030222 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Apr 30 03:33:15.030230 kernel: devtmpfs: initialized Apr 30 03:33:15.030237 kernel: x86/mm: Memory block size: 128MB Apr 30 03:33:15.030245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:33:15.030255 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 30 03:33:15.030262 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:33:15.030270 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:33:15.030277 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:33:15.030285 kernel: audit: type=2000 audit(1745983994.177:1): state=initialized audit_enabled=0 res=1 Apr 30 03:33:15.030292 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:33:15.030300 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:33:15.030307 kernel: cpuidle: using governor menu Apr 30 03:33:15.030315 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:33:15.030325 kernel: dca service started, version 1.12.1 Apr 30 03:33:15.030332 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 03:33:15.030340 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 30 03:33:15.030347 kernel: PCI: Using configuration type 1 for base access Apr 30 03:33:15.030355 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 03:33:15.030363 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:33:15.030370 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:33:15.030378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:33:15.030385 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:33:15.030395 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:33:15.030403 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:33:15.030410 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:33:15.030417 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:33:15.030425 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 03:33:15.030432 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:33:15.030440 kernel: ACPI: Interpreter enabled Apr 30 03:33:15.030447 kernel: ACPI: PM: (supports S0 S3 S5) Apr 30 03:33:15.030454 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:33:15.030464 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:33:15.030472 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 03:33:15.030480 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 03:33:15.030487 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 03:33:15.030705 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:33:15.030849 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 30 03:33:15.030972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 30 03:33:15.030986 kernel: PCI host bridge to bus 0000:00 Apr 30 03:33:15.031119 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 03:33:15.031233 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 03:33:15.031345 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:33:15.031485 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 30 03:33:15.031708 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 03:33:15.031836 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 30 03:33:15.031953 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 03:33:15.032091 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 03:33:15.032224 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 30 03:33:15.032345 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 30 03:33:15.032464 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 30 03:33:15.032583 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 30 03:33:15.032718 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 03:33:15.032864 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 30 03:33:15.032985 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 30 03:33:15.033104 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 30 03:33:15.033222 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 30 03:33:15.033367 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 30 03:33:15.033487 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 30 03:33:15.033623 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 30 
03:33:15.033759 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 30 03:33:15.033889 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 30 03:33:15.034009 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 30 03:33:15.034128 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 30 03:33:15.034247 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 30 03:33:15.034366 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 30 03:33:15.034493 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 03:33:15.034635 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 03:33:15.034775 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 03:33:15.034897 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 30 03:33:15.035025 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 30 03:33:15.035162 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 03:33:15.035281 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 03:33:15.035296 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 03:33:15.035304 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 03:33:15.035312 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 03:33:15.035320 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 03:33:15.035327 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 03:33:15.035335 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 03:33:15.035343 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 03:33:15.035351 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 03:33:15.035358 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 03:33:15.035369 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 03:33:15.035376 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 03:33:15.035384 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 03:33:15.035392 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 03:33:15.035400 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 03:33:15.035408 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 03:33:15.035415 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 03:33:15.035423 kernel: iommu: Default domain type: Translated Apr 30 03:33:15.035431 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:33:15.035441 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:33:15.035449 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:33:15.035457 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 03:33:15.035464 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 30 03:33:15.035583 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 30 03:33:15.035776 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 03:33:15.035901 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 03:33:15.035912 kernel: vgaarb: loaded Apr 30 03:33:15.035924 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 30 03:33:15.035932 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 30 03:33:15.035940 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 03:33:15.035947 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 
03:33:15.035955 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:33:15.035963 kernel: pnp: PnP ACPI init Apr 30 03:33:15.036122 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 03:33:15.036135 kernel: pnp: PnP ACPI: found 6 devices Apr 30 03:33:15.036147 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:33:15.036155 kernel: NET: Registered PF_INET protocol family Apr 30 03:33:15.036162 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 03:33:15.036170 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 03:33:15.036178 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:33:15.036185 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 03:33:15.036193 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 03:33:15.036201 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 03:33:15.036208 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 03:33:15.036219 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 03:33:15.036226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:33:15.036234 kernel: NET: Registered PF_XDP protocol family Apr 30 03:33:15.036345 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:33:15.036455 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:33:15.036564 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:33:15.036690 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 30 03:33:15.036810 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 30 03:33:15.036924 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 30 03:33:15.036934 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:33:15.036942 kernel: Initialise system trusted keyrings Apr 30 03:33:15.036950 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 03:33:15.036958 kernel: Key type asymmetric registered Apr 30 03:33:15.036965 kernel: Asymmetric key parser 'x509' registered Apr 30 03:33:15.036973 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:33:15.036981 kernel: io scheduler mq-deadline registered Apr 30 03:33:15.036989 kernel: io scheduler kyber registered Apr 30 03:33:15.036996 kernel: io scheduler bfq registered Apr 30 03:33:15.037007 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:33:15.037015 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 30 03:33:15.037023 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 30 03:33:15.037031 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 30 03:33:15.037039 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:33:15.037046 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:33:15.037054 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 03:33:15.037062 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 03:33:15.037070 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 03:33:15.037080 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 03:33:15.037220 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 30 03:33:15.037341 kernel: 
rtc_cmos 00:04: registered as rtc0 Apr 30 03:33:15.037455 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T03:33:14 UTC (1745983994) Apr 30 03:33:15.037570 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 30 03:33:15.037581 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 03:33:15.037589 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:33:15.037665 kernel: Segment Routing with IPv6 Apr 30 03:33:15.037674 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:33:15.037682 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:33:15.037690 kernel: Key type dns_resolver registered Apr 30 03:33:15.037697 kernel: IPI shorthand broadcast: enabled Apr 30 03:33:15.037705 kernel: sched_clock: Marking stable (907004709, 134771156)->(1076670406, -34894541) Apr 30 03:33:15.037713 kernel: registered taskstats version 1 Apr 30 03:33:15.037720 kernel: Loading compiled-in X.509 certificates Apr 30 03:33:15.037728 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:33:15.037739 kernel: Key type .fscrypt registered Apr 30 03:33:15.037754 kernel: Key type fscrypt-provisioning registered Apr 30 03:33:15.037762 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:33:15.037770 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:33:15.037778 kernel: ima: No architecture policies found Apr 30 03:33:15.037786 kernel: clk: Disabling unused clocks Apr 30 03:33:15.037793 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:33:15.037801 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:33:15.037808 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:33:15.037818 kernel: Run /init as init process Apr 30 03:33:15.037826 kernel: with arguments: Apr 30 03:33:15.037833 kernel: /init Apr 30 03:33:15.037841 kernel: with environment: Apr 30 03:33:15.037848 kernel: HOME=/ Apr 30 03:33:15.037855 kernel: TERM=linux Apr 30 03:33:15.037863 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:33:15.037875 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:33:15.037887 systemd[1]: Detected virtualization kvm. Apr 30 03:33:15.037896 systemd[1]: Detected architecture x86-64. Apr 30 03:33:15.037904 systemd[1]: Running in initrd. Apr 30 03:33:15.037912 systemd[1]: No hostname configured, using default hostname. Apr 30 03:33:15.037919 systemd[1]: Hostname set to . Apr 30 03:33:15.037928 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:33:15.037936 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:33:15.037944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:33:15.037955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:33:15.037964 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:33:15.037984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 30 03:33:15.037995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:33:15.038003 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:33:15.038016 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:33:15.038025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:33:15.038033 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:33:15.038042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:33:15.038050 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:33:15.038059 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:33:15.038067 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:33:15.038076 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:33:15.038086 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:33:15.038095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:33:15.038103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:33:15.038112 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:33:15.038120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:33:15.038129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:33:15.038137 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:33:15.038145 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:33:15.038154 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:33:15.038165 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:33:15.038173 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:33:15.038182 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:33:15.038190 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:33:15.038198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:33:15.038207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:33:15.038215 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:33:15.038224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:33:15.038234 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:33:15.038261 systemd-journald[193]: Collecting audit messages is disabled. Apr 30 03:33:15.038282 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:33:15.038295 systemd-journald[193]: Journal started Apr 30 03:33:15.038322 systemd-journald[193]: Runtime Journal (/run/log/journal/3d32e33ec853497b941f2ec5a7044fba) is 6.0M, max 48.4M, 42.3M free. Apr 30 03:33:15.024460 systemd-modules-load[194]: Inserted module 'overlay' Apr 30 03:33:15.061335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 30 03:33:15.061353 kernel: Bridge firewalling registered Apr 30 03:33:15.052726 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 30 03:33:15.065879 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:33:15.066384 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:33:15.070870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:33:15.073433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:33:15.092968 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:33:15.096882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:33:15.100388 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:33:15.107342 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:33:15.114088 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:33:15.132876 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:33:15.133852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:33:15.135703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:33:15.140103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:33:15.143928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:33:15.158298 dracut-cmdline[226]: dracut-dracut-053 Apr 30 03:33:15.162080 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:33:15.181522 systemd-resolved[230]: Positive Trust Anchors: Apr 30 03:33:15.181541 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:33:15.181576 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:33:15.184870 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 30 03:33:15.186292 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:33:15.195093 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:33:15.257657 kernel: SCSI subsystem initialized Apr 30 03:33:15.269666 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 03:33:15.282640 kernel: iscsi: registered transport (tcp) Apr 30 03:33:15.305662 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:33:15.305762 kernel: QLogic iSCSI HBA Driver Apr 30 03:33:15.359435 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:33:15.375767 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:33:15.415110 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:33:15.415185 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:33:15.416176 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:33:15.458637 kernel: raid6: avx2x4 gen() 28498 MB/s Apr 30 03:33:15.475640 kernel: raid6: avx2x2 gen() 27382 MB/s Apr 30 03:33:15.492807 kernel: raid6: avx2x1 gen() 21946 MB/s Apr 30 03:33:15.492894 kernel: raid6: using algorithm avx2x4 gen() 28498 MB/s Apr 30 03:33:15.510803 kernel: raid6: .... xor() 7010 MB/s, rmw enabled Apr 30 03:33:15.510861 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:33:15.531630 kernel: xor: automatically using best checksumming function avx Apr 30 03:33:15.692638 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:33:15.708773 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:33:15.721910 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:33:15.734306 systemd-udevd[413]: Using default interface naming scheme 'v255'. Apr 30 03:33:15.738962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:33:15.753841 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:33:15.773585 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Apr 30 03:33:15.811717 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:33:15.820945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:33:15.890211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:33:15.904208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:33:15.915839 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:33:15.917803 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:33:15.920712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:33:15.920977 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:33:15.929807 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:33:15.938216 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 30 03:33:15.956470 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 03:33:15.956646 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:33:15.956659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:33:15.956670 kernel: GPT:9289727 != 19775487 Apr 30 03:33:15.956681 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:33:15.956691 kernel: GPT:9289727 != 19775487 Apr 30 03:33:15.956712 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:33:15.956749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:33:15.939988 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 03:33:15.961621 kernel: libata version 3.00 loaded. Apr 30 03:33:15.968680 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 03:33:16.005047 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 03:33:16.005072 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 03:33:16.005233 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 03:33:16.005388 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:33:16.005400 kernel: AES CTR mode by8 optimization enabled Apr 30 03:33:16.005410 kernel: scsi host0: ahci Apr 30 03:33:16.005573 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (468) Apr 30 03:33:16.005585 kernel: scsi host1: ahci Apr 30 03:33:16.005766 kernel: scsi host2: ahci Apr 30 03:33:16.005935 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (467) Apr 30 03:33:16.005951 kernel: scsi host3: ahci Apr 30 03:33:16.006105 kernel: scsi host4: ahci Apr 30 03:33:16.006251 kernel: scsi host5: ahci Apr 30 03:33:16.006397 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 30 03:33:16.006408 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 30 03:33:16.006419 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 30 03:33:16.006429 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 30 03:33:16.006439 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 30 03:33:16.006453 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 30 03:33:15.972782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:33:15.972912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:33:15.974458 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:33:15.977046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:33:15.978187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:33:15.979941 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:33:15.992221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:33:16.010312 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 03:33:16.020075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 03:33:16.061066 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 03:33:16.118941 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 03:33:16.120752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:33:16.130014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:33:16.144935 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:33:16.147342 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:33:16.155281 disk-uuid[556]: Primary Header is updated. Apr 30 03:33:16.155281 disk-uuid[556]: Secondary Entries is updated. 
Apr 30 03:33:16.155281 disk-uuid[556]: Secondary Header is updated. Apr 30 03:33:16.159285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:33:16.161642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:33:16.170151 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:33:16.311637 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 03:33:16.319942 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 03:33:16.319970 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 03:33:16.320635 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 03:33:16.321639 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 03:33:16.322640 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 03:33:16.322660 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 03:33:16.323295 kernel: ata3.00: applying bridge limits Apr 30 03:33:16.324634 kernel: ata3.00: configured for UDMA/100 Apr 30 03:33:16.326643 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 03:33:16.384647 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 03:33:16.398519 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:33:16.398541 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 30 03:33:17.164268 disk-uuid[557]: The operation has completed successfully. Apr 30 03:33:17.165854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:33:17.198385 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:33:17.198522 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:33:17.224880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:33:17.228511 sh[593]: Success Apr 30 03:33:17.250642 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 03:33:17.290016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:33:17.304317 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:33:17.306815 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:33:17.320930 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:33:17.320999 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:33:17.321013 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:33:17.322133 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:33:17.323002 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:33:17.328440 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:33:17.331349 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:33:17.343843 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:33:17.347621 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 30 03:33:17.386763 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:33:17.386838 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:33:17.388738 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:33:17.391635 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:33:17.402463 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:33:17.404755 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:33:17.415872 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:33:17.420817 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:33:17.569825 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:33:17.580885 ignition[714]: Ignition 2.19.0 Apr 30 03:33:17.580904 ignition[714]: Stage: fetch-offline Apr 30 03:33:17.582898 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:33:17.580994 ignition[714]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:33:17.581006 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:33:17.581128 ignition[714]: parsed url from cmdline: "" Apr 30 03:33:17.581133 ignition[714]: no config URL provided Apr 30 03:33:17.581138 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:33:17.581146 ignition[714]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:33:17.581175 ignition[714]: op(1): [started] loading QEMU firmware config module Apr 30 03:33:17.581191 ignition[714]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 03:33:17.592757 ignition[714]: op(1): [finished] loading QEMU firmware config module Apr 30 03:33:17.609943 systemd-networkd[780]: lo: Link UP Apr 30 03:33:17.609955 systemd-networkd[780]: lo: Gained carrier Apr 30 03:33:17.611729 systemd-networkd[780]: Enumeration completed Apr 30 03:33:17.612196 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:33:17.612200 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:33:17.612391 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:33:17.614442 systemd[1]: Reached target network.target - Network. Apr 30 03:33:17.614568 systemd-networkd[780]: eth0: Link UP Apr 30 03:33:17.614572 systemd-networkd[780]: eth0: Gained carrier Apr 30 03:33:17.614579 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:33:17.637701 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:33:17.661290 ignition[714]: parsing config with SHA512: c5b655c2a191135ba01069b8663a6f8e5ac7a1c0e7bc889e5bbee981b72b67900f1b241df20a322c36c296423be2ba65a3d5cc65ba8fb29b2e4c7ae1ebd6a9ba Apr 30 03:33:17.670117 unknown[714]: fetched base config from "system" Apr 30 03:33:17.670135 unknown[714]: fetched user config from "qemu" Apr 30 03:33:17.670740 ignition[714]: fetch-offline: fetch-offline passed Apr 30 03:33:17.670827 ignition[714]: Ignition finished successfully Apr 30 03:33:17.675756 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 30 03:33:17.677277 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 03:33:17.685936 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:33:17.751254 ignition[785]: Ignition 2.19.0 Apr 30 03:33:17.751266 ignition[785]: Stage: kargs Apr 30 03:33:17.751479 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:33:17.751492 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:33:17.752358 ignition[785]: kargs: kargs passed Apr 30 03:33:17.755815 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:33:17.752410 ignition[785]: Ignition finished successfully Apr 30 03:33:17.813009 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:33:17.832703 ignition[793]: Ignition 2.19.0 Apr 30 03:33:17.832719 ignition[793]: Stage: disks Apr 30 03:33:17.832948 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:33:17.832962 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:33:17.837663 ignition[793]: disks: disks passed Apr 30 03:33:17.838447 ignition[793]: Ignition finished successfully Apr 30 03:33:17.841967 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:33:17.842631 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:33:17.844971 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:33:17.845327 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:33:17.845881 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:33:17.846232 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:33:17.875008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:33:17.890639 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:33:17.897733 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:33:17.905778 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:33:18.013649 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:33:18.014411 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:33:18.016786 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:33:18.028772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:33:18.032209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:33:18.035266 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:33:18.042648 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Apr 30 03:33:18.042671 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:33:18.042694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:33:18.042708 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:33:18.042722 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:33:18.035348 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 30 03:33:18.042750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:33:18.049643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:33:18.053070 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:33:18.057726 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:33:18.103532 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:33:18.109437 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:33:18.114966 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:33:18.120705 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:33:18.229794 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:33:18.282742 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:33:18.286646 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:33:18.312651 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:33:18.319281 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:33:18.333752 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:33:18.377394 ignition[930]: INFO : Ignition 2.19.0 Apr 30 03:33:18.377394 ignition[930]: INFO : Stage: mount Apr 30 03:33:18.379198 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:33:18.379198 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:33:18.379198 ignition[930]: INFO : mount: mount passed Apr 30 03:33:18.379198 ignition[930]: INFO : Ignition finished successfully Apr 30 03:33:18.385066 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:33:18.398831 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:33:18.413845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:33:18.427250 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Apr 30 03:33:18.427295 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:33:18.428310 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:33:18.428330 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:33:18.431641 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:33:18.433033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:33:18.456788 ignition[956]: INFO : Ignition 2.19.0 Apr 30 03:33:18.456788 ignition[956]: INFO : Stage: files Apr 30 03:33:18.458498 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:33:18.458498 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:33:18.458498 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:33:18.462479 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:33:18.462479 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:33:18.467415 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:33:18.469185 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:33:18.471031 unknown[956]: wrote ssh authorized keys file for user: core Apr 30 03:33:18.472270 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:33:18.473840 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:33:18.476035 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:33:18.554549 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:33:18.654196 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:33:18.654196 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:33:18.658844 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:33:18.660904 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:33:18.664404 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:33:18.664404 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:33:18.664404 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:33:18.664404 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:33:18.684580 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 03:33:18.976952 systemd-networkd[780]: eth0: Gained IPv6LL Apr 30 03:33:19.145308 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:33:19.834011 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 03:33:19.834011 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 30 03:33:19.839489 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 30 03:33:19.879799 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 03:33:19.891366 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 03:33:19.891366 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 30 03:33:19.891366 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:33:19.891366 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:33:19.899513 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:33:19.899513 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:33:19.899513 ignition[956]: INFO : files: files passed Apr 30 03:33:19.899513 ignition[956]: INFO : Ignition finished successfully Apr 30 03:33:19.896086 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:33:19.903926 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:33:19.907713 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 30 03:33:19.909720 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:33:19.909845 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:33:19.919342 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Apr 30 03:33:19.922553 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:33:19.922553 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:33:19.927542 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:33:19.925647 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:33:19.927852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:33:19.938893 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:33:19.973462 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:33:19.973683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:33:19.976281 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:33:19.978426 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:33:19.980809 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:33:19.990838 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:33:20.009581 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:33:20.021785 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:33:20.033838 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:33:20.035176 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:33:20.037488 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:33:20.039595 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:33:20.039736 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:33:20.041912 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:33:20.043655 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:33:20.045706 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:33:20.047788 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:33:20.049984 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:33:20.052184 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:33:20.054337 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:33:20.056675 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:33:20.058723 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:33:20.060914 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:33:20.062820 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:33:20.062995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:33:20.065044 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 03:33:20.066695 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:33:20.068817 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:33:20.068931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:33:20.071133 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:33:20.071246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:33:20.073507 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:33:20.073636 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:33:20.075680 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:33:20.077438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:33:20.077584 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:33:20.080156 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:33:20.082045 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:33:20.084094 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:33:20.084194 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:33:20.086123 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:33:20.086216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:33:20.088249 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:33:20.088365 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:33:20.090296 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:33:20.090403 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:33:20.103756 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:33:20.105057 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:33:20.105174 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:33:20.108061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:33:20.109146 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:33:20.109292 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:33:20.111785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:33:20.111995 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:33:20.117209 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:33:20.120792 ignition[1012]: INFO : Ignition 2.19.0 Apr 30 03:33:20.120792 ignition[1012]: INFO : Stage: umount Apr 30 03:33:20.120792 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:33:20.120792 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:33:20.117326 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:33:20.128181 ignition[1012]: INFO : umount: umount passed Apr 30 03:33:20.128181 ignition[1012]: INFO : Ignition finished successfully Apr 30 03:33:20.123548 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:33:20.123727 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:33:20.126089 systemd[1]: Stopped target network.target - Network. 
Apr 30 03:33:20.128177 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:33:20.128241 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:33:20.130059 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:33:20.130108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:33:20.131981 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:33:20.132030 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:33:20.133958 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:33:20.134006 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:33:20.136164 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:33:20.138224 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:33:20.141426 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:33:20.144689 systemd-networkd[780]: eth0: DHCPv6 lease lost Apr 30 03:33:20.148765 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:33:20.148968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:33:20.151794 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:33:20.151945 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:33:20.155149 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:33:20.155225 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:33:20.168733 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:33:20.170715 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:33:20.170775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:33:20.173071 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:33:20.173120 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:33:20.175540 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:33:20.175588 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:33:20.177708 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:33:20.177757 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:33:20.180199 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:33:20.190419 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:33:20.190577 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:33:20.199552 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:33:20.199799 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:33:20.202054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:33:20.202106 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:33:20.204161 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:33:20.204205 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:33:20.206215 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:33:20.206272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 30 03:33:20.208575 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:33:20.208645 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:33:20.210402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:33:20.210452 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:33:20.222779 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:33:20.223233 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:33:20.223291 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:33:20.223637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:33:20.223685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:33:20.231746 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:33:20.231879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:33:20.292755 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:33:20.292911 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:33:20.293951 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:33:20.296022 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:33:20.296076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:33:20.309740 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:33:20.316442 systemd[1]: Switching root. Apr 30 03:33:20.347293 systemd-journald[193]: Journal stopped Apr 30 03:33:21.659343 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Apr 30 03:33:21.659423 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:33:21.659443 kernel: SELinux: policy capability open_perms=1 Apr 30 03:33:21.659457 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:33:21.659471 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:33:21.659489 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:33:21.659502 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:33:21.659521 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:33:21.659538 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:33:21.659552 kernel: audit: type=1403 audit(1745984000.801:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:33:21.659567 systemd[1]: Successfully loaded SELinux policy in 41.326ms. Apr 30 03:33:21.659661 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.759ms. Apr 30 03:33:21.659679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:33:21.659694 systemd[1]: Detected virtualization kvm. Apr 30 03:33:21.659708 systemd[1]: Detected architecture x86-64. Apr 30 03:33:21.659722 systemd[1]: Detected first boot. Apr 30 03:33:21.659742 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:33:21.659759 zram_generator::config[1057]: No configuration found. Apr 30 03:33:21.659775 systemd[1]: Populated /etc with preset unit settings. 
Apr 30 03:33:21.659789 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:33:21.659803 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:33:21.659818 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:33:21.659832 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:33:21.659847 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:33:21.659861 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:33:21.659879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:33:21.659894 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:33:21.659909 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:33:21.659924 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:33:21.659938 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:33:21.659958 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:33:21.659973 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:33:21.659987 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:33:21.660001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:33:21.660018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:33:21.660033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:33:21.660047 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:33:21.660061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:33:21.660076 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:33:21.660090 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:33:21.660104 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:33:21.660121 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:33:21.660135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:33:21.660149 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:33:21.660163 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:33:21.660177 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:33:21.660191 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:33:21.660205 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:33:21.660219 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:33:21.660233 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:33:21.660247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:33:21.660265 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:33:21.660280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Apr 30 03:33:21.660297 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:33:21.660311 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:33:21.660325 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:21.660339 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:33:21.660354 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:33:21.660368 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:33:21.660382 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:33:21.660400 systemd[1]: Reached target machines.target - Containers. Apr 30 03:33:21.660414 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:33:21.660428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:33:21.660443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:33:21.660457 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:33:21.660471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:33:21.660486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:33:21.660500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:33:21.660517 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:33:21.660531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:33:21.660546 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:33:21.660560 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:33:21.660576 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:33:21.660591 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:33:21.660626 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:33:21.660650 kernel: fuse: init (API version 7.39) Apr 30 03:33:21.660667 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:33:21.660681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:33:21.660696 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:33:21.660711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:33:21.660724 kernel: loop: module loaded Apr 30 03:33:21.660739 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:33:21.660753 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:33:21.660767 systemd[1]: Stopped verity-setup.service. Apr 30 03:33:21.660782 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:21.660799 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 30 03:33:21.660814 kernel: ACPI: bus type drm_connector registered Apr 30 03:33:21.660836 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:33:21.660850 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:33:21.660864 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:33:21.660878 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:33:21.660895 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:33:21.660928 systemd-journald[1135]: Collecting audit messages is disabled. Apr 30 03:33:21.660955 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:33:21.660970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:33:21.660984 systemd-journald[1135]: Journal started Apr 30 03:33:21.661013 systemd-journald[1135]: Runtime Journal (/run/log/journal/3d32e33ec853497b941f2ec5a7044fba) is 6.0M, max 48.4M, 42.3M free. Apr 30 03:33:21.397736 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:33:21.416396 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 03:33:21.416892 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:33:21.664626 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:33:21.665886 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:33:21.666071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:33:21.667581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:33:21.667926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:33:21.669452 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:33:21.669655 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:33:21.671018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:33:21.671187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:33:21.672710 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:33:21.672883 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:33:21.674252 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:33:21.674420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:33:21.675815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:33:21.677209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:33:21.678742 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:33:21.692141 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:33:21.701685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:33:21.704088 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:33:21.705377 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:33:21.705411 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:33:21.707621 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Apr 30 03:33:21.710104 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:33:21.712988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:33:21.714358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:33:21.718908 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:33:21.723147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:33:21.724474 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:33:21.734041 systemd-journald[1135]: Time spent on flushing to /var/log/journal/3d32e33ec853497b941f2ec5a7044fba is 24.468ms for 945 entries. Apr 30 03:33:21.734041 systemd-journald[1135]: System Journal (/var/log/journal/3d32e33ec853497b941f2ec5a7044fba) is 8.0M, max 195.6M, 187.6M free. Apr 30 03:33:21.777855 systemd-journald[1135]: Received client request to flush runtime journal. Apr 30 03:33:21.731297 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:33:21.732589 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:33:21.735380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:33:21.739738 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:33:21.744771 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:33:21.748705 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:33:21.750737 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:33:21.752360 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:33:21.760416 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:33:21.769039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:33:21.774122 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:33:21.783574 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:33:21.786893 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:33:21.789095 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:33:21.797392 kernel: loop0: detected capacity change from 0 to 142488 Apr 30 03:33:21.796707 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:33:21.811917 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 03:33:21.819691 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:33:21.820456 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:33:21.825072 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:33:21.828166 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:33:21.837799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
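The journald lines above size the runtime journal in /run/log/journal at 6.0M (max 48.4M) and the persistent journal in /var/log/journal at 8.0M (max 195.6M), and record the flush of the former into the latter. The same information can be inspected on a running system with stock journalctl; the commands below are standard journald tooling, and the SystemMaxUse value is only an illustrative assumption.

    # total disk usage of active and archived journal files
    journalctl --disk-usage
    # explicitly flush /run/log/journal into /var/log/journal
    journalctl --flush
    # optional persistent size cap, set in /etc/systemd/journald.conf:
    #   [Journal]
    #   SystemMaxUse=200M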
Apr 30 03:33:21.845681 kernel: loop1: detected capacity change from 0 to 205544 Apr 30 03:33:21.913828 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Apr 30 03:33:21.913850 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Apr 30 03:33:21.920974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:33:21.932808 kernel: loop2: detected capacity change from 0 to 140768 Apr 30 03:33:21.974649 kernel: loop3: detected capacity change from 0 to 142488 Apr 30 03:33:22.010638 kernel: loop4: detected capacity change from 0 to 205544 Apr 30 03:33:22.020644 kernel: loop5: detected capacity change from 0 to 140768 Apr 30 03:33:22.030084 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 03:33:22.030739 (sd-merge)[1196]: Merged extensions into '/usr'. Apr 30 03:33:22.036824 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:33:22.036845 systemd[1]: Reloading... Apr 30 03:33:22.132868 zram_generator::config[1221]: No configuration found. Apr 30 03:33:22.279368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:33:22.302088 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:33:22.332866 systemd[1]: Reloading finished in 295 ms. Apr 30 03:33:22.370294 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:33:22.372154 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:33:22.392777 systemd[1]: Starting ensure-sysext.service... Apr 30 03:33:22.396791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:33:22.402981 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:33:22.402997 systemd[1]: Reloading... Apr 30 03:33:22.438774 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:33:22.439168 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:33:22.440257 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:33:22.440553 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 30 03:33:22.441280 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 30 03:33:22.447910 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:33:22.448010 systemd-tmpfiles[1260]: Skipping /boot Apr 30 03:33:22.461526 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:33:22.461795 systemd-tmpfiles[1260]: Skipping /boot Apr 30 03:33:22.465628 zram_generator::config[1286]: No configuration found. Apr 30 03:33:22.586445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:33:22.646301 systemd[1]: Reloading finished in 242 ms. Apr 30 03:33:22.678222 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
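The (sd-merge) entries above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, followed by a service-manager reload so the units they ship become visible. After boot the merge can be checked with the stock systemd-sysext tool; nothing Flatcar-specific is assumed here.

    # show each hierarchy (/usr, /opt) and the extension images merged into it
    systemd-sysext status
    # unmerge and re-merge after adding or removing a *.raw image under
    # /etc/extensions, /run/extensions or /var/lib/extensions
    systemd-sysext refresh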
Apr 30 03:33:22.686280 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:33:22.791957 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:33:22.795111 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:33:22.798929 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:33:22.813334 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:33:22.816734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:22.816906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:33:22.819341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:33:22.822728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:33:22.828492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:33:22.830961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:33:22.836509 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:33:22.857102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:22.858302 augenrules[1348]: No rules Apr 30 03:33:22.858548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:33:22.858757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:33:22.874778 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:33:22.876342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:33:22.876511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:33:22.878210 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:33:22.878380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:33:22.887039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:22.887228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:33:22.897958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:33:22.903326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:33:22.909107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:33:22.910725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:33:22.910878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:22.912268 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:33:22.914167 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:33:22.916351 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 30 03:33:22.918630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:33:22.918906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:33:22.921794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:33:22.922022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:33:22.924797 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:33:22.925178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:33:22.939775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:22.940084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:33:22.946454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:33:22.949593 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:33:22.952043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:33:22.956756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:33:22.958333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:33:22.958597 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:33:22.959771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:33:22.962027 systemd[1]: Finished ensure-sysext.service. Apr 30 03:33:22.963835 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:33:22.966008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:33:22.966296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:33:22.968228 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:33:22.968455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:33:22.970336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:33:22.970577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:33:22.972871 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:33:22.973078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:33:22.980831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:33:22.980930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:33:22.984153 systemd-resolved[1338]: Positive Trust Anchors: Apr 30 03:33:22.984175 systemd-resolved[1338]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:33:22.984220 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:33:22.989180 systemd-resolved[1338]: Defaulting to hostname 'linux'. Apr 30 03:33:22.989748 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:33:22.993212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:33:22.996150 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:33:22.997581 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:33:22.997925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:33:23.001931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:33:23.013211 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:33:23.023683 systemd-udevd[1382]: Using default interface naming scheme 'v255'. Apr 30 03:33:23.041533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:33:23.055016 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:33:23.079156 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:33:23.081112 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:33:23.092478 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:33:23.128393 systemd-networkd[1393]: lo: Link UP Apr 30 03:33:23.128408 systemd-networkd[1393]: lo: Gained carrier Apr 30 03:33:23.130471 systemd-networkd[1393]: Enumeration completed Apr 30 03:33:23.130555 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:33:23.132337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1394) Apr 30 03:33:23.132362 systemd[1]: Reached target network.target - Network. Apr 30 03:33:23.134374 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:33:23.134385 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:33:23.136698 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:33:23.136737 systemd-networkd[1393]: eth0: Link UP Apr 30 03:33:23.136742 systemd-networkd[1393]: eth0: Gained carrier Apr 30 03:33:23.136751 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 03:33:23.193140 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:33:23.202849 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 03:33:23.203183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 03:33:23.203205 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 03:33:23.203401 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 03:33:23.205538 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:33:23.206800 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Apr 30 03:33:23.782836 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 03:33:23.782955 systemd-timesyncd[1381]: Initial clock synchronization to Wed 2025-04-30 03:33:23.782743 UTC. Apr 30 03:33:23.783301 systemd-resolved[1338]: Clock change detected. Flushing caches. Apr 30 03:33:23.786620 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:33:23.794603 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 03:33:23.807575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:33:23.817825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:33:23.843874 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:33:23.902268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:33:23.909635 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:33:23.923659 kernel: kvm_amd: TSC scaling supported Apr 30 03:33:23.923767 kernel: kvm_amd: Nested Virtualization enabled Apr 30 03:33:23.924764 kernel: kvm_amd: Nested Paging enabled Apr 30 03:33:23.924801 kernel: kvm_amd: LBR virtualization supported Apr 30 03:33:23.925640 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Apr 30 03:33:23.925667 kernel: kvm_amd: Virtual GIF supported Apr 30 03:33:23.950400 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:33:23.985375 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:33:24.008890 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:33:24.021853 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:33:24.034809 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:33:24.114257 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:33:24.115963 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:33:24.117244 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:33:24.118575 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:33:24.120034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:33:24.121733 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:33:24.123128 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:33:24.124697 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
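Here systemd-networkd matched eth0 against the shipped /usr/lib/systemd/network/zz-default.network, brought the link up, and obtained 10.0.0.146/16 from 10.0.0.1 over DHCPv4; systemd-timesyncd then reached 10.0.0.1:123 and stepped the clock, which is why systemd-resolved flushes its caches at this point. An illustrative .network unit that yields the same DHCP behaviour is sketched below; the real zz-default.network shipped by Flatcar matches interfaces more generally and may set further options.

    [Match]
    # eth0 is used here only to mirror the interface seen in the log
    Name=eth0

    [Network]
    DHCP=yes

A machine-specific unit would normally go in /etc/systemd/network/ under a name that sorts before zz-default.network (for example 10-eth0.network) so that it is matched first.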
Apr 30 03:33:24.126125 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:33:24.126169 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:33:24.127203 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:33:24.129206 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:33:24.132388 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:33:24.138120 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:33:24.140861 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:33:24.142827 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:33:24.144179 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:33:24.145307 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:33:24.146403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:33:24.146442 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:33:24.147907 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:33:24.150659 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:33:24.155093 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:33:24.158470 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:33:24.159946 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:33:24.161740 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:33:24.163635 jq[1437]: false Apr 30 03:33:24.164793 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:33:24.167965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:33:24.171385 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:33:24.172419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:33:24.184868 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:33:24.186806 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:33:24.187475 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:33:24.188915 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:33:24.193135 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:33:24.198144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 30 03:33:24.198275 extend-filesystems[1438]: Found loop3 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found loop4 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found loop5 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found sr0 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda1 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda2 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda3 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found usr Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda4 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda6 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda7 Apr 30 03:33:24.199521 extend-filesystems[1438]: Found vda9 Apr 30 03:33:24.199521 extend-filesystems[1438]: Checking size of /dev/vda9 Apr 30 03:33:24.198534 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:33:24.203000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:33:24.226799 jq[1451]: true Apr 30 03:33:24.203445 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:33:24.221764 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:33:24.234177 update_engine[1448]: I20250430 03:33:24.233814 1448 main.cc:92] Flatcar Update Engine starting Apr 30 03:33:24.240275 dbus-daemon[1436]: [system] SELinux support is enabled Apr 30 03:33:24.240762 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:33:24.246476 jq[1457]: true Apr 30 03:33:24.253843 extend-filesystems[1438]: Resized partition /dev/vda9 Apr 30 03:33:24.254827 update_engine[1448]: I20250430 03:33:24.249775 1448 update_check_scheduler.cc:74] Next update check in 8m53s Apr 30 03:33:24.254889 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:33:24.254661 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:33:24.256343 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:33:24.256601 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:33:24.259213 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:33:24.259756 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 03:33:24.259239 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:33:24.259969 systemd-logind[1444]: New seat seat0. Apr 30 03:33:24.266862 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:33:24.272576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1401) Apr 30 03:33:24.280184 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:33:24.280270 tar[1456]: linux-amd64/helm Apr 30 03:33:24.282014 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:33:24.282471 dbus-daemon[1436]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 03:33:24.282046 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 30 03:33:24.284390 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:33:24.284420 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:33:24.295601 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 03:33:24.342442 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:33:24.355816 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 03:33:24.355816 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:33:24.355816 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 03:33:24.361763 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Apr 30 03:33:24.365269 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:33:24.365496 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:33:24.410810 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:33:24.450038 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:33:24.493287 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:33:24.519240 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:33:24.528509 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:33:24.528822 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:33:24.550525 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:33:24.629203 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:33:24.634339 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:33:24.641149 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:33:24.644281 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:33:24.645919 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:33:24.648333 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:33:24.653714 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:33:24.808733 containerd[1463]: time="2025-04-30T03:33:24.808504642Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:33:24.831569 containerd[1463]: time="2025-04-30T03:33:24.831478485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.833965 containerd[1463]: time="2025-04-30T03:33:24.833918842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:33:24.833965 containerd[1463]: time="2025-04-30T03:33:24.833959959Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:33:24.834045 containerd[1463]: time="2025-04-30T03:33:24.833981319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 30 03:33:24.834273 containerd[1463]: time="2025-04-30T03:33:24.834243341Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:33:24.834320 containerd[1463]: time="2025-04-30T03:33:24.834273237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834397 containerd[1463]: time="2025-04-30T03:33:24.834375449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834427 containerd[1463]: time="2025-04-30T03:33:24.834396017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834732 containerd[1463]: time="2025-04-30T03:33:24.834697693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834732 containerd[1463]: time="2025-04-30T03:33:24.834723582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834810 containerd[1463]: time="2025-04-30T03:33:24.834746525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834810 containerd[1463]: time="2025-04-30T03:33:24.834760521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.834914 containerd[1463]: time="2025-04-30T03:33:24.834894272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.835272 containerd[1463]: time="2025-04-30T03:33:24.835241944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:33:24.835429 containerd[1463]: time="2025-04-30T03:33:24.835398848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:33:24.835429 containerd[1463]: time="2025-04-30T03:33:24.835421731Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:33:24.835607 containerd[1463]: time="2025-04-30T03:33:24.835566042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:33:24.835689 containerd[1463]: time="2025-04-30T03:33:24.835664937Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:33:24.842652 containerd[1463]: time="2025-04-30T03:33:24.842609744Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:33:24.842728 containerd[1463]: time="2025-04-30T03:33:24.842700624Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:33:24.842756 containerd[1463]: time="2025-04-30T03:33:24.842724559Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Apr 30 03:33:24.842756 containerd[1463]: time="2025-04-30T03:33:24.842743425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:33:24.842804 containerd[1463]: time="2025-04-30T03:33:24.842757190Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:33:24.842952 containerd[1463]: time="2025-04-30T03:33:24.842930395Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:33:24.843217 containerd[1463]: time="2025-04-30T03:33:24.843189641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:33:24.843366 containerd[1463]: time="2025-04-30T03:33:24.843340364Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:33:24.843397 containerd[1463]: time="2025-04-30T03:33:24.843366032Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:33:24.843397 containerd[1463]: time="2025-04-30T03:33:24.843384136Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:33:24.843442 containerd[1463]: time="2025-04-30T03:33:24.843405506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843442 containerd[1463]: time="2025-04-30T03:33:24.843423180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843495 containerd[1463]: time="2025-04-30T03:33:24.843443177Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843495 containerd[1463]: time="2025-04-30T03:33:24.843461061Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843495 containerd[1463]: time="2025-04-30T03:33:24.843480758Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843597 containerd[1463]: time="2025-04-30T03:33:24.843497699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843597 containerd[1463]: time="2025-04-30T03:33:24.843526073Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843597 containerd[1463]: time="2025-04-30T03:33:24.843543635Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:33:24.843597 containerd[1463]: time="2025-04-30T03:33:24.843565917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843597416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843615190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843629717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843644935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843662478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843678368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843698 containerd[1463]: time="2025-04-30T03:33:24.843694979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843712592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843731007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843745474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843760653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843777985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843819804Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:33:24.843861 containerd[1463]: time="2025-04-30T03:33:24.843851994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.844023 containerd[1463]: time="2025-04-30T03:33:24.843881129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.844023 containerd[1463]: time="2025-04-30T03:33:24.843899413Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:33:24.844023 containerd[1463]: time="2025-04-30T03:33:24.843986526Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:33:24.844023 containerd[1463]: time="2025-04-30T03:33:24.844012595Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:33:24.844121 containerd[1463]: time="2025-04-30T03:33:24.844027182Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:33:24.844121 containerd[1463]: time="2025-04-30T03:33:24.844044164Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:33:24.844121 containerd[1463]: time="2025-04-30T03:33:24.844057810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.844121 containerd[1463]: time="2025-04-30T03:33:24.844079080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 03:33:24.844121 containerd[1463]: time="2025-04-30T03:33:24.844093026Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:33:24.844121 containerd[1463]: time="2025-04-30T03:33:24.844106461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:33:24.844481 containerd[1463]: time="2025-04-30T03:33:24.844427193Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:33:24.844680 containerd[1463]: time="2025-04-30T03:33:24.844502003Z" level=info msg="Connect containerd service" Apr 30 03:33:24.844680 containerd[1463]: time="2025-04-30T03:33:24.844560803Z" level=info msg="using legacy CRI server" Apr 30 03:33:24.844680 containerd[1463]: time="2025-04-30T03:33:24.844591711Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:33:24.844759 containerd[1463]: time="2025-04-30T03:33:24.844740290Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:33:24.845642 
containerd[1463]: time="2025-04-30T03:33:24.845480038Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:33:24.845790 containerd[1463]: time="2025-04-30T03:33:24.845698738Z" level=info msg="Start subscribing containerd event" Apr 30 03:33:24.845790 containerd[1463]: time="2025-04-30T03:33:24.845776243Z" level=info msg="Start recovering state" Apr 30 03:33:24.845981 containerd[1463]: time="2025-04-30T03:33:24.845851454Z" level=info msg="Start event monitor" Apr 30 03:33:24.845981 containerd[1463]: time="2025-04-30T03:33:24.845885618Z" level=info msg="Start snapshots syncer" Apr 30 03:33:24.845981 containerd[1463]: time="2025-04-30T03:33:24.845902500Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:33:24.845981 containerd[1463]: time="2025-04-30T03:33:24.845914643Z" level=info msg="Start streaming server" Apr 30 03:33:24.848812 containerd[1463]: time="2025-04-30T03:33:24.848780819Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:33:24.848863 containerd[1463]: time="2025-04-30T03:33:24.848843206Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:33:24.849048 containerd[1463]: time="2025-04-30T03:33:24.848923917Z" level=info msg="containerd successfully booted in 0.043534s" Apr 30 03:33:24.849073 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:33:25.027201 tar[1456]: linux-amd64/LICENSE Apr 30 03:33:25.027346 tar[1456]: linux-amd64/README.md Apr 30 03:33:25.043202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:33:25.631824 systemd-networkd[1393]: eth0: Gained IPv6LL Apr 30 03:33:25.635460 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:33:25.637978 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:33:25.652050 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 03:33:25.655188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:33:25.657767 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:33:25.678938 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 03:33:25.679271 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 03:33:25.681283 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:33:25.685291 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:33:27.017234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:33:27.018979 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:33:27.020295 systemd[1]: Startup finished in 1.065s (kernel) + 6.030s (initrd) + 5.684s (userspace) = 12.779s. 
Apr 30 03:33:27.043145 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:33:27.948649 kubelet[1548]: E0430 03:33:27.948552 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:33:27.953860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:33:27.954104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:33:27.954531 systemd[1]: kubelet.service: Consumed 2.132s CPU time. Apr 30 03:33:28.441540 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:33:28.443003 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:47332.service - OpenSSH per-connection server daemon (10.0.0.1:47332). Apr 30 03:33:28.491305 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 47332 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:28.493640 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:28.505278 systemd-logind[1444]: New session 1 of user core. Apr 30 03:33:28.507120 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:33:28.516946 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:33:28.532028 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:33:28.535325 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:33:28.545958 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:33:28.675740 systemd[1566]: Queued start job for default target default.target. Apr 30 03:33:28.688012 systemd[1566]: Created slice app.slice - User Application Slice. Apr 30 03:33:28.688042 systemd[1566]: Reached target paths.target - Paths. Apr 30 03:33:28.688056 systemd[1566]: Reached target timers.target - Timers. Apr 30 03:33:28.689869 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:33:28.701932 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:33:28.702101 systemd[1566]: Reached target sockets.target - Sockets. Apr 30 03:33:28.702122 systemd[1566]: Reached target basic.target - Basic System. Apr 30 03:33:28.702179 systemd[1566]: Reached target default.target - Main User Target. Apr 30 03:33:28.702217 systemd[1566]: Startup finished in 147ms. Apr 30 03:33:28.702574 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:33:28.704236 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:33:28.766503 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:47340.service - OpenSSH per-connection server daemon (10.0.0.1:47340). Apr 30 03:33:28.805279 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 47340 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:28.807350 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:28.812169 systemd-logind[1444]: New session 2 of user core. Apr 30 03:33:28.822752 systemd[1]: Started session-2.scope - Session 2 of User core. 
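[editor's note] The kubelet exit earlier in this stretch of the log is a plain missing-file failure: /var/lib/kubelet/config.yaml does not exist yet, so run.go aborts and systemd records status=1/FAILURE. That file is typically written when kubeadm initializes or joins the node, which has not happened at this point. A hypothetical pre-flight check sketch, assuming only the path shown in the error message:

```go
package main

import (
	"fmt"
	"os"
)

// kubeletConfig is the path the kubelet error message above complains about.
const kubeletConfig = "/var/lib/kubelet/config.yaml"

func main() {
	if _, err := os.Stat(kubeletConfig); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("%s is missing; kubelet will keep exiting 1 until kubeadm (or equivalent) writes it\n", kubeletConfig)
			os.Exit(1)
		}
		fmt.Printf("could not stat %s: %v\n", kubeletConfig, err)
		os.Exit(1)
	}
	fmt.Printf("%s present; kubelet should get past config loading\n", kubeletConfig)
}
```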
Apr 30 03:33:28.879680 sshd[1577]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:28.897851 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:47340.service: Deactivated successfully. Apr 30 03:33:28.899864 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:33:28.901308 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:33:28.902731 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:47348.service - OpenSSH per-connection server daemon (10.0.0.1:47348). Apr 30 03:33:28.903763 systemd-logind[1444]: Removed session 2. Apr 30 03:33:28.944652 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 47348 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:28.946336 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:28.950626 systemd-logind[1444]: New session 3 of user core. Apr 30 03:33:28.960694 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:33:29.011864 sshd[1584]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:29.026848 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:47348.service: Deactivated successfully. Apr 30 03:33:29.028796 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:33:29.030510 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:33:29.040851 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:47356.service - OpenSSH per-connection server daemon (10.0.0.1:47356). Apr 30 03:33:29.042123 systemd-logind[1444]: Removed session 3. Apr 30 03:33:29.069996 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 47356 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:29.071642 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:29.075831 systemd-logind[1444]: New session 4 of user core. Apr 30 03:33:29.086809 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:33:29.143179 sshd[1591]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:29.152598 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:47356.service: Deactivated successfully. Apr 30 03:33:29.154524 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:33:29.156269 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:33:29.171850 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:47360.service - OpenSSH per-connection server daemon (10.0.0.1:47360). Apr 30 03:33:29.172897 systemd-logind[1444]: Removed session 4. Apr 30 03:33:29.201801 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 47360 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:29.203464 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:29.207734 systemd-logind[1444]: New session 5 of user core. Apr 30 03:33:29.216694 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:33:29.274778 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:33:29.275197 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:33:29.293669 sudo[1601]: pam_unix(sudo:session): session closed for user root Apr 30 03:33:29.295737 sshd[1598]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:29.308403 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:47360.service: Deactivated successfully. 
Apr 30 03:33:29.310118 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:33:29.311613 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:33:29.320812 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:47376.service - OpenSSH per-connection server daemon (10.0.0.1:47376). Apr 30 03:33:29.321558 systemd-logind[1444]: Removed session 5. Apr 30 03:33:29.351827 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 47376 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:29.353424 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:29.357160 systemd-logind[1444]: New session 6 of user core. Apr 30 03:33:29.367694 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:33:29.423610 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:33:29.423951 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:33:29.428511 sudo[1610]: pam_unix(sudo:session): session closed for user root Apr 30 03:33:29.437092 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:33:29.437544 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:33:29.460899 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:33:29.463016 auditctl[1613]: No rules Apr 30 03:33:29.464448 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:33:29.464781 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:33:29.466878 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:33:29.502208 augenrules[1631]: No rules Apr 30 03:33:29.504292 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:33:29.505855 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 30 03:33:29.507873 sshd[1606]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:29.514875 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:47376.service: Deactivated successfully. Apr 30 03:33:29.516905 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:33:29.519161 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:33:29.529956 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:47390.service - OpenSSH per-connection server daemon (10.0.0.1:47390). Apr 30 03:33:29.531204 systemd-logind[1444]: Removed session 6. Apr 30 03:33:29.560696 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 47390 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:33:29.562276 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:29.566781 systemd-logind[1444]: New session 7 of user core. Apr 30 03:33:29.584902 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:33:29.640701 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:33:29.641124 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:33:30.093902 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 30 03:33:30.093997 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:33:30.378574 dockerd[1660]: time="2025-04-30T03:33:30.378412257Z" level=info msg="Starting up" Apr 30 03:33:30.814313 dockerd[1660]: time="2025-04-30T03:33:30.814189841Z" level=info msg="Loading containers: start." Apr 30 03:33:30.932607 kernel: Initializing XFRM netlink socket Apr 30 03:33:31.006222 systemd-networkd[1393]: docker0: Link UP Apr 30 03:33:31.035231 dockerd[1660]: time="2025-04-30T03:33:31.035162883Z" level=info msg="Loading containers: done." Apr 30 03:33:31.049952 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck615298736-merged.mount: Deactivated successfully. Apr 30 03:33:31.052918 dockerd[1660]: time="2025-04-30T03:33:31.052865318Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:33:31.053013 dockerd[1660]: time="2025-04-30T03:33:31.052987227Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:33:31.053141 dockerd[1660]: time="2025-04-30T03:33:31.053117441Z" level=info msg="Daemon has completed initialization" Apr 30 03:33:31.093983 dockerd[1660]: time="2025-04-30T03:33:31.093387757Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:33:31.093732 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:33:31.801802 containerd[1463]: time="2025-04-30T03:33:31.801752246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 03:33:32.588720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3334580871.mount: Deactivated successfully. 
Apr 30 03:33:33.518558 containerd[1463]: time="2025-04-30T03:33:33.518481499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:33.519633 containerd[1463]: time="2025-04-30T03:33:33.519155293Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" Apr 30 03:33:33.520407 containerd[1463]: time="2025-04-30T03:33:33.520341839Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:33.523487 containerd[1463]: time="2025-04-30T03:33:33.523436734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:33.524685 containerd[1463]: time="2025-04-30T03:33:33.524645581Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.722845426s" Apr 30 03:33:33.524720 containerd[1463]: time="2025-04-30T03:33:33.524695425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Apr 30 03:33:33.526168 containerd[1463]: time="2025-04-30T03:33:33.526133623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 03:33:35.901437 containerd[1463]: time="2025-04-30T03:33:35.901342355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:35.907868 containerd[1463]: time="2025-04-30T03:33:35.907811059Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" Apr 30 03:33:35.950452 containerd[1463]: time="2025-04-30T03:33:35.950383793Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:35.964502 containerd[1463]: time="2025-04-30T03:33:35.964437694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:35.965544 containerd[1463]: time="2025-04-30T03:33:35.965503884Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.43933215s" Apr 30 03:33:35.965650 containerd[1463]: time="2025-04-30T03:33:35.965544600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Apr 30 03:33:35.966041 
containerd[1463]: time="2025-04-30T03:33:35.966011536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 03:33:37.712268 containerd[1463]: time="2025-04-30T03:33:37.712168397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:37.712915 containerd[1463]: time="2025-04-30T03:33:37.712851018Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" Apr 30 03:33:37.714180 containerd[1463]: time="2025-04-30T03:33:37.714132692Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:37.717861 containerd[1463]: time="2025-04-30T03:33:37.717829135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:37.719070 containerd[1463]: time="2025-04-30T03:33:37.719014729Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.752968728s" Apr 30 03:33:37.719117 containerd[1463]: time="2025-04-30T03:33:37.719070644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Apr 30 03:33:37.719665 containerd[1463]: time="2025-04-30T03:33:37.719633830Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 03:33:38.204375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:33:38.218794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:33:38.413543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:33:38.419012 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:33:38.468189 kubelet[1878]: E0430 03:33:38.467940 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:33:38.474845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:33:38.475090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:33:39.658891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275496657.mount: Deactivated successfully. 
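[editor's note] The containerd pull messages above pair an image size with a wall-clock duration (for example 27957787 bytes in 1.722845426s for kube-apiserver), so a rough effective pull rate can be read straight off the log. A small sketch doing that arithmetic for the pulls logged so far; the values are copied from the log, and the rate is only approximate since the reported size is the packed image size rather than the exact bytes transferred:

```go
package main

import "fmt"

func main() {
	// Size (bytes) and duration (seconds) pairs copied from the
	// containerd "Pulled image ... in ..." messages above.
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"kube-apiserver:v1.31.8", 27957787, 1.722845426},
		{"kube-controller-manager:v1.31.8", 26202149, 2.43933215},
		{"kube-scheduler:v1.31.8", 20268777, 1.752968728},
	}
	for _, p := range pulls {
		rate := p.bytes / p.seconds / (1 << 20) // MiB per second
		fmt.Printf("%-35s ~%.1f MiB/s\n", p.image, rate)
	}
}
```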
Apr 30 03:33:40.828083 containerd[1463]: time="2025-04-30T03:33:40.827970521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:40.853659 containerd[1463]: time="2025-04-30T03:33:40.853539461Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" Apr 30 03:33:40.901871 containerd[1463]: time="2025-04-30T03:33:40.901780127Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:40.921747 containerd[1463]: time="2025-04-30T03:33:40.921670406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:40.922349 containerd[1463]: time="2025-04-30T03:33:40.922311127Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.202638394s" Apr 30 03:33:40.922396 containerd[1463]: time="2025-04-30T03:33:40.922347536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 03:33:40.923006 containerd[1463]: time="2025-04-30T03:33:40.922970474Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:33:41.851301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235108134.mount: Deactivated successfully. 
Apr 30 03:33:43.600660 containerd[1463]: time="2025-04-30T03:33:43.600545981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:43.601457 containerd[1463]: time="2025-04-30T03:33:43.601396075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:33:43.603206 containerd[1463]: time="2025-04-30T03:33:43.603118025Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:43.608293 containerd[1463]: time="2025-04-30T03:33:43.606945755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:43.609910 containerd[1463]: time="2025-04-30T03:33:43.609849090Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.68684321s" Apr 30 03:33:43.609974 containerd[1463]: time="2025-04-30T03:33:43.609912139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:33:43.610754 containerd[1463]: time="2025-04-30T03:33:43.610724523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 03:33:44.188489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719335984.mount: Deactivated successfully. 
Apr 30 03:33:44.195156 containerd[1463]: time="2025-04-30T03:33:44.195092891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:44.195821 containerd[1463]: time="2025-04-30T03:33:44.195759952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 30 03:33:44.196908 containerd[1463]: time="2025-04-30T03:33:44.196872579Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:44.199142 containerd[1463]: time="2025-04-30T03:33:44.199106279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:44.199877 containerd[1463]: time="2025-04-30T03:33:44.199835456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 589.069366ms" Apr 30 03:33:44.199877 containerd[1463]: time="2025-04-30T03:33:44.199871544Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 03:33:44.200462 containerd[1463]: time="2025-04-30T03:33:44.200434590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 03:33:44.709032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243808214.mount: Deactivated successfully. Apr 30 03:33:46.764766 containerd[1463]: time="2025-04-30T03:33:46.764692455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:46.765742 containerd[1463]: time="2025-04-30T03:33:46.765623271Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Apr 30 03:33:46.767563 containerd[1463]: time="2025-04-30T03:33:46.767530108Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:46.771562 containerd[1463]: time="2025-04-30T03:33:46.771499433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:46.772902 containerd[1463]: time="2025-04-30T03:33:46.772811895Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.572330357s" Apr 30 03:33:46.772967 containerd[1463]: time="2025-04-30T03:33:46.772895642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Apr 30 03:33:48.725451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 30 03:33:48.734746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:33:48.919359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:33:48.925893 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:33:49.065059 kubelet[2027]: E0430 03:33:49.064823 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:33:49.070042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:33:49.070271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:33:49.148877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:33:49.156901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:33:49.183899 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit session-7.scope)... Apr 30 03:33:49.183919 systemd[1]: Reloading... Apr 30 03:33:49.276298 zram_generator::config[2082]: No configuration found. Apr 30 03:33:49.699883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:33:49.784483 systemd[1]: Reloading finished in 600 ms. Apr 30 03:33:49.841725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:33:49.846178 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:33:49.846470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:33:49.848367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:33:50.012819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:33:50.017572 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:33:50.067457 kubelet[2132]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:33:50.067457 kubelet[2132]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:33:50.067457 kubelet[2132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:33:50.068017 kubelet[2132]: I0430 03:33:50.067509 2132 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:33:50.508787 kubelet[2132]: I0430 03:33:50.508630 2132 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 03:33:50.508787 kubelet[2132]: I0430 03:33:50.508668 2132 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:33:50.508987 kubelet[2132]: I0430 03:33:50.508968 2132 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 03:33:50.541804 kubelet[2132]: I0430 03:33:50.541719 2132 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:33:50.542321 kubelet[2132]: E0430 03:33:50.542273 2132 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:50.551628 kubelet[2132]: E0430 03:33:50.551538 2132 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:33:50.551628 kubelet[2132]: I0430 03:33:50.551617 2132 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:33:50.559550 kubelet[2132]: I0430 03:33:50.559506 2132 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:33:50.560816 kubelet[2132]: I0430 03:33:50.560774 2132 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 03:33:50.561035 kubelet[2132]: I0430 03:33:50.560981 2132 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:33:50.561368 kubelet[2132]: I0430 03:33:50.561030 2132 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:33:50.561460 kubelet[2132]: I0430 03:33:50.561380 2132 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:33:50.561460 kubelet[2132]: I0430 03:33:50.561391 2132 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 03:33:50.561563 kubelet[2132]: I0430 03:33:50.561539 2132 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:33:50.563638 kubelet[2132]: I0430 03:33:50.563609 2132 kubelet.go:408] "Attempting to sync node with API server" Apr 30 03:33:50.563638 kubelet[2132]: I0430 03:33:50.563632 2132 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:33:50.563707 kubelet[2132]: I0430 03:33:50.563687 2132 kubelet.go:314] "Adding apiserver pod source" Apr 30 03:33:50.563740 kubelet[2132]: I0430 03:33:50.563717 2132 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:33:50.568403 kubelet[2132]: I0430 03:33:50.568358 2132 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:33:50.570960 kubelet[2132]: I0430 03:33:50.570937 2132 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:33:50.572295 kubelet[2132]: W0430 03:33:50.572098 2132 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
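[editor's note] The container-manager dump above lists the kubelet's hard-eviction thresholds in effect here: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and 5% for the two inodesFree signals. A minimal sketch of how such a threshold is evaluated, with made-up capacity/available figures purely for illustration (this is not the kubelet's actual code, just the shape of the check):

```go
package main

import "fmt"

// threshold mirrors the logged HardEvictionThresholds entries: either an
// absolute quantity in bytes or a percentage of capacity triggers eviction.
type threshold struct {
	signal     string
	quantity   float64 // absolute bytes; zero when the percentage form is used
	percentage float64 // fraction of capacity; zero when the quantity form is used
}

func breached(t threshold, capacity, available float64) bool {
	limit := t.quantity
	if t.percentage > 0 {
		limit = t.percentage * capacity
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi, from the log
	nodefs := threshold{signal: "nodefs.available", percentage: 0.10}    // 10%, from the log

	// Hypothetical node figures, just to exercise the check.
	fmt.Println(breached(memory, 8<<30, 512<<20)) // false: 512Mi available is above 100Mi
	fmt.Println(breached(nodefs, 7.1e9, 5e8))     // true: ~0.5 GB is below 10% of ~7.1 GB
}
```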
Apr 30 03:33:50.572295 kubelet[2132]: W0430 03:33:50.572167 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:50.572295 kubelet[2132]: E0430 03:33:50.572237 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:50.572424 kubelet[2132]: W0430 03:33:50.572341 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:50.572913 kubelet[2132]: E0430 03:33:50.572656 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:50.573461 kubelet[2132]: I0430 03:33:50.573431 2132 server.go:1269] "Started kubelet" Apr 30 03:33:50.575048 kubelet[2132]: I0430 03:33:50.574892 2132 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:33:50.575048 kubelet[2132]: I0430 03:33:50.575003 2132 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:33:50.576026 kubelet[2132]: I0430 03:33:50.575421 2132 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:33:50.576179 kubelet[2132]: I0430 03:33:50.576148 2132 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:33:50.578113 kubelet[2132]: I0430 03:33:50.577411 2132 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:33:50.578281 kubelet[2132]: I0430 03:33:50.578248 2132 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 03:33:50.579163 kubelet[2132]: I0430 03:33:50.579149 2132 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 03:33:50.579239 kubelet[2132]: I0430 03:33:50.579234 2132 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:33:50.579595 kubelet[2132]: E0430 03:33:50.579151 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:50.581222 kubelet[2132]: I0430 03:33:50.580693 2132 server.go:460] "Adding debug handlers to kubelet server" Apr 30 03:33:50.581222 kubelet[2132]: W0430 03:33:50.580707 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:50.581222 kubelet[2132]: E0430 03:33:50.580761 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:50.581222 kubelet[2132]: E0430 03:33:50.580833 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="200ms" Apr 30 03:33:50.581671 kubelet[2132]: I0430 03:33:50.581595 2132 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:33:50.582414 kubelet[2132]: I0430 03:33:50.582378 2132 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:33:50.582894 kubelet[2132]: E0430 03:33:50.582871 2132 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:33:50.584764 kubelet[2132]: I0430 03:33:50.584743 2132 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:33:50.584888 kubelet[2132]: E0430 03:33:50.582366 2132 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183afb39a5d518b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 03:33:50.573402296 +0000 UTC m=+0.551531596,LastTimestamp:2025-04-30 03:33:50.573402296 +0000 UTC m=+0.551531596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 03:33:50.597743 kubelet[2132]: I0430 03:33:50.597678 2132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:33:50.598002 kubelet[2132]: I0430 03:33:50.597980 2132 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:33:50.598002 kubelet[2132]: I0430 03:33:50.597998 2132 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:33:50.598054 kubelet[2132]: I0430 03:33:50.598018 2132 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:33:50.599904 kubelet[2132]: I0430 03:33:50.599869 2132 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:33:50.599971 kubelet[2132]: I0430 03:33:50.599948 2132 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:33:50.600241 kubelet[2132]: I0430 03:33:50.599996 2132 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 03:33:50.600241 kubelet[2132]: E0430 03:33:50.600049 2132 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:33:50.600987 kubelet[2132]: W0430 03:33:50.600928 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:50.601026 kubelet[2132]: E0430 03:33:50.601000 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:50.680345 kubelet[2132]: E0430 03:33:50.680269 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:50.700696 kubelet[2132]: E0430 03:33:50.700631 2132 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:33:50.781338 kubelet[2132]: E0430 03:33:50.781163 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:50.781720 kubelet[2132]: E0430 03:33:50.781672 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="400ms" Apr 30 03:33:50.875197 kubelet[2132]: I0430 03:33:50.875119 2132 policy_none.go:49] "None policy: Start" Apr 30 03:33:50.876204 kubelet[2132]: I0430 03:33:50.876184 2132 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:33:50.876300 kubelet[2132]: I0430 03:33:50.876213 2132 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:33:50.882094 kubelet[2132]: E0430 03:33:50.882051 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:50.900862 kubelet[2132]: E0430 03:33:50.900822 2132 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:33:50.905735 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:33:50.917533 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:33:50.921310 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 03:33:50.934673 kubelet[2132]: I0430 03:33:50.934617 2132 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:33:50.934963 kubelet[2132]: I0430 03:33:50.934937 2132 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:33:50.935057 kubelet[2132]: I0430 03:33:50.934959 2132 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:33:50.935603 kubelet[2132]: I0430 03:33:50.935309 2132 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:33:50.970783 kubelet[2132]: E0430 03:33:50.970737 2132 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 03:33:51.039876 kubelet[2132]: I0430 03:33:51.039841 2132 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:33:51.040354 kubelet[2132]: E0430 03:33:51.040302 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Apr 30 03:33:51.182710 kubelet[2132]: E0430 03:33:51.182640 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="800ms" Apr 30 03:33:51.242844 kubelet[2132]: I0430 03:33:51.242794 2132 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:33:51.243439 kubelet[2132]: E0430 03:33:51.243372 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Apr 30 03:33:51.311990 systemd[1]: Created slice kubepods-burstable-pod2e089345f6f4833443043cbc2bcd7c29.slice - libcontainer container kubepods-burstable-pod2e089345f6f4833443043cbc2bcd7c29.slice. Apr 30 03:33:51.324729 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. Apr 30 03:33:51.343476 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
Apr 30 03:33:51.385472 kubelet[2132]: I0430 03:33:51.385404 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:33:51.385472 kubelet[2132]: I0430 03:33:51.385469 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e089345f6f4833443043cbc2bcd7c29-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e089345f6f4833443043cbc2bcd7c29\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:33:51.385472 kubelet[2132]: I0430 03:33:51.385499 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e089345f6f4833443043cbc2bcd7c29-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e089345f6f4833443043cbc2bcd7c29\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:33:51.385747 kubelet[2132]: I0430 03:33:51.385525 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:33:51.385747 kubelet[2132]: I0430 03:33:51.385550 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:33:51.385747 kubelet[2132]: I0430 03:33:51.385571 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:33:51.385747 kubelet[2132]: I0430 03:33:51.385622 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 03:33:51.385747 kubelet[2132]: I0430 03:33:51.385643 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e089345f6f4833443043cbc2bcd7c29-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e089345f6f4833443043cbc2bcd7c29\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:33:51.385869 kubelet[2132]: I0430 03:33:51.385664 2132 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 30 03:33:51.486339 kubelet[2132]: W0430 03:33:51.486260 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:51.486339 kubelet[2132]: E0430 03:33:51.486331 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:51.498070 kubelet[2132]: W0430 03:33:51.498018 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:51.498070 kubelet[2132]: E0430 03:33:51.498063 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:51.623170 kubelet[2132]: E0430 03:33:51.623013 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:51.623952 containerd[1463]: time="2025-04-30T03:33:51.623891503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e089345f6f4833443043cbc2bcd7c29,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:51.641182 kubelet[2132]: E0430 03:33:51.641126 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:51.643872 containerd[1463]: time="2025-04-30T03:33:51.643825324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:51.644685 kubelet[2132]: I0430 03:33:51.644668 2132 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:33:51.645060 kubelet[2132]: E0430 03:33:51.645039 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Apr 30 03:33:51.647265 kubelet[2132]: E0430 03:33:51.647233 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:51.647566 containerd[1463]: time="2025-04-30T03:33:51.647534000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:51.975251 kubelet[2132]: W0430 03:33:51.975063 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial 
tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:51.975251 kubelet[2132]: E0430 03:33:51.975165 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:51.984278 kubelet[2132]: E0430 03:33:51.984200 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="1.6s" Apr 30 03:33:52.069488 kubelet[2132]: W0430 03:33:52.069415 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:52.069488 kubelet[2132]: E0430 03:33:52.069484 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:52.446844 kubelet[2132]: I0430 03:33:52.446773 2132 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:33:52.447409 kubelet[2132]: E0430 03:33:52.447365 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Apr 30 03:33:52.584867 kubelet[2132]: E0430 03:33:52.584695 2132 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183afb39a5d518b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 03:33:50.573402296 +0000 UTC m=+0.551531596,LastTimestamp:2025-04-30 03:33:50.573402296 +0000 UTC m=+0.551531596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 03:33:52.598781 kubelet[2132]: E0430 03:33:52.598723 2132 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:53.585151 kubelet[2132]: E0430 03:33:53.585076 2132 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="3.2s" Apr 30 03:33:53.904056 kubelet[2132]: W0430 
03:33:53.903932 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:53.904056 kubelet[2132]: E0430 03:33:53.903981 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:53.939215 kubelet[2132]: W0430 03:33:53.939139 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:53.939215 kubelet[2132]: E0430 03:33:53.939215 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:54.115254 kubelet[2132]: I0430 03:33:54.115187 2132 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:33:54.115903 kubelet[2132]: E0430 03:33:54.115844 2132 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Apr 30 03:33:54.252972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2727676494.mount: Deactivated successfully. 
Apr 30 03:33:54.261173 kubelet[2132]: W0430 03:33:54.261120 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:54.261265 kubelet[2132]: E0430 03:33:54.261185 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:54.401713 containerd[1463]: time="2025-04-30T03:33:54.401625168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:33:54.414854 containerd[1463]: time="2025-04-30T03:33:54.414762129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:33:54.420318 containerd[1463]: time="2025-04-30T03:33:54.420257286Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:33:54.431031 containerd[1463]: time="2025-04-30T03:33:54.430623661Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:33:54.461781 containerd[1463]: time="2025-04-30T03:33:54.461713206Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:33:54.474786 containerd[1463]: time="2025-04-30T03:33:54.474676221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:33:54.495989 containerd[1463]: time="2025-04-30T03:33:54.495912063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:33:54.496861 containerd[1463]: time="2025-04-30T03:33:54.496792434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.872784874s" Apr 30 03:33:54.516925 containerd[1463]: time="2025-04-30T03:33:54.516736313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:33:54.547683 kubelet[2132]: W0430 03:33:54.547626 2132 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.146:6443: connect: connection refused Apr 30 03:33:54.547683 kubelet[2132]: E0430 03:33:54.547685 2132 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:33:54.619881 containerd[1463]: time="2025-04-30T03:33:54.619813767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.975880852s" Apr 30 03:33:54.620408 containerd[1463]: time="2025-04-30T03:33:54.620369269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.972762352s" Apr 30 03:33:54.882854 containerd[1463]: time="2025-04-30T03:33:54.882750541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:54.883503 containerd[1463]: time="2025-04-30T03:33:54.882897798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:54.886852 containerd[1463]: time="2025-04-30T03:33:54.885181221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:54.886852 containerd[1463]: time="2025-04-30T03:33:54.885291297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:54.888106 containerd[1463]: time="2025-04-30T03:33:54.888031517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:54.888106 containerd[1463]: time="2025-04-30T03:33:54.888079637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:54.888244 containerd[1463]: time="2025-04-30T03:33:54.888114162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:54.888404 containerd[1463]: time="2025-04-30T03:33:54.888354172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:54.891270 containerd[1463]: time="2025-04-30T03:33:54.890970339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:54.891270 containerd[1463]: time="2025-04-30T03:33:54.891031915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:54.891270 containerd[1463]: time="2025-04-30T03:33:54.891046001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:54.891270 containerd[1463]: time="2025-04-30T03:33:54.891111414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:54.955801 systemd[1]: Started cri-containerd-b48dc79fddd8b7ce15c0fc85d560dc29b026dd37538cd56556b1dbb60614cc21.scope - libcontainer container b48dc79fddd8b7ce15c0fc85d560dc29b026dd37538cd56556b1dbb60614cc21. Apr 30 03:33:54.961470 systemd[1]: Started cri-containerd-b239b471dcfebeca10f2a9832d97c05f28bcae1c0068c471539348612d66447b.scope - libcontainer container b239b471dcfebeca10f2a9832d97c05f28bcae1c0068c471539348612d66447b. Apr 30 03:33:54.964343 systemd[1]: Started cri-containerd-e186dd74049281e5292cc0f4e623f9065952c9f79210b13562a8788e481f8bcb.scope - libcontainer container e186dd74049281e5292cc0f4e623f9065952c9f79210b13562a8788e481f8bcb. Apr 30 03:33:55.019868 containerd[1463]: time="2025-04-30T03:33:55.019805307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48dc79fddd8b7ce15c0fc85d560dc29b026dd37538cd56556b1dbb60614cc21\"" Apr 30 03:33:55.022049 kubelet[2132]: E0430 03:33:55.022009 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:55.022462 containerd[1463]: time="2025-04-30T03:33:55.022305236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e089345f6f4833443043cbc2bcd7c29,Namespace:kube-system,Attempt:0,} returns sandbox id \"b239b471dcfebeca10f2a9832d97c05f28bcae1c0068c471539348612d66447b\"" Apr 30 03:33:55.023132 kubelet[2132]: E0430 03:33:55.023112 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:55.023234 containerd[1463]: time="2025-04-30T03:33:55.023122900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e186dd74049281e5292cc0f4e623f9065952c9f79210b13562a8788e481f8bcb\"" Apr 30 03:33:55.024649 kubelet[2132]: E0430 03:33:55.024626 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:55.025762 containerd[1463]: time="2025-04-30T03:33:55.025732815Z" level=info msg="CreateContainer within sandbox \"b239b471dcfebeca10f2a9832d97c05f28bcae1c0068c471539348612d66447b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:33:55.025958 containerd[1463]: time="2025-04-30T03:33:55.025747122Z" level=info msg="CreateContainer within sandbox \"b48dc79fddd8b7ce15c0fc85d560dc29b026dd37538cd56556b1dbb60614cc21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:33:55.027056 containerd[1463]: time="2025-04-30T03:33:55.027023907Z" level=info msg="CreateContainer within sandbox \"e186dd74049281e5292cc0f4e623f9065952c9f79210b13562a8788e481f8bcb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:33:55.056036 containerd[1463]: time="2025-04-30T03:33:55.055977365Z" level=info msg="CreateContainer within sandbox \"b48dc79fddd8b7ce15c0fc85d560dc29b026dd37538cd56556b1dbb60614cc21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8b5688eb40881ae366fb0153be7620a7e2288d22f8653c45c2899d66e76af3c\"" Apr 30 
03:33:55.056884 containerd[1463]: time="2025-04-30T03:33:55.056836317Z" level=info msg="StartContainer for \"e8b5688eb40881ae366fb0153be7620a7e2288d22f8653c45c2899d66e76af3c\"" Apr 30 03:33:55.064637 containerd[1463]: time="2025-04-30T03:33:55.064574702Z" level=info msg="CreateContainer within sandbox \"b239b471dcfebeca10f2a9832d97c05f28bcae1c0068c471539348612d66447b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6689527565a216caf2bcd0d7d67793eb5253f5037e592e8ed24bcd49c8121d6d\"" Apr 30 03:33:55.065141 containerd[1463]: time="2025-04-30T03:33:55.065122279Z" level=info msg="StartContainer for \"6689527565a216caf2bcd0d7d67793eb5253f5037e592e8ed24bcd49c8121d6d\"" Apr 30 03:33:55.066624 containerd[1463]: time="2025-04-30T03:33:55.066600431Z" level=info msg="CreateContainer within sandbox \"e186dd74049281e5292cc0f4e623f9065952c9f79210b13562a8788e481f8bcb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e269ee1bec83e447e5cc69b1ff039083d16fd6663bff4c6f54e832661d98c91\"" Apr 30 03:33:55.068031 containerd[1463]: time="2025-04-30T03:33:55.066895274Z" level=info msg="StartContainer for \"6e269ee1bec83e447e5cc69b1ff039083d16fd6663bff4c6f54e832661d98c91\"" Apr 30 03:33:55.089013 systemd[1]: Started cri-containerd-e8b5688eb40881ae366fb0153be7620a7e2288d22f8653c45c2899d66e76af3c.scope - libcontainer container e8b5688eb40881ae366fb0153be7620a7e2288d22f8653c45c2899d66e76af3c. Apr 30 03:33:55.094610 systemd[1]: Started cri-containerd-6e269ee1bec83e447e5cc69b1ff039083d16fd6663bff4c6f54e832661d98c91.scope - libcontainer container 6e269ee1bec83e447e5cc69b1ff039083d16fd6663bff4c6f54e832661d98c91. Apr 30 03:33:55.100276 systemd[1]: Started cri-containerd-6689527565a216caf2bcd0d7d67793eb5253f5037e592e8ed24bcd49c8121d6d.scope - libcontainer container 6689527565a216caf2bcd0d7d67793eb5253f5037e592e8ed24bcd49c8121d6d. 
Apr 30 03:33:55.169516 containerd[1463]: time="2025-04-30T03:33:55.169364818Z" level=info msg="StartContainer for \"6689527565a216caf2bcd0d7d67793eb5253f5037e592e8ed24bcd49c8121d6d\" returns successfully" Apr 30 03:33:55.169761 containerd[1463]: time="2025-04-30T03:33:55.169734551Z" level=info msg="StartContainer for \"e8b5688eb40881ae366fb0153be7620a7e2288d22f8653c45c2899d66e76af3c\" returns successfully" Apr 30 03:33:55.170710 containerd[1463]: time="2025-04-30T03:33:55.170678001Z" level=info msg="StartContainer for \"6e269ee1bec83e447e5cc69b1ff039083d16fd6663bff4c6f54e832661d98c91\" returns successfully" Apr 30 03:33:55.616758 kubelet[2132]: E0430 03:33:55.616702 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:55.622351 kubelet[2132]: E0430 03:33:55.621794 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:55.622351 kubelet[2132]: E0430 03:33:55.622145 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:56.623608 kubelet[2132]: E0430 03:33:56.623555 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:56.788460 kubelet[2132]: E0430 03:33:56.788414 2132 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 03:33:57.267211 kubelet[2132]: E0430 03:33:57.267140 2132 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 03:33:57.317293 kubelet[2132]: I0430 03:33:57.317255 2132 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:33:57.340670 kubelet[2132]: I0430 03:33:57.340603 2132 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 03:33:57.340670 kubelet[2132]: E0430 03:33:57.340652 2132 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 30 03:33:57.418979 kubelet[2132]: E0430 03:33:57.418926 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:57.519690 kubelet[2132]: E0430 03:33:57.519520 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:57.620728 kubelet[2132]: E0430 03:33:57.620667 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:57.624966 kubelet[2132]: E0430 03:33:57.624930 2132 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:33:57.721520 kubelet[2132]: E0430 03:33:57.721460 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:57.822213 kubelet[2132]: E0430 03:33:57.822111 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 30 03:33:57.922862 kubelet[2132]: E0430 03:33:57.922791 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:58.023616 kubelet[2132]: E0430 03:33:58.023544 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:58.123844 kubelet[2132]: E0430 03:33:58.123704 2132 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:33:58.568892 kubelet[2132]: I0430 03:33:58.568833 2132 apiserver.go:52] "Watching apiserver" Apr 30 03:33:58.580115 kubelet[2132]: I0430 03:33:58.580064 2132 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 03:33:59.981954 systemd[1]: Reloading requested from client PID 2409 ('systemctl') (unit session-7.scope)... Apr 30 03:33:59.981974 systemd[1]: Reloading... Apr 30 03:34:00.116649 zram_generator::config[2448]: No configuration found. Apr 30 03:34:00.250812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:34:00.348513 systemd[1]: Reloading finished in 366 ms. Apr 30 03:34:00.398107 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:34:00.419220 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:34:00.419552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:34:00.419632 systemd[1]: kubelet.service: Consumed 1.248s CPU time, 121.8M memory peak, 0B memory swap peak. Apr 30 03:34:00.426984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:34:00.587909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:34:00.593916 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:34:00.640472 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:34:00.640472 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:34:00.640472 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:34:00.641039 kubelet[2493]: I0430 03:34:00.640539 2493 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:34:00.648060 kubelet[2493]: I0430 03:34:00.647994 2493 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 03:34:00.648060 kubelet[2493]: I0430 03:34:00.648030 2493 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:34:00.648404 kubelet[2493]: I0430 03:34:00.648359 2493 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 03:34:00.650244 kubelet[2493]: I0430 03:34:00.650220 2493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:34:00.653753 kubelet[2493]: I0430 03:34:00.653275 2493 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:34:00.657813 kubelet[2493]: E0430 03:34:00.657744 2493 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:34:00.657813 kubelet[2493]: I0430 03:34:00.657809 2493 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:34:00.663835 kubelet[2493]: I0430 03:34:00.663777 2493 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:34:00.663983 kubelet[2493]: I0430 03:34:00.663950 2493 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 03:34:00.664179 kubelet[2493]: I0430 03:34:00.664120 2493 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:34:00.664421 kubelet[2493]: I0430 03:34:00.664163 2493 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:34:00.664421 kubelet[2493]: I0430 03:34:00.664419 2493 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:34:00.664604 kubelet[2493]: I0430 03:34:00.664433 2493 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 03:34:00.664604 kubelet[2493]: I0430 03:34:00.664478 2493 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:34:00.664672 kubelet[2493]: I0430 03:34:00.664664 2493 kubelet.go:408] "Attempting to sync node with API server" Apr 30 03:34:00.664704 kubelet[2493]: I0430 03:34:00.664683 2493 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:34:00.664732 kubelet[2493]: I0430 03:34:00.664725 2493 kubelet.go:314] "Adding apiserver pod source" Apr 30 03:34:00.664773 kubelet[2493]: I0430 03:34:00.664745 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:34:00.666777 kubelet[2493]: I0430 03:34:00.666741 2493 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:34:00.667265 kubelet[2493]: I0430 03:34:00.667238 2493 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:34:00.667800 kubelet[2493]: I0430 03:34:00.667776 2493 server.go:1269] "Started kubelet" Apr 30 03:34:00.668085 kubelet[2493]: I0430 03:34:00.668041 2493 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:34:00.668248 kubelet[2493]: I0430 03:34:00.668185 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:34:00.669600 kubelet[2493]: I0430 03:34:00.668563 2493 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:34:00.669600 kubelet[2493]: I0430 03:34:00.669211 2493 server.go:460] "Adding debug handlers to kubelet server" Apr 30 03:34:00.673619 kubelet[2493]: I0430 03:34:00.673393 2493 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:34:00.677055 kubelet[2493]: I0430 03:34:00.675259 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:34:00.677055 kubelet[2493]: I0430 03:34:00.676783 2493 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 03:34:00.677165 kubelet[2493]: E0430 03:34:00.677065 2493 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:34:00.678619 kubelet[2493]: I0430 03:34:00.678432 2493 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 03:34:00.679394 kubelet[2493]: I0430 03:34:00.679269 2493 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:34:00.684669 kubelet[2493]: I0430 03:34:00.683992 2493 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:34:00.684669 kubelet[2493]: I0430 03:34:00.684122 2493 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:34:00.686563 kubelet[2493]: I0430 03:34:00.686508 2493 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:34:00.690343 kubelet[2493]: E0430 03:34:00.688855 2493 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:34:00.697337 kubelet[2493]: I0430 03:34:00.697143 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:34:00.699325 kubelet[2493]: I0430 03:34:00.699278 2493 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:34:00.699325 kubelet[2493]: I0430 03:34:00.699324 2493 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:34:00.699498 kubelet[2493]: I0430 03:34:00.699345 2493 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 03:34:00.699498 kubelet[2493]: E0430 03:34:00.699392 2493 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:34:00.733879 kubelet[2493]: I0430 03:34:00.733836 2493 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:34:00.733879 kubelet[2493]: I0430 03:34:00.733856 2493 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:34:00.733879 kubelet[2493]: I0430 03:34:00.733883 2493 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:34:00.734057 kubelet[2493]: I0430 03:34:00.734043 2493 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:34:00.734079 kubelet[2493]: I0430 03:34:00.734057 2493 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:34:00.734100 kubelet[2493]: I0430 03:34:00.734083 2493 policy_none.go:49] "None policy: Start" Apr 30 03:34:00.734835 kubelet[2493]: I0430 03:34:00.734805 2493 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:34:00.734835 kubelet[2493]: I0430 03:34:00.734835 2493 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:34:00.735076 kubelet[2493]: I0430 03:34:00.735064 2493 state_mem.go:75] "Updated machine memory state" Apr 30 03:34:00.740571 kubelet[2493]: I0430 03:34:00.740529 2493 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:34:00.741489 kubelet[2493]: I0430 03:34:00.741463 2493 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:34:00.741531 kubelet[2493]: I0430 03:34:00.741482 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:34:00.741737 kubelet[2493]: I0430 03:34:00.741718 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:34:00.847645 kubelet[2493]: I0430 03:34:00.847477 2493 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 03:34:00.928787 kubelet[2493]: I0430 03:34:00.928723 2493 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Apr 30 03:34:00.928956 kubelet[2493]: I0430 03:34:00.928815 2493 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 03:34:01.025850 kubelet[2493]: I0430 03:34:01.025767 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:34:01.025850 kubelet[2493]: I0430 03:34:01.025832 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:34:01.026087 kubelet[2493]: I0430 03:34:01.025877 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:34:01.026087 kubelet[2493]: I0430 03:34:01.025908 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e089345f6f4833443043cbc2bcd7c29-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e089345f6f4833443043cbc2bcd7c29\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:34:01.026087 kubelet[2493]: I0430 03:34:01.025932 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e089345f6f4833443043cbc2bcd7c29-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e089345f6f4833443043cbc2bcd7c29\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:34:01.026087 kubelet[2493]: I0430 03:34:01.025992 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:34:01.026087 kubelet[2493]: I0430 03:34:01.026037 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:34:01.026248 kubelet[2493]: I0430 03:34:01.026055 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 03:34:01.026248 kubelet[2493]: I0430 03:34:01.026071 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e089345f6f4833443043cbc2bcd7c29-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e089345f6f4833443043cbc2bcd7c29\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:34:01.412558 kubelet[2493]: E0430 03:34:01.412439 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:01.412558 kubelet[2493]: E0430 03:34:01.412477 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:01.412558 kubelet[2493]: E0430 03:34:01.412518 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:01.665452 kubelet[2493]: I0430 03:34:01.665299 2493 apiserver.go:52] "Watching apiserver" Apr 30 03:34:01.679958 kubelet[2493]: I0430 03:34:01.679879 2493 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 03:34:01.709345 kubelet[2493]: E0430 03:34:01.709305 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:01.710236 kubelet[2493]: E0430 03:34:01.710220 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:01.748278 kubelet[2493]: E0430 03:34:01.745331 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:01.768219 kubelet[2493]: I0430 03:34:01.767985 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.76795786 podStartE2EDuration="1.76795786s" podCreationTimestamp="2025-04-30 03:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:34:01.766943065 +0000 UTC m=+1.165832030" watchObservedRunningTime="2025-04-30 03:34:01.76795786 +0000 UTC m=+1.166846805" Apr 30 03:34:01.832166 kubelet[2493]: I0430 03:34:01.832033 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.832007208 podStartE2EDuration="1.832007208s" podCreationTimestamp="2025-04-30 03:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:34:01.815345608 +0000 UTC m=+1.214234553" watchObservedRunningTime="2025-04-30 03:34:01.832007208 +0000 UTC m=+1.230896153" Apr 30 03:34:01.840690 kubelet[2493]: I0430 03:34:01.840613 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.840575491 podStartE2EDuration="1.840575491s" podCreationTimestamp="2025-04-30 03:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:34:01.832376652 +0000 UTC m=+1.231265607" watchObservedRunningTime="2025-04-30 03:34:01.840575491 +0000 UTC m=+1.239464436" Apr 30 03:34:02.710892 kubelet[2493]: E0430 03:34:02.710824 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:02.711400 kubelet[2493]: E0430 03:34:02.711362 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:03.715681 kubelet[2493]: E0430 03:34:03.715643 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:04.077099 kubelet[2493]: E0430 03:34:04.077060 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:04.670537 kubelet[2493]: I0430 03:34:04.670495 2493 kuberuntime_manager.go:1633] "Updating runtime config 
through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:34:04.671172 containerd[1463]: time="2025-04-30T03:34:04.671121784Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:34:04.671528 kubelet[2493]: I0430 03:34:04.671348 2493 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:34:05.535472 systemd[1]: Created slice kubepods-besteffort-pod7fb931de_6c82_4604_bdf8_b52dcd7c2765.slice - libcontainer container kubepods-besteffort-pod7fb931de_6c82_4604_bdf8_b52dcd7c2765.slice. Apr 30 03:34:05.571808 kubelet[2493]: I0430 03:34:05.571763 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7fb931de-6c82-4604-bdf8-b52dcd7c2765-kube-proxy\") pod \"kube-proxy-sx5nt\" (UID: \"7fb931de-6c82-4604-bdf8-b52dcd7c2765\") " pod="kube-system/kube-proxy-sx5nt" Apr 30 03:34:05.572253 kubelet[2493]: I0430 03:34:05.571810 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fb931de-6c82-4604-bdf8-b52dcd7c2765-lib-modules\") pod \"kube-proxy-sx5nt\" (UID: \"7fb931de-6c82-4604-bdf8-b52dcd7c2765\") " pod="kube-system/kube-proxy-sx5nt" Apr 30 03:34:05.572253 kubelet[2493]: I0430 03:34:05.571843 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgss8\" (UniqueName: \"kubernetes.io/projected/7fb931de-6c82-4604-bdf8-b52dcd7c2765-kube-api-access-cgss8\") pod \"kube-proxy-sx5nt\" (UID: \"7fb931de-6c82-4604-bdf8-b52dcd7c2765\") " pod="kube-system/kube-proxy-sx5nt" Apr 30 03:34:05.572253 kubelet[2493]: I0430 03:34:05.571871 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fb931de-6c82-4604-bdf8-b52dcd7c2765-xtables-lock\") pod \"kube-proxy-sx5nt\" (UID: \"7fb931de-6c82-4604-bdf8-b52dcd7c2765\") " pod="kube-system/kube-proxy-sx5nt" Apr 30 03:34:05.910613 kubelet[2493]: E0430 03:34:05.910540 2493 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 03:34:05.910613 kubelet[2493]: E0430 03:34:05.910608 2493 projected.go:194] Error preparing data for projected volume kube-api-access-cgss8 for pod kube-system/kube-proxy-sx5nt: configmap "kube-root-ca.crt" not found Apr 30 03:34:05.910801 kubelet[2493]: E0430 03:34:05.910696 2493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fb931de-6c82-4604-bdf8-b52dcd7c2765-kube-api-access-cgss8 podName:7fb931de-6c82-4604-bdf8-b52dcd7c2765 nodeName:}" failed. No retries permitted until 2025-04-30 03:34:06.410661287 +0000 UTC m=+5.809550232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cgss8" (UniqueName: "kubernetes.io/projected/7fb931de-6c82-4604-bdf8-b52dcd7c2765-kube-api-access-cgss8") pod "kube-proxy-sx5nt" (UID: "7fb931de-6c82-4604-bdf8-b52dcd7c2765") : configmap "kube-root-ca.crt" not found Apr 30 03:34:06.001378 systemd[1]: Created slice kubepods-besteffort-pod8f39aa90_215d_440a_9825_1423abd66484.slice - libcontainer container kubepods-besteffort-pod8f39aa90_215d_440a_9825_1423abd66484.slice. 
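The repeated dns.go:153 "Nameserver limits exceeded" entries in this log come from the kubelet's cap on resolver entries: it forwards at most three nameservers to pods (the classic glibc limit), and this node's resolv.conf evidently lists more, so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied and the rest are dropped. Below is a minimal Go sketch of that same check, assuming the conventional three-entry limit; the file path and names here are illustrative, not taken from the kubelet source.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the conventional three-resolver limit that the
// kubelet (and glibc) enforce; entries beyond it are dropped with a warning.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // node resolver config (assumed path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(nameservers) > maxNameservers {
		kept := nameservers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded: applying %q, dropping %d extra\n",
			strings.Join(kept, " "), len(nameservers)-maxNameservers)
		return
	}
	fmt.Printf("resolv.conf OK: %s\n", strings.Join(nameservers, " "))
}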
Apr 30 03:34:06.077044 kubelet[2493]: I0430 03:34:06.076969 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6m5\" (UniqueName: \"kubernetes.io/projected/8f39aa90-215d-440a-9825-1423abd66484-kube-api-access-cg6m5\") pod \"tigera-operator-6f6897fdc5-zsw6d\" (UID: \"8f39aa90-215d-440a-9825-1423abd66484\") " pod="tigera-operator/tigera-operator-6f6897fdc5-zsw6d" Apr 30 03:34:06.077044 kubelet[2493]: I0430 03:34:06.077043 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f39aa90-215d-440a-9825-1423abd66484-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-zsw6d\" (UID: \"8f39aa90-215d-440a-9825-1423abd66484\") " pod="tigera-operator/tigera-operator-6f6897fdc5-zsw6d" Apr 30 03:34:06.310986 containerd[1463]: time="2025-04-30T03:34:06.310920227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-zsw6d,Uid:8f39aa90-215d-440a-9825-1423abd66484,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:34:06.557645 sudo[1642]: pam_unix(sudo:session): session closed for user root Apr 30 03:34:06.562657 sshd[1639]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:06.567046 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:47390.service: Deactivated successfully. Apr 30 03:34:06.569292 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:34:06.569533 systemd[1]: session-7.scope: Consumed 4.727s CPU time, 157.6M memory peak, 0B memory swap peak. Apr 30 03:34:06.570032 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:34:06.571187 systemd-logind[1444]: Removed session 7. Apr 30 03:34:06.669322 containerd[1463]: time="2025-04-30T03:34:06.669238932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:34:06.669894 containerd[1463]: time="2025-04-30T03:34:06.669846154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:34:06.669894 containerd[1463]: time="2025-04-30T03:34:06.669868627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:06.670123 containerd[1463]: time="2025-04-30T03:34:06.669968947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:06.692743 systemd[1]: Started cri-containerd-6ed0161331688f8b6dacdf48a03ff16ca2b8bb771c4ee133a62184acc8909270.scope - libcontainer container 6ed0161331688f8b6dacdf48a03ff16ca2b8bb771c4ee133a62184acc8909270. 
Apr 30 03:34:06.734688 containerd[1463]: time="2025-04-30T03:34:06.734622304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-zsw6d,Uid:8f39aa90-215d-440a-9825-1423abd66484,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6ed0161331688f8b6dacdf48a03ff16ca2b8bb771c4ee133a62184acc8909270\"" Apr 30 03:34:06.736754 containerd[1463]: time="2025-04-30T03:34:06.736700808Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:34:06.747046 kubelet[2493]: E0430 03:34:06.746991 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:06.747549 containerd[1463]: time="2025-04-30T03:34:06.747514079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sx5nt,Uid:7fb931de-6c82-4604-bdf8-b52dcd7c2765,Namespace:kube-system,Attempt:0,}" Apr 30 03:34:06.812134 containerd[1463]: time="2025-04-30T03:34:06.811981775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:34:06.812134 containerd[1463]: time="2025-04-30T03:34:06.812069059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:34:06.812134 containerd[1463]: time="2025-04-30T03:34:06.812085341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:06.812370 containerd[1463]: time="2025-04-30T03:34:06.812183898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:06.831847 systemd[1]: Started cri-containerd-f38c6efadf4badae38b04d2ffce300b8198c91ebea97e1661e5875429a99b68c.scope - libcontainer container f38c6efadf4badae38b04d2ffce300b8198c91ebea97e1661e5875429a99b68c. Apr 30 03:34:06.856822 containerd[1463]: time="2025-04-30T03:34:06.856776263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sx5nt,Uid:7fb931de-6c82-4604-bdf8-b52dcd7c2765,Namespace:kube-system,Attempt:0,} returns sandbox id \"f38c6efadf4badae38b04d2ffce300b8198c91ebea97e1661e5875429a99b68c\"" Apr 30 03:34:06.857715 kubelet[2493]: E0430 03:34:06.857684 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:06.860165 containerd[1463]: time="2025-04-30T03:34:06.860113146Z" level=info msg="CreateContainer within sandbox \"f38c6efadf4badae38b04d2ffce300b8198c91ebea97e1661e5875429a99b68c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:34:06.889498 containerd[1463]: time="2025-04-30T03:34:06.889416697Z" level=info msg="CreateContainer within sandbox \"f38c6efadf4badae38b04d2ffce300b8198c91ebea97e1661e5875429a99b68c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60b745b86fab307e0367d0299652d6de166c53adfae7e31356d8cde154692ea4\"" Apr 30 03:34:06.890428 containerd[1463]: time="2025-04-30T03:34:06.890352964Z" level=info msg="StartContainer for \"60b745b86fab307e0367d0299652d6de166c53adfae7e31356d8cde154692ea4\"" Apr 30 03:34:06.921808 systemd[1]: Started cri-containerd-60b745b86fab307e0367d0299652d6de166c53adfae7e31356d8cde154692ea4.scope - libcontainer container 60b745b86fab307e0367d0299652d6de166c53adfae7e31356d8cde154692ea4. 
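The RunPodSandbox, CreateContainer, and StartContainer messages above are containerd's CRI plugin logging the calls it receives from the kubelet for each pod. The following Go sketch shows the shape of that call sequence against a CRI endpoint; it assumes the default containerd socket path, borrows the kube-proxy-sx5nt metadata from the log, and uses an illustrative image reference (the log does not record the kube-proxy image tag). It is a sketch of the protocol, not a production client.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI socket on a node like this one (assumed path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox metadata mirrors the kube-proxy-sx5nt entries in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-sx5nt",
			Uid:       "7fb931de-6c82-4604-bdf8-b52dcd7c2765",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}

	// RunPodSandbox -> CreateContainer -> StartContainer is the sequence
	// containerd logs above for each pod.
	sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Illustrative image reference; not taken from this log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s / container %s started", sandbox.PodSandboxId, created.ContainerId)
}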
Apr 30 03:34:07.075382 containerd[1463]: time="2025-04-30T03:34:07.075307304Z" level=info msg="StartContainer for \"60b745b86fab307e0367d0299652d6de166c53adfae7e31356d8cde154692ea4\" returns successfully" Apr 30 03:34:07.722859 kubelet[2493]: E0430 03:34:07.722820 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:09.055984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958702015.mount: Deactivated successfully. Apr 30 03:34:09.241693 update_engine[1448]: I20250430 03:34:09.241556 1448 update_attempter.cc:509] Updating boot flags... Apr 30 03:34:09.268680 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2841) Apr 30 03:34:09.322786 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2843) Apr 30 03:34:09.800843 containerd[1463]: time="2025-04-30T03:34:09.800769939Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:09.832175 containerd[1463]: time="2025-04-30T03:34:09.832070298Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:34:09.908237 containerd[1463]: time="2025-04-30T03:34:09.908171615Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:09.936300 containerd[1463]: time="2025-04-30T03:34:09.936222160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:09.937289 containerd[1463]: time="2025-04-30T03:34:09.937227964Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.200488943s" Apr 30 03:34:09.937350 containerd[1463]: time="2025-04-30T03:34:09.937287738Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:34:09.939974 containerd[1463]: time="2025-04-30T03:34:09.939938037Z" level=info msg="CreateContainer within sandbox \"6ed0161331688f8b6dacdf48a03ff16ca2b8bb771c4ee133a62184acc8909270\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:34:10.204956 containerd[1463]: time="2025-04-30T03:34:10.204755882Z" level=info msg="CreateContainer within sandbox \"6ed0161331688f8b6dacdf48a03ff16ca2b8bb771c4ee133a62184acc8909270\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"403784fa2eb50f1a560b299901b54633e5027b47cbe47cd5f75bdddfe0307372\"" Apr 30 03:34:10.205709 containerd[1463]: time="2025-04-30T03:34:10.205440588Z" level=info msg="StartContainer for \"403784fa2eb50f1a560b299901b54633e5027b47cbe47cd5f75bdddfe0307372\"" Apr 30 03:34:10.240838 systemd[1]: Started cri-containerd-403784fa2eb50f1a560b299901b54633e5027b47cbe47cd5f75bdddfe0307372.scope - libcontainer container 
403784fa2eb50f1a560b299901b54633e5027b47cbe47cd5f75bdddfe0307372. Apr 30 03:34:10.324559 containerd[1463]: time="2025-04-30T03:34:10.324509605Z" level=info msg="StartContainer for \"403784fa2eb50f1a560b299901b54633e5027b47cbe47cd5f75bdddfe0307372\" returns successfully" Apr 30 03:34:10.745526 kubelet[2493]: I0430 03:34:10.745442 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sx5nt" podStartSLOduration=5.745420043 podStartE2EDuration="5.745420043s" podCreationTimestamp="2025-04-30 03:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:34:07.834955121 +0000 UTC m=+7.233844066" watchObservedRunningTime="2025-04-30 03:34:10.745420043 +0000 UTC m=+10.144308988" Apr 30 03:34:12.174058 kubelet[2493]: E0430 03:34:12.174001 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:12.237177 kubelet[2493]: I0430 03:34:12.237091 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-zsw6d" podStartSLOduration=4.034952914 podStartE2EDuration="7.237064733s" podCreationTimestamp="2025-04-30 03:34:05 +0000 UTC" firstStartedPulling="2025-04-30 03:34:06.736245214 +0000 UTC m=+6.135134159" lastFinishedPulling="2025-04-30 03:34:09.938357033 +0000 UTC m=+9.337245978" observedRunningTime="2025-04-30 03:34:10.746293747 +0000 UTC m=+10.145182692" watchObservedRunningTime="2025-04-30 03:34:12.237064733 +0000 UTC m=+11.635953668" Apr 30 03:34:12.380451 kubelet[2493]: E0430 03:34:12.380408 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:12.739045 kubelet[2493]: E0430 03:34:12.739001 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:13.451814 systemd[1]: Created slice kubepods-besteffort-podad4883b7_c5df_4359_b681_9372175ee1c4.slice - libcontainer container kubepods-besteffort-podad4883b7_c5df_4359_b681_9372175ee1c4.slice. 
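The pod_startup_latency_tracker entries above report two numbers per pod: podStartE2EDuration (observed running time minus the pod's creation timestamp) and podStartSLOduration, which additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For the tigera-operator pod that works out to 7.237s minus roughly 3.202s of pulling, i.e. the logged 4.035s; for kube-proxy the pull timestamps are zero, so both durations coincide. A small Go check of that arithmetic, using the timestamps exactly as they appear in the log:

package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the layout the kubelet log uses,
// e.g. "2025-04-30 03:34:06.736245214 +0000 UTC".
func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-04-30 03:34:05 +0000 UTC")                 // podCreationTimestamp
	firstPull := mustParse("2025-04-30 03:34:06.736245214 +0000 UTC")     // firstStartedPulling
	lastPull := mustParse("2025-04-30 03:34:09.938357033 +0000 UTC")      // lastFinishedPulling
	observedRunning := mustParse("2025-04-30 03:34:12.237064733 +0000 UTC") // watchObservedRunningTime

	e2e := observedRunning.Sub(created)  // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes image-pull time

	fmt.Printf("E2E=%v SLO=%v\n", e2e, slo) // prints E2E=7.237064733s SLO=4.034952914s
}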
Apr 30 03:34:13.520608 kubelet[2493]: I0430 03:34:13.520541 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad4883b7-c5df-4359-b681-9372175ee1c4-tigera-ca-bundle\") pod \"calico-typha-5fc84786b6-2gp8f\" (UID: \"ad4883b7-c5df-4359-b681-9372175ee1c4\") " pod="calico-system/calico-typha-5fc84786b6-2gp8f" Apr 30 03:34:13.520608 kubelet[2493]: I0430 03:34:13.520601 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt5lb\" (UniqueName: \"kubernetes.io/projected/ad4883b7-c5df-4359-b681-9372175ee1c4-kube-api-access-wt5lb\") pod \"calico-typha-5fc84786b6-2gp8f\" (UID: \"ad4883b7-c5df-4359-b681-9372175ee1c4\") " pod="calico-system/calico-typha-5fc84786b6-2gp8f" Apr 30 03:34:13.521107 kubelet[2493]: I0430 03:34:13.520619 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ad4883b7-c5df-4359-b681-9372175ee1c4-typha-certs\") pod \"calico-typha-5fc84786b6-2gp8f\" (UID: \"ad4883b7-c5df-4359-b681-9372175ee1c4\") " pod="calico-system/calico-typha-5fc84786b6-2gp8f" Apr 30 03:34:13.842536 systemd[1]: Created slice kubepods-besteffort-podaa4158f9_ce84_423c_bcfa_632767bccf2c.slice - libcontainer container kubepods-besteffort-podaa4158f9_ce84_423c_bcfa_632767bccf2c.slice. Apr 30 03:34:13.924421 kubelet[2493]: I0430 03:34:13.924345 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa4158f9-ce84-423c-bcfa-632767bccf2c-node-certs\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924421 kubelet[2493]: I0430 03:34:13.924394 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-flexvol-driver-host\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924421 kubelet[2493]: I0430 03:34:13.924419 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-policysync\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924421 kubelet[2493]: I0430 03:34:13.924435 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-run-calico\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924752 kubelet[2493]: I0430 03:34:13.924454 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-net-dir\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924752 kubelet[2493]: I0430 03:34:13.924471 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-bin-dir\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924752 kubelet[2493]: I0430 03:34:13.924487 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-xtables-lock\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924752 kubelet[2493]: I0430 03:34:13.924502 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4158f9-ce84-423c-bcfa-632767bccf2c-tigera-ca-bundle\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924752 kubelet[2493]: I0430 03:34:13.924517 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htntj\" (UniqueName: \"kubernetes.io/projected/aa4158f9-ce84-423c-bcfa-632767bccf2c-kube-api-access-htntj\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924942 kubelet[2493]: I0430 03:34:13.924532 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-lib-modules\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924942 kubelet[2493]: I0430 03:34:13.924547 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-lib-calico\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.924942 kubelet[2493]: I0430 03:34:13.924560 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-log-dir\") pod \"calico-node-88dsr\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " pod="calico-system/calico-node-88dsr" Apr 30 03:34:13.950978 kubelet[2493]: E0430 03:34:13.950678 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:14.025187 kubelet[2493]: I0430 03:34:14.025145 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16f36b79-0754-4e9a-854f-8a255aa4e23b-registration-dir\") pod \"csi-node-driver-7pk55\" (UID: \"16f36b79-0754-4e9a-854f-8a255aa4e23b\") " pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:14.025343 kubelet[2493]: I0430 03:34:14.025225 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2bvp\" (UniqueName: 
\"kubernetes.io/projected/16f36b79-0754-4e9a-854f-8a255aa4e23b-kube-api-access-g2bvp\") pod \"csi-node-driver-7pk55\" (UID: \"16f36b79-0754-4e9a-854f-8a255aa4e23b\") " pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:14.025343 kubelet[2493]: I0430 03:34:14.025247 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16f36b79-0754-4e9a-854f-8a255aa4e23b-varrun\") pod \"csi-node-driver-7pk55\" (UID: \"16f36b79-0754-4e9a-854f-8a255aa4e23b\") " pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:14.025343 kubelet[2493]: I0430 03:34:14.025264 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16f36b79-0754-4e9a-854f-8a255aa4e23b-socket-dir\") pod \"csi-node-driver-7pk55\" (UID: \"16f36b79-0754-4e9a-854f-8a255aa4e23b\") " pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:14.025343 kubelet[2493]: I0430 03:34:14.025315 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16f36b79-0754-4e9a-854f-8a255aa4e23b-kubelet-dir\") pod \"csi-node-driver-7pk55\" (UID: \"16f36b79-0754-4e9a-854f-8a255aa4e23b\") " pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:14.031515 kubelet[2493]: E0430 03:34:14.031451 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.031515 kubelet[2493]: W0430 03:34:14.031504 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.031679 kubelet[2493]: E0430 03:34:14.031536 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.036944 kubelet[2493]: E0430 03:34:14.036917 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.037127 kubelet[2493]: W0430 03:34:14.037051 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.037127 kubelet[2493]: E0430 03:34:14.037083 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.060689 kubelet[2493]: E0430 03:34:14.060629 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:14.061420 containerd[1463]: time="2025-04-30T03:34:14.061358825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fc84786b6-2gp8f,Uid:ad4883b7-c5df-4359-b681-9372175ee1c4,Namespace:calico-system,Attempt:0,}" Apr 30 03:34:14.081993 kubelet[2493]: E0430 03:34:14.081636 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:14.084712 kubelet[2493]: E0430 03:34:14.084052 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.084712 kubelet[2493]: W0430 03:34:14.084073 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.084712 kubelet[2493]: E0430 03:34:14.084091 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.085048 kubelet[2493]: E0430 03:34:14.084913 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.085048 kubelet[2493]: W0430 03:34:14.084926 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.085048 kubelet[2493]: E0430 03:34:14.084937 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.087982 containerd[1463]: time="2025-04-30T03:34:14.087823784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:34:14.088399 kubelet[2493]: E0430 03:34:14.088370 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.088438 kubelet[2493]: W0430 03:34:14.088398 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.088438 kubelet[2493]: E0430 03:34:14.088431 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.088781 kubelet[2493]: E0430 03:34:14.088757 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.088781 kubelet[2493]: W0430 03:34:14.088770 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.088781 kubelet[2493]: E0430 03:34:14.088781 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.089210 containerd[1463]: time="2025-04-30T03:34:14.088976010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:34:14.089210 containerd[1463]: time="2025-04-30T03:34:14.089047114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:14.089282 kubelet[2493]: E0430 03:34:14.089108 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.089282 kubelet[2493]: W0430 03:34:14.089125 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.089282 kubelet[2493]: E0430 03:34:14.089144 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.089363 containerd[1463]: time="2025-04-30T03:34:14.089208310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:14.111818 systemd[1]: Started cri-containerd-c5d91e9ca1d0b8abd09a6b27fca052910e6c7810d637984be910197ecb6339fd.scope - libcontainer container c5d91e9ca1d0b8abd09a6b27fca052910e6c7810d637984be910197ecb6339fd. Apr 30 03:34:14.126206 kubelet[2493]: E0430 03:34:14.126169 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.126206 kubelet[2493]: W0430 03:34:14.126194 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.126699 kubelet[2493]: E0430 03:34:14.126657 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.127134 kubelet[2493]: E0430 03:34:14.127106 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.127206 kubelet[2493]: W0430 03:34:14.127133 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.127206 kubelet[2493]: E0430 03:34:14.127166 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.127646 kubelet[2493]: E0430 03:34:14.127617 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.127646 kubelet[2493]: W0430 03:34:14.127637 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.127767 kubelet[2493]: E0430 03:34:14.127659 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.127970 kubelet[2493]: E0430 03:34:14.127954 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.127970 kubelet[2493]: W0430 03:34:14.127967 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.128043 kubelet[2493]: E0430 03:34:14.127983 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.128249 kubelet[2493]: E0430 03:34:14.128232 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.128291 kubelet[2493]: W0430 03:34:14.128246 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.128291 kubelet[2493]: E0430 03:34:14.128281 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.128540 kubelet[2493]: E0430 03:34:14.128517 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.128540 kubelet[2493]: W0430 03:34:14.128530 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.128639 kubelet[2493]: E0430 03:34:14.128544 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.128807 kubelet[2493]: E0430 03:34:14.128792 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.128876 kubelet[2493]: W0430 03:34:14.128813 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.128914 kubelet[2493]: E0430 03:34:14.128879 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.129131 kubelet[2493]: E0430 03:34:14.129116 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.129170 kubelet[2493]: W0430 03:34:14.129136 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.129240 kubelet[2493]: E0430 03:34:14.129198 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.129485 kubelet[2493]: E0430 03:34:14.129466 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.129523 kubelet[2493]: W0430 03:34:14.129483 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.129706 kubelet[2493]: E0430 03:34:14.129685 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.129850 kubelet[2493]: E0430 03:34:14.129811 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.129850 kubelet[2493]: W0430 03:34:14.129835 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.129993 kubelet[2493]: E0430 03:34:14.129972 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.130322 kubelet[2493]: E0430 03:34:14.130295 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.130322 kubelet[2493]: W0430 03:34:14.130315 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.130405 kubelet[2493]: E0430 03:34:14.130368 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.131921 kubelet[2493]: E0430 03:34:14.130693 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.131921 kubelet[2493]: W0430 03:34:14.130737 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.131921 kubelet[2493]: E0430 03:34:14.130820 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.131921 kubelet[2493]: E0430 03:34:14.131094 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.131921 kubelet[2493]: W0430 03:34:14.131103 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.131921 kubelet[2493]: E0430 03:34:14.131174 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.131921 kubelet[2493]: E0430 03:34:14.131455 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.131921 kubelet[2493]: W0430 03:34:14.131463 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.131921 kubelet[2493]: E0430 03:34:14.131539 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.132224 kubelet[2493]: E0430 03:34:14.131936 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.132224 kubelet[2493]: W0430 03:34:14.132024 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.132224 kubelet[2493]: E0430 03:34:14.132147 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.133079 kubelet[2493]: E0430 03:34:14.132407 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.133079 kubelet[2493]: W0430 03:34:14.132422 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.133079 kubelet[2493]: E0430 03:34:14.132488 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.133079 kubelet[2493]: E0430 03:34:14.132796 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.133079 kubelet[2493]: W0430 03:34:14.132805 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.133079 kubelet[2493]: E0430 03:34:14.132946 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.133217 kubelet[2493]: E0430 03:34:14.133090 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.133217 kubelet[2493]: W0430 03:34:14.133099 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.133264 kubelet[2493]: E0430 03:34:14.133215 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.133353 kubelet[2493]: E0430 03:34:14.133327 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.133353 kubelet[2493]: W0430 03:34:14.133338 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.133412 kubelet[2493]: E0430 03:34:14.133363 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.133663 kubelet[2493]: E0430 03:34:14.133642 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.133663 kubelet[2493]: W0430 03:34:14.133654 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.133796 kubelet[2493]: E0430 03:34:14.133780 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.133905 kubelet[2493]: E0430 03:34:14.133890 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.133905 kubelet[2493]: W0430 03:34:14.133901 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.134011 kubelet[2493]: E0430 03:34:14.133997 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.134208 kubelet[2493]: E0430 03:34:14.134195 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.134208 kubelet[2493]: W0430 03:34:14.134206 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.134267 kubelet[2493]: E0430 03:34:14.134226 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.134720 kubelet[2493]: E0430 03:34:14.134669 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.134720 kubelet[2493]: W0430 03:34:14.134710 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.134791 kubelet[2493]: E0430 03:34:14.134728 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.135089 kubelet[2493]: E0430 03:34:14.135065 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.135089 kubelet[2493]: W0430 03:34:14.135078 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.135089 kubelet[2493]: E0430 03:34:14.135090 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.135418 kubelet[2493]: E0430 03:34:14.135397 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.135418 kubelet[2493]: W0430 03:34:14.135410 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.135418 kubelet[2493]: E0430 03:34:14.135420 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:14.142408 kubelet[2493]: E0430 03:34:14.142365 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:14.142408 kubelet[2493]: W0430 03:34:14.142388 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:14.142408 kubelet[2493]: E0430 03:34:14.142403 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:14.146539 kubelet[2493]: E0430 03:34:14.146510 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:14.147559 containerd[1463]: time="2025-04-30T03:34:14.147454418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-88dsr,Uid:aa4158f9-ce84-423c-bcfa-632767bccf2c,Namespace:calico-system,Attempt:0,}" Apr 30 03:34:14.157688 containerd[1463]: time="2025-04-30T03:34:14.157618637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fc84786b6-2gp8f,Uid:ad4883b7-c5df-4359-b681-9372175ee1c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5d91e9ca1d0b8abd09a6b27fca052910e6c7810d637984be910197ecb6339fd\"" Apr 30 03:34:14.158883 kubelet[2493]: E0430 03:34:14.158854 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:14.161068 containerd[1463]: time="2025-04-30T03:34:14.160720065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:34:14.344544 containerd[1463]: time="2025-04-30T03:34:14.344395566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:34:14.344544 containerd[1463]: time="2025-04-30T03:34:14.344473914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:34:14.344544 containerd[1463]: time="2025-04-30T03:34:14.344486597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:14.344815 containerd[1463]: time="2025-04-30T03:34:14.344578621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:14.369790 systemd[1]: Started cri-containerd-e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4.scope - libcontainer container e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4. 
Apr 30 03:34:14.395811 containerd[1463]: time="2025-04-30T03:34:14.395770215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-88dsr,Uid:aa4158f9-ce84-423c-bcfa-632767bccf2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\"" Apr 30 03:34:14.396548 kubelet[2493]: E0430 03:34:14.396523 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:15.699836 kubelet[2493]: E0430 03:34:15.699764 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:16.093173 containerd[1463]: time="2025-04-30T03:34:16.093098093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:16.094159 containerd[1463]: time="2025-04-30T03:34:16.094104221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:34:16.095650 containerd[1463]: time="2025-04-30T03:34:16.095608070Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:16.098112 containerd[1463]: time="2025-04-30T03:34:16.098063845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:16.098888 containerd[1463]: time="2025-04-30T03:34:16.098848064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.938041706s" Apr 30 03:34:16.098944 containerd[1463]: time="2025-04-30T03:34:16.098895174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:34:16.100234 containerd[1463]: time="2025-04-30T03:34:16.100053429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:34:16.108540 containerd[1463]: time="2025-04-30T03:34:16.108166741Z" level=info msg="CreateContainer within sandbox \"c5d91e9ca1d0b8abd09a6b27fca052910e6c7810d637984be910197ecb6339fd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:34:16.126339 containerd[1463]: time="2025-04-30T03:34:16.126293201Z" level=info msg="CreateContainer within sandbox \"c5d91e9ca1d0b8abd09a6b27fca052910e6c7810d637984be910197ecb6339fd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cd66a5a01d557a587f76294beabc6cec4ee0e211330f129d2ac78e9b9f5a56ad\"" Apr 30 03:34:16.127148 containerd[1463]: time="2025-04-30T03:34:16.127016185Z" level=info msg="StartContainer for \"cd66a5a01d557a587f76294beabc6cec4ee0e211330f129d2ac78e9b9f5a56ad\"" Apr 30 03:34:16.160740 
systemd[1]: Started cri-containerd-cd66a5a01d557a587f76294beabc6cec4ee0e211330f129d2ac78e9b9f5a56ad.scope - libcontainer container cd66a5a01d557a587f76294beabc6cec4ee0e211330f129d2ac78e9b9f5a56ad. Apr 30 03:34:16.203230 containerd[1463]: time="2025-04-30T03:34:16.203099586Z" level=info msg="StartContainer for \"cd66a5a01d557a587f76294beabc6cec4ee0e211330f129d2ac78e9b9f5a56ad\" returns successfully" Apr 30 03:34:16.750047 kubelet[2493]: E0430 03:34:16.749606 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:16.761974 kubelet[2493]: I0430 03:34:16.761798 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fc84786b6-2gp8f" podStartSLOduration=1.82196083 podStartE2EDuration="3.761780683s" podCreationTimestamp="2025-04-30 03:34:13 +0000 UTC" firstStartedPulling="2025-04-30 03:34:14.159966302 +0000 UTC m=+13.558855247" lastFinishedPulling="2025-04-30 03:34:16.099786145 +0000 UTC m=+15.498675100" observedRunningTime="2025-04-30 03:34:16.761472702 +0000 UTC m=+16.160361638" watchObservedRunningTime="2025-04-30 03:34:16.761780683 +0000 UTC m=+16.160669628" Apr 30 03:34:16.807313 kubelet[2493]: E0430 03:34:16.807275 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.807313 kubelet[2493]: W0430 03:34:16.807298 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.807500 kubelet[2493]: E0430 03:34:16.807328 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.807722 kubelet[2493]: E0430 03:34:16.807699 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.807722 kubelet[2493]: W0430 03:34:16.807713 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.807722 kubelet[2493]: E0430 03:34:16.807722 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.807989 kubelet[2493]: E0430 03:34:16.807971 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.807989 kubelet[2493]: W0430 03:34:16.807983 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.807989 kubelet[2493]: E0430 03:34:16.807991 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:16.808233 kubelet[2493]: E0430 03:34:16.808220 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.808233 kubelet[2493]: W0430 03:34:16.808230 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.808282 kubelet[2493]: E0430 03:34:16.808238 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.808431 kubelet[2493]: E0430 03:34:16.808417 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.808431 kubelet[2493]: W0430 03:34:16.808428 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.808612 kubelet[2493]: E0430 03:34:16.808597 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.808807 kubelet[2493]: E0430 03:34:16.808791 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.808807 kubelet[2493]: W0430 03:34:16.808802 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.808878 kubelet[2493]: E0430 03:34:16.808823 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.809040 kubelet[2493]: E0430 03:34:16.809027 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.809040 kubelet[2493]: W0430 03:34:16.809036 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.809091 kubelet[2493]: E0430 03:34:16.809044 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.809247 kubelet[2493]: E0430 03:34:16.809234 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.809247 kubelet[2493]: W0430 03:34:16.809244 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.809291 kubelet[2493]: E0430 03:34:16.809251 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:16.809462 kubelet[2493]: E0430 03:34:16.809449 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.809462 kubelet[2493]: W0430 03:34:16.809458 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.809506 kubelet[2493]: E0430 03:34:16.809466 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.809704 kubelet[2493]: E0430 03:34:16.809687 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.809743 kubelet[2493]: W0430 03:34:16.809731 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.809743 kubelet[2493]: E0430 03:34:16.809741 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.810067 kubelet[2493]: E0430 03:34:16.810050 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.810067 kubelet[2493]: W0430 03:34:16.810062 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.810126 kubelet[2493]: E0430 03:34:16.810071 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.810318 kubelet[2493]: E0430 03:34:16.810304 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.810318 kubelet[2493]: W0430 03:34:16.810315 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.810375 kubelet[2493]: E0430 03:34:16.810324 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.810534 kubelet[2493]: E0430 03:34:16.810521 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.810534 kubelet[2493]: W0430 03:34:16.810531 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.810593 kubelet[2493]: E0430 03:34:16.810540 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:16.810765 kubelet[2493]: E0430 03:34:16.810752 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.810765 kubelet[2493]: W0430 03:34:16.810762 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.810810 kubelet[2493]: E0430 03:34:16.810769 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.810997 kubelet[2493]: E0430 03:34:16.810983 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.810997 kubelet[2493]: W0430 03:34:16.810994 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.811048 kubelet[2493]: E0430 03:34:16.811003 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.845726 kubelet[2493]: E0430 03:34:16.845644 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.845726 kubelet[2493]: W0430 03:34:16.845673 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.845726 kubelet[2493]: E0430 03:34:16.845696 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.846080 kubelet[2493]: E0430 03:34:16.846034 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.846080 kubelet[2493]: W0430 03:34:16.846061 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.846163 kubelet[2493]: E0430 03:34:16.846088 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.846411 kubelet[2493]: E0430 03:34:16.846370 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.846411 kubelet[2493]: W0430 03:34:16.846389 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.846411 kubelet[2493]: E0430 03:34:16.846408 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:16.846793 kubelet[2493]: E0430 03:34:16.846746 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.846793 kubelet[2493]: W0430 03:34:16.846774 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.846891 kubelet[2493]: E0430 03:34:16.846826 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.847124 kubelet[2493]: E0430 03:34:16.847087 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.847124 kubelet[2493]: W0430 03:34:16.847101 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.847124 kubelet[2493]: E0430 03:34:16.847116 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.847363 kubelet[2493]: E0430 03:34:16.847325 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.847363 kubelet[2493]: W0430 03:34:16.847341 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.847363 kubelet[2493]: E0430 03:34:16.847357 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.847573 kubelet[2493]: E0430 03:34:16.847542 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.847573 kubelet[2493]: W0430 03:34:16.847553 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.847573 kubelet[2493]: E0430 03:34:16.847564 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.847822 kubelet[2493]: E0430 03:34:16.847779 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.847822 kubelet[2493]: W0430 03:34:16.847789 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.847822 kubelet[2493]: E0430 03:34:16.847802 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:16.848123 kubelet[2493]: E0430 03:34:16.848086 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.848123 kubelet[2493]: W0430 03:34:16.848100 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.848123 kubelet[2493]: E0430 03:34:16.848115 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.848352 kubelet[2493]: E0430 03:34:16.848330 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.848352 kubelet[2493]: W0430 03:34:16.848340 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.848430 kubelet[2493]: E0430 03:34:16.848372 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.848557 kubelet[2493]: E0430 03:34:16.848534 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.848557 kubelet[2493]: W0430 03:34:16.848544 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.848644 kubelet[2493]: E0430 03:34:16.848571 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.848768 kubelet[2493]: E0430 03:34:16.848744 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.848768 kubelet[2493]: W0430 03:34:16.848755 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.848768 kubelet[2493]: E0430 03:34:16.848768 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.848995 kubelet[2493]: E0430 03:34:16.848972 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.848995 kubelet[2493]: W0430 03:34:16.848982 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.848995 kubelet[2493]: E0430 03:34:16.848996 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:16.849231 kubelet[2493]: E0430 03:34:16.849208 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.849231 kubelet[2493]: W0430 03:34:16.849220 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.849311 kubelet[2493]: E0430 03:34:16.849236 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.849506 kubelet[2493]: E0430 03:34:16.849486 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.849506 kubelet[2493]: W0430 03:34:16.849495 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.849506 kubelet[2493]: E0430 03:34:16.849508 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.849761 kubelet[2493]: E0430 03:34:16.849738 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.849761 kubelet[2493]: W0430 03:34:16.849750 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.849761 kubelet[2493]: E0430 03:34:16.849764 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.849994 kubelet[2493]: E0430 03:34:16.849973 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.849994 kubelet[2493]: W0430 03:34:16.849983 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.849994 kubelet[2493]: E0430 03:34:16.849991 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:34:16.850603 kubelet[2493]: E0430 03:34:16.850566 2493 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:34:16.850603 kubelet[2493]: W0430 03:34:16.850593 2493 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:34:16.850603 kubelet[2493]: E0430 03:34:16.850602 2493 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:34:17.499534 containerd[1463]: time="2025-04-30T03:34:17.499450786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:17.500373 containerd[1463]: time="2025-04-30T03:34:17.500329795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:34:17.501767 containerd[1463]: time="2025-04-30T03:34:17.501698036Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:17.503951 containerd[1463]: time="2025-04-30T03:34:17.503889590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:17.504356 containerd[1463]: time="2025-04-30T03:34:17.504311556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.404223392s" Apr 30 03:34:17.504356 containerd[1463]: time="2025-04-30T03:34:17.504346031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:34:17.506797 containerd[1463]: time="2025-04-30T03:34:17.506754456Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:34:17.524446 containerd[1463]: time="2025-04-30T03:34:17.524378524Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa\"" Apr 30 03:34:17.525189 containerd[1463]: time="2025-04-30T03:34:17.524904697Z" level=info msg="StartContainer for \"c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa\"" Apr 30 03:34:17.555742 systemd[1]: Started cri-containerd-c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa.scope - libcontainer container c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa. Apr 30 03:34:17.592314 containerd[1463]: time="2025-04-30T03:34:17.592243755Z" level=info msg="StartContainer for \"c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa\" returns successfully" Apr 30 03:34:17.608616 systemd[1]: cri-containerd-c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa.scope: Deactivated successfully. 
Apr 30 03:34:17.700496 kubelet[2493]: E0430 03:34:17.700431 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:17.753259 kubelet[2493]: E0430 03:34:17.753102 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:17.753259 kubelet[2493]: E0430 03:34:17.753215 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:17.863555 containerd[1463]: time="2025-04-30T03:34:17.863426878Z" level=info msg="shim disconnected" id=c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa namespace=k8s.io Apr 30 03:34:17.863555 containerd[1463]: time="2025-04-30T03:34:17.863531445Z" level=warning msg="cleaning up after shim disconnected" id=c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa namespace=k8s.io Apr 30 03:34:17.863555 containerd[1463]: time="2025-04-30T03:34:17.863546152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:18.105913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa-rootfs.mount: Deactivated successfully. Apr 30 03:34:18.756266 kubelet[2493]: E0430 03:34:18.756237 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:18.756735 kubelet[2493]: E0430 03:34:18.756333 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:18.757454 containerd[1463]: time="2025-04-30T03:34:18.757409013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:34:19.700050 kubelet[2493]: E0430 03:34:19.699963 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:21.700071 kubelet[2493]: E0430 03:34:21.700017 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:22.328650 containerd[1463]: time="2025-04-30T03:34:22.328572203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:22.329506 containerd[1463]: time="2025-04-30T03:34:22.329424377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:34:22.330751 containerd[1463]: time="2025-04-30T03:34:22.330702805Z" level=info msg="ImageCreate event 
name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:22.332940 containerd[1463]: time="2025-04-30T03:34:22.332896147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:22.333596 containerd[1463]: time="2025-04-30T03:34:22.333527527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.576076694s" Apr 30 03:34:22.333596 containerd[1463]: time="2025-04-30T03:34:22.333574395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:34:22.336485 containerd[1463]: time="2025-04-30T03:34:22.336424302Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:34:22.362569 containerd[1463]: time="2025-04-30T03:34:22.362500900Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2\"" Apr 30 03:34:22.363568 containerd[1463]: time="2025-04-30T03:34:22.363518697Z" level=info msg="StartContainer for \"4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2\"" Apr 30 03:34:22.400767 systemd[1]: Started cri-containerd-4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2.scope - libcontainer container 4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2. Apr 30 03:34:22.511612 containerd[1463]: time="2025-04-30T03:34:22.511500470Z" level=info msg="StartContainer for \"4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2\" returns successfully" Apr 30 03:34:23.043912 kubelet[2493]: E0430 03:34:23.043317 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:23.775692 kubelet[2493]: E0430 03:34:23.775553 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:24.041677 kubelet[2493]: E0430 03:34:24.041574 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:24.226424 systemd[1]: cri-containerd-4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2.scope: Deactivated successfully. Apr 30 03:34:24.250244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2-rootfs.mount: Deactivated successfully. 
Apr 30 03:34:24.317406 kubelet[2493]: I0430 03:34:24.317276 2493 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 03:34:24.468817 containerd[1463]: time="2025-04-30T03:34:24.468329622Z" level=info msg="shim disconnected" id=4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2 namespace=k8s.io Apr 30 03:34:24.468817 containerd[1463]: time="2025-04-30T03:34:24.468405986Z" level=warning msg="cleaning up after shim disconnected" id=4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2 namespace=k8s.io Apr 30 03:34:24.468817 containerd[1463]: time="2025-04-30T03:34:24.468419322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:24.475148 systemd[1]: Created slice kubepods-burstable-podcf00a3ce_f78a_4b3e_b1fb_0e2e59ff5a32.slice - libcontainer container kubepods-burstable-podcf00a3ce_f78a_4b3e_b1fb_0e2e59ff5a32.slice. Apr 30 03:34:24.486514 containerd[1463]: time="2025-04-30T03:34:24.486452320Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:34:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:34:24.491937 systemd[1]: Created slice kubepods-besteffort-pod0a7a292c_5ed8_4ffb_8ca7_ff54dcfc3281.slice - libcontainer container kubepods-besteffort-pod0a7a292c_5ed8_4ffb_8ca7_ff54dcfc3281.slice. Apr 30 03:34:24.494277 kubelet[2493]: I0430 03:34:24.494230 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/241582b1-4172-41e0-a757-624f1063d729-calico-apiserver-certs\") pod \"calico-apiserver-6fc8df768b-5gqn4\" (UID: \"241582b1-4172-41e0-a757-624f1063d729\") " pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" Apr 30 03:34:24.494277 kubelet[2493]: I0430 03:34:24.494272 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32-config-volume\") pod \"coredns-6f6b679f8f-7jcld\" (UID: \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\") " pod="kube-system/coredns-6f6b679f8f-7jcld" Apr 30 03:34:24.494431 kubelet[2493]: I0430 03:34:24.494294 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqb7\" (UniqueName: \"kubernetes.io/projected/cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32-kube-api-access-2xqb7\") pod \"coredns-6f6b679f8f-7jcld\" (UID: \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\") " pod="kube-system/coredns-6f6b679f8f-7jcld" Apr 30 03:34:24.494431 kubelet[2493]: I0430 03:34:24.494317 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ddbf64df-5b81-40d9-b056-7dac1c53f65d-calico-apiserver-certs\") pod \"calico-apiserver-6fc8df768b-znvmt\" (UID: \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\") " pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" Apr 30 03:34:24.494431 kubelet[2493]: I0430 03:34:24.494336 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f3172f0-7cdf-426e-b2bc-b5e5053a3b93-config-volume\") pod \"coredns-6f6b679f8f-sw6pw\" (UID: \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\") " pod="kube-system/coredns-6f6b679f8f-sw6pw" Apr 30 03:34:24.494431 kubelet[2493]: I0430 
03:34:24.494359 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntp5h\" (UniqueName: \"kubernetes.io/projected/0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281-kube-api-access-ntp5h\") pod \"calico-kube-controllers-669b88b944-thr8d\" (UID: \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\") " pod="calico-system/calico-kube-controllers-669b88b944-thr8d" Apr 30 03:34:24.494431 kubelet[2493]: I0430 03:34:24.494382 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhvb\" (UniqueName: \"kubernetes.io/projected/5f3172f0-7cdf-426e-b2bc-b5e5053a3b93-kube-api-access-2zhvb\") pod \"coredns-6f6b679f8f-sw6pw\" (UID: \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\") " pod="kube-system/coredns-6f6b679f8f-sw6pw" Apr 30 03:34:24.494617 kubelet[2493]: I0430 03:34:24.494404 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4sz4\" (UniqueName: \"kubernetes.io/projected/241582b1-4172-41e0-a757-624f1063d729-kube-api-access-g4sz4\") pod \"calico-apiserver-6fc8df768b-5gqn4\" (UID: \"241582b1-4172-41e0-a757-624f1063d729\") " pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" Apr 30 03:34:24.494617 kubelet[2493]: I0430 03:34:24.494422 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281-tigera-ca-bundle\") pod \"calico-kube-controllers-669b88b944-thr8d\" (UID: \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\") " pod="calico-system/calico-kube-controllers-669b88b944-thr8d" Apr 30 03:34:24.494617 kubelet[2493]: I0430 03:34:24.494442 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9df\" (UniqueName: \"kubernetes.io/projected/ddbf64df-5b81-40d9-b056-7dac1c53f65d-kube-api-access-8b9df\") pod \"calico-apiserver-6fc8df768b-znvmt\" (UID: \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\") " pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" Apr 30 03:34:24.497990 systemd[1]: Created slice kubepods-burstable-pod5f3172f0_7cdf_426e_b2bc_b5e5053a3b93.slice - libcontainer container kubepods-burstable-pod5f3172f0_7cdf_426e_b2bc_b5e5053a3b93.slice. Apr 30 03:34:24.506141 systemd[1]: Created slice kubepods-besteffort-pod241582b1_4172_41e0_a757_624f1063d729.slice - libcontainer container kubepods-besteffort-pod241582b1_4172_41e0_a757_624f1063d729.slice. Apr 30 03:34:24.512618 systemd[1]: Created slice kubepods-besteffort-podddbf64df_5b81_40d9_b056_7dac1c53f65d.slice - libcontainer container kubepods-besteffort-podddbf64df_5b81_40d9_b056_7dac1c53f65d.slice. 
Apr 30 03:34:24.783136 kubelet[2493]: E0430 03:34:24.782980 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:24.784143 containerd[1463]: time="2025-04-30T03:34:24.784025316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7jcld,Uid:cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32,Namespace:kube-system,Attempt:0,}" Apr 30 03:34:24.796508 containerd[1463]: time="2025-04-30T03:34:24.796460407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669b88b944-thr8d,Uid:0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281,Namespace:calico-system,Attempt:0,}" Apr 30 03:34:24.802836 kubelet[2493]: E0430 03:34:24.802789 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:24.803352 containerd[1463]: time="2025-04-30T03:34:24.803307837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw6pw,Uid:5f3172f0-7cdf-426e-b2bc-b5e5053a3b93,Namespace:kube-system,Attempt:0,}" Apr 30 03:34:24.810690 containerd[1463]: time="2025-04-30T03:34:24.810571893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-5gqn4,Uid:241582b1-4172-41e0-a757-624f1063d729,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:34:24.815804 containerd[1463]: time="2025-04-30T03:34:24.815765550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-znvmt,Uid:ddbf64df-5b81-40d9-b056-7dac1c53f65d,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:34:24.914167 containerd[1463]: time="2025-04-30T03:34:24.914100898Z" level=error msg="Failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.915492 containerd[1463]: time="2025-04-30T03:34:24.915337917Z" level=error msg="encountered an error cleaning up failed sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.915492 containerd[1463]: time="2025-04-30T03:34:24.915392951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7jcld,Uid:cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.915867 kubelet[2493]: E0430 03:34:24.915815 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
03:34:24.916027 kubelet[2493]: E0430 03:34:24.915997 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7jcld" Apr 30 03:34:24.916128 kubelet[2493]: E0430 03:34:24.916064 2493 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7jcld" Apr 30 03:34:24.916321 kubelet[2493]: E0430 03:34:24.916234 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7jcld_kube-system(cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7jcld_kube-system(cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7jcld" podUID="cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32" Apr 30 03:34:24.918877 containerd[1463]: time="2025-04-30T03:34:24.918713672Z" level=error msg="Failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.919344 containerd[1463]: time="2025-04-30T03:34:24.919312500Z" level=error msg="encountered an error cleaning up failed sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.919472 containerd[1463]: time="2025-04-30T03:34:24.919443957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669b88b944-thr8d,Uid:0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.920613 kubelet[2493]: E0430 03:34:24.919897 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.920613 kubelet[2493]: E0430 03:34:24.919983 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" Apr 30 03:34:24.920613 kubelet[2493]: E0430 03:34:24.920014 2493 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" Apr 30 03:34:24.920770 kubelet[2493]: E0430 03:34:24.920069 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-669b88b944-thr8d_calico-system(0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669b88b944-thr8d_calico-system(0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podUID="0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281" Apr 30 03:34:24.943844 containerd[1463]: time="2025-04-30T03:34:24.943774913Z" level=error msg="Failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944144 containerd[1463]: time="2025-04-30T03:34:24.943790302Z" level=error msg="Failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944329 containerd[1463]: time="2025-04-30T03:34:24.944291285Z" level=error msg="encountered an error cleaning up failed sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944392 containerd[1463]: time="2025-04-30T03:34:24.944361026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw6pw,Uid:5f3172f0-7cdf-426e-b2bc-b5e5053a3b93,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944707 containerd[1463]: time="2025-04-30T03:34:24.944562486Z" level=error msg="encountered an error cleaning up failed sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944707 containerd[1463]: time="2025-04-30T03:34:24.944632086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-znvmt,Uid:ddbf64df-5b81-40d9-b056-7dac1c53f65d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944951 kubelet[2493]: E0430 03:34:24.944687 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944951 kubelet[2493]: E0430 03:34:24.944787 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sw6pw" Apr 30 03:34:24.944951 kubelet[2493]: E0430 03:34:24.944768 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.944951 kubelet[2493]: E0430 03:34:24.944854 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" Apr 30 03:34:24.945107 kubelet[2493]: E0430 03:34:24.944871 2493 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" Apr 30 03:34:24.945107 
kubelet[2493]: E0430 03:34:24.944823 2493 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-sw6pw" Apr 30 03:34:24.945107 kubelet[2493]: E0430 03:34:24.944912 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc8df768b-znvmt_calico-apiserver(ddbf64df-5b81-40d9-b056-7dac1c53f65d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc8df768b-znvmt_calico-apiserver(ddbf64df-5b81-40d9-b056-7dac1c53f65d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podUID="ddbf64df-5b81-40d9-b056-7dac1c53f65d" Apr 30 03:34:24.945218 kubelet[2493]: E0430 03:34:24.944946 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-sw6pw_kube-system(5f3172f0-7cdf-426e-b2bc-b5e5053a3b93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-sw6pw_kube-system(5f3172f0-7cdf-426e-b2bc-b5e5053a3b93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw6pw" podUID="5f3172f0-7cdf-426e-b2bc-b5e5053a3b93" Apr 30 03:34:24.947129 containerd[1463]: time="2025-04-30T03:34:24.947086888Z" level=error msg="Failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.947426 containerd[1463]: time="2025-04-30T03:34:24.947397733Z" level=error msg="encountered an error cleaning up failed sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.947497 containerd[1463]: time="2025-04-30T03:34:24.947448078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-5gqn4,Uid:241582b1-4172-41e0-a757-624f1063d729,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.947650 kubelet[2493]: E0430 
03:34:24.947622 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:24.947723 kubelet[2493]: E0430 03:34:24.947660 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" Apr 30 03:34:24.947723 kubelet[2493]: E0430 03:34:24.947683 2493 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" Apr 30 03:34:24.947796 kubelet[2493]: E0430 03:34:24.947729 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc8df768b-5gqn4_calico-apiserver(241582b1-4172-41e0-a757-624f1063d729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc8df768b-5gqn4_calico-apiserver(241582b1-4172-41e0-a757-624f1063d729)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" podUID="241582b1-4172-41e0-a757-624f1063d729" Apr 30 03:34:25.044379 kubelet[2493]: I0430 03:34:25.044337 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:34:25.045170 containerd[1463]: time="2025-04-30T03:34:25.045119607Z" level=info msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\"" Apr 30 03:34:25.045388 containerd[1463]: time="2025-04-30T03:34:25.045362734Z" level=info msg="Ensure that sandbox 3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af in task-service has been cleanup successfully" Apr 30 03:34:25.045803 kubelet[2493]: I0430 03:34:25.045745 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:34:25.046430 containerd[1463]: time="2025-04-30T03:34:25.046375240Z" level=info msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\"" Apr 30 03:34:25.046607 containerd[1463]: time="2025-04-30T03:34:25.046552514Z" level=info msg="Ensure that sandbox a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9 in task-service has been cleanup successfully" Apr 30 03:34:25.050787 kubelet[2493]: E0430 
03:34:25.050741 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:25.052129 kubelet[2493]: I0430 03:34:25.052108 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:34:25.052651 containerd[1463]: time="2025-04-30T03:34:25.052500959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:34:25.052863 containerd[1463]: time="2025-04-30T03:34:25.052817296Z" level=info msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\"" Apr 30 03:34:25.053317 containerd[1463]: time="2025-04-30T03:34:25.053287641Z" level=info msg="Ensure that sandbox 81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19 in task-service has been cleanup successfully" Apr 30 03:34:25.054461 kubelet[2493]: I0430 03:34:25.054427 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:34:25.056153 containerd[1463]: time="2025-04-30T03:34:25.056103681Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" Apr 30 03:34:25.056365 containerd[1463]: time="2025-04-30T03:34:25.056319887Z" level=info msg="Ensure that sandbox 9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666 in task-service has been cleanup successfully" Apr 30 03:34:25.057233 kubelet[2493]: I0430 03:34:25.056809 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:34:25.057837 containerd[1463]: time="2025-04-30T03:34:25.057805755Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" Apr 30 03:34:25.059786 containerd[1463]: time="2025-04-30T03:34:25.059743812Z" level=info msg="Ensure that sandbox 7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c in task-service has been cleanup successfully" Apr 30 03:34:25.094968 containerd[1463]: time="2025-04-30T03:34:25.093992574Z" level=error msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\" failed" error="failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.095162 kubelet[2493]: E0430 03:34:25.094739 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:34:25.095162 kubelet[2493]: E0430 03:34:25.094811 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af"} Apr 30 03:34:25.095162 kubelet[2493]: E0430 03:34:25.094885 2493 kuberuntime_manager.go:1077] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:25.095162 kubelet[2493]: E0430 03:34:25.094916 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" podUID="241582b1-4172-41e0-a757-624f1063d729" Apr 30 03:34:25.112640 containerd[1463]: time="2025-04-30T03:34:25.112561996Z" level=error msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\" failed" error="failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.112912 kubelet[2493]: E0430 03:34:25.112872 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:34:25.112981 kubelet[2493]: E0430 03:34:25.112926 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9"} Apr 30 03:34:25.112981 kubelet[2493]: E0430 03:34:25.112962 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:25.113068 kubelet[2493]: E0430 03:34:25.112985 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw6pw" 
podUID="5f3172f0-7cdf-426e-b2bc-b5e5053a3b93" Apr 30 03:34:25.115270 containerd[1463]: time="2025-04-30T03:34:25.115214238Z" level=error msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\" failed" error="failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.115423 kubelet[2493]: E0430 03:34:25.115400 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:34:25.115568 kubelet[2493]: E0430 03:34:25.115500 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19"} Apr 30 03:34:25.115568 kubelet[2493]: E0430 03:34:25.115527 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:25.115568 kubelet[2493]: E0430 03:34:25.115548 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7jcld" podUID="cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32" Apr 30 03:34:25.121461 containerd[1463]: time="2025-04-30T03:34:25.121417974Z" level=error msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" failed" error="failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.121578 kubelet[2493]: E0430 03:34:25.121546 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:34:25.121637 kubelet[2493]: E0430 
03:34:25.121576 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c"} Apr 30 03:34:25.121637 kubelet[2493]: E0430 03:34:25.121612 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:25.121637 kubelet[2493]: E0430 03:34:25.121631 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podUID="ddbf64df-5b81-40d9-b056-7dac1c53f65d" Apr 30 03:34:25.121785 containerd[1463]: time="2025-04-30T03:34:25.121761270Z" level=error msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" failed" error="failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.121956 kubelet[2493]: E0430 03:34:25.121930 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:34:25.121997 kubelet[2493]: E0430 03:34:25.121959 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666"} Apr 30 03:34:25.121997 kubelet[2493]: E0430 03:34:25.121982 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:25.122069 kubelet[2493]: E0430 03:34:25.121999 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podUID="0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281" Apr 30 03:34:25.705956 systemd[1]: Created slice kubepods-besteffort-pod16f36b79_0754_4e9a_854f_8a255aa4e23b.slice - libcontainer container kubepods-besteffort-pod16f36b79_0754_4e9a_854f_8a255aa4e23b.slice. Apr 30 03:34:25.708334 containerd[1463]: time="2025-04-30T03:34:25.708301617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pk55,Uid:16f36b79-0754-4e9a-854f-8a255aa4e23b,Namespace:calico-system,Attempt:0,}" Apr 30 03:34:25.864545 containerd[1463]: time="2025-04-30T03:34:25.864387660Z" level=error msg="Failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.865001 containerd[1463]: time="2025-04-30T03:34:25.864960016Z" level=error msg="encountered an error cleaning up failed sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.865073 containerd[1463]: time="2025-04-30T03:34:25.865044816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pk55,Uid:16f36b79-0754-4e9a-854f-8a255aa4e23b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.865445 kubelet[2493]: E0430 03:34:25.865357 2493 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:25.865445 kubelet[2493]: E0430 03:34:25.865454 2493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:25.866087 kubelet[2493]: E0430 03:34:25.865476 2493 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-7pk55" Apr 30 03:34:25.866087 kubelet[2493]: E0430 03:34:25.865529 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7pk55_calico-system(16f36b79-0754-4e9a-854f-8a255aa4e23b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7pk55_calico-system(16f36b79-0754-4e9a-854f-8a255aa4e23b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:25.867171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8-shm.mount: Deactivated successfully. Apr 30 03:34:26.060782 kubelet[2493]: I0430 03:34:26.060721 2493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:34:26.061423 containerd[1463]: time="2025-04-30T03:34:26.061387977Z" level=info msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\"" Apr 30 03:34:26.061825 containerd[1463]: time="2025-04-30T03:34:26.061789762Z" level=info msg="Ensure that sandbox 7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8 in task-service has been cleanup successfully" Apr 30 03:34:26.092290 containerd[1463]: time="2025-04-30T03:34:26.092217320Z" level=error msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\" failed" error="failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:26.092606 kubelet[2493]: E0430 03:34:26.092542 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:34:26.092673 kubelet[2493]: E0430 03:34:26.092618 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8"} Apr 30 03:34:26.092724 kubelet[2493]: E0430 03:34:26.092664 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:26.092795 kubelet[2493]: E0430 03:34:26.092734 2493 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:28.297900 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:60566.service - OpenSSH per-connection server daemon (10.0.0.1:60566). Apr 30 03:34:28.345047 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 60566 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:28.347235 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:28.359625 systemd-logind[1444]: New session 8 of user core. Apr 30 03:34:28.365881 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:34:28.515098 sshd[3614]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:28.519554 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:34:28.520197 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:60566.service: Deactivated successfully. Apr 30 03:34:28.522051 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:34:28.524249 systemd-logind[1444]: Removed session 8. Apr 30 03:34:30.133382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175997842.mount: Deactivated successfully. Apr 30 03:34:31.370683 containerd[1463]: time="2025-04-30T03:34:31.370554580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:31.394695 containerd[1463]: time="2025-04-30T03:34:31.394506463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:34:31.418784 containerd[1463]: time="2025-04-30T03:34:31.418680031Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:31.421065 containerd[1463]: time="2025-04-30T03:34:31.421031762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:34:31.421634 containerd[1463]: time="2025-04-30T03:34:31.421566018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.369015905s" Apr 30 03:34:31.421691 containerd[1463]: time="2025-04-30T03:34:31.421640307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:34:31.429804 containerd[1463]: time="2025-04-30T03:34:31.429679772Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:34:31.481289 containerd[1463]: time="2025-04-30T03:34:31.481215414Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c\"" Apr 30 03:34:31.481854 containerd[1463]: time="2025-04-30T03:34:31.481825351Z" level=info msg="StartContainer for \"d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c\"" Apr 30 03:34:31.564017 systemd[1]: Started cri-containerd-d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c.scope - libcontainer container d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c. Apr 30 03:34:31.609634 containerd[1463]: time="2025-04-30T03:34:31.609553123Z" level=info msg="StartContainer for \"d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c\" returns successfully" Apr 30 03:34:31.677533 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:34:31.677692 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 03:34:31.703666 systemd[1]: cri-containerd-d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c.scope: Deactivated successfully. Apr 30 03:34:31.737036 containerd[1463]: time="2025-04-30T03:34:31.736938261Z" level=info msg="shim disconnected" id=d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c namespace=k8s.io Apr 30 03:34:31.737036 containerd[1463]: time="2025-04-30T03:34:31.737022219Z" level=warning msg="cleaning up after shim disconnected" id=d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c namespace=k8s.io Apr 30 03:34:31.737372 containerd[1463]: time="2025-04-30T03:34:31.737033550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:32.079393 kubelet[2493]: I0430 03:34:32.079341 2493 scope.go:117] "RemoveContainer" containerID="d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c" Apr 30 03:34:32.079928 kubelet[2493]: E0430 03:34:32.079431 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:32.082033 containerd[1463]: time="2025-04-30T03:34:32.081981839Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Apr 30 03:34:32.175524 containerd[1463]: time="2025-04-30T03:34:32.175453491Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00\"" Apr 30 03:34:32.176268 containerd[1463]: time="2025-04-30T03:34:32.176073235Z" level=info msg="StartContainer for \"ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00\"" Apr 30 03:34:32.208795 systemd[1]: Started cri-containerd-ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00.scope - libcontainer container ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00. 
Apr 30 03:34:32.245360 containerd[1463]: time="2025-04-30T03:34:32.245299009Z" level=info msg="StartContainer for \"ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00\" returns successfully" Apr 30 03:34:32.302262 systemd[1]: cri-containerd-ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00.scope: Deactivated successfully. Apr 30 03:34:32.333706 containerd[1463]: time="2025-04-30T03:34:32.333514529Z" level=info msg="shim disconnected" id=ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00 namespace=k8s.io Apr 30 03:34:32.333706 containerd[1463]: time="2025-04-30T03:34:32.333610549Z" level=warning msg="cleaning up after shim disconnected" id=ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00 namespace=k8s.io Apr 30 03:34:32.333706 containerd[1463]: time="2025-04-30T03:34:32.333622803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:32.428423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c-rootfs.mount: Deactivated successfully. Apr 30 03:34:33.083958 kubelet[2493]: I0430 03:34:33.083898 2493 scope.go:117] "RemoveContainer" containerID="d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c" Apr 30 03:34:33.084516 kubelet[2493]: I0430 03:34:33.084291 2493 scope.go:117] "RemoveContainer" containerID="ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00" Apr 30 03:34:33.084516 kubelet[2493]: E0430 03:34:33.084373 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:33.084516 kubelet[2493]: E0430 03:34:33.084462 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-88dsr_calico-system(aa4158f9-ce84-423c-bcfa-632767bccf2c)\"" pod="calico-system/calico-node-88dsr" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" Apr 30 03:34:33.085824 containerd[1463]: time="2025-04-30T03:34:33.085451724Z" level=info msg="RemoveContainer for \"d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c\"" Apr 30 03:34:33.094112 containerd[1463]: time="2025-04-30T03:34:33.094056087Z" level=info msg="RemoveContainer for \"d700df38fc7ad2344d765d7b8907b8f3b2291585bac77c8c2f36f5a16c7f171c\" returns successfully" Apr 30 03:34:33.529216 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:60574.service - OpenSSH per-connection server daemon (10.0.0.1:60574). Apr 30 03:34:33.571230 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 60574 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:33.572928 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:33.577306 systemd-logind[1444]: New session 9 of user core. Apr 30 03:34:33.583717 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:34:33.711703 sshd[3764]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:33.716619 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:60574.service: Deactivated successfully. Apr 30 03:34:33.718650 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:34:33.719629 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:34:33.720655 systemd-logind[1444]: Removed session 9. 
Apr 30 03:34:36.700940 containerd[1463]: time="2025-04-30T03:34:36.700816548Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" Apr 30 03:34:36.701437 containerd[1463]: time="2025-04-30T03:34:36.700816528Z" level=info msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\"" Apr 30 03:34:36.730608 containerd[1463]: time="2025-04-30T03:34:36.730517053Z" level=error msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" failed" error="failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:36.730888 kubelet[2493]: E0430 03:34:36.730823 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:34:36.731250 kubelet[2493]: E0430 03:34:36.730896 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c"} Apr 30 03:34:36.731250 kubelet[2493]: E0430 03:34:36.730946 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:36.731250 kubelet[2493]: E0430 03:34:36.730976 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podUID="ddbf64df-5b81-40d9-b056-7dac1c53f65d" Apr 30 03:34:36.731634 containerd[1463]: time="2025-04-30T03:34:36.731599947Z" level=error msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\" failed" error="failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:36.731786 kubelet[2493]: E0430 03:34:36.731747 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:34:36.731832 kubelet[2493]: E0430 03:34:36.731784 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8"} Apr 30 03:34:36.731832 kubelet[2493]: E0430 03:34:36.731811 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:36.731911 kubelet[2493]: E0430 03:34:36.731835 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:37.701267 containerd[1463]: time="2025-04-30T03:34:37.701209773Z" level=info msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\"" Apr 30 03:34:37.810830 containerd[1463]: time="2025-04-30T03:34:37.810756117Z" level=error msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\" failed" error="failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:37.811102 kubelet[2493]: E0430 03:34:37.811054 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:34:37.811448 kubelet[2493]: E0430 03:34:37.811120 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af"} Apr 30 03:34:37.811448 kubelet[2493]: E0430 03:34:37.811160 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:37.811448 kubelet[2493]: E0430 03:34:37.811185 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" podUID="241582b1-4172-41e0-a757-624f1063d729" Apr 30 03:34:38.702129 containerd[1463]: time="2025-04-30T03:34:38.701343392Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" Apr 30 03:34:38.702129 containerd[1463]: time="2025-04-30T03:34:38.701419145Z" level=info msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\"" Apr 30 03:34:38.702129 containerd[1463]: time="2025-04-30T03:34:38.701963467Z" level=info msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\"" Apr 30 03:34:38.730726 containerd[1463]: time="2025-04-30T03:34:38.730672467Z" level=error msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" failed" error="failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:38.731152 kubelet[2493]: E0430 03:34:38.730947 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:34:38.731152 kubelet[2493]: E0430 03:34:38.731030 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666"} Apr 30 03:34:38.731152 kubelet[2493]: E0430 03:34:38.731085 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:38.731152 kubelet[2493]: E0430 03:34:38.731116 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podUID="0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281" Apr 30 03:34:38.745017 systemd[1]: Started sshd@9-10.0.0.146:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240). Apr 30 03:34:38.746098 containerd[1463]: time="2025-04-30T03:34:38.745733490Z" level=error msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\" failed" error="failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:38.746144 kubelet[2493]: E0430 03:34:38.746011 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:34:38.746144 kubelet[2493]: E0430 03:34:38.746070 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19"} Apr 30 03:34:38.746383 kubelet[2493]: E0430 03:34:38.746350 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:38.746518 kubelet[2493]: E0430 03:34:38.746401 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7jcld" podUID="cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32" Apr 30 03:34:38.748193 containerd[1463]: time="2025-04-30T03:34:38.748082271Z" level=error msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\" failed" error="failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:38.748286 kubelet[2493]: E0430 03:34:38.748217 2493 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:34:38.748344 kubelet[2493]: E0430 03:34:38.748289 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9"} Apr 30 03:34:38.748344 kubelet[2493]: E0430 03:34:38.748332 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:38.748436 kubelet[2493]: E0430 03:34:38.748357 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw6pw" podUID="5f3172f0-7cdf-426e-b2bc-b5e5053a3b93" Apr 30 03:34:38.778148 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:38.779767 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:38.784235 systemd-logind[1444]: New session 10 of user core. Apr 30 03:34:38.794872 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:34:38.976674 sshd[3919]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:38.981035 systemd[1]: sshd@9-10.0.0.146:22-10.0.0.1:49240.service: Deactivated successfully. Apr 30 03:34:38.983301 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:34:38.984108 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:34:38.985206 systemd-logind[1444]: Removed session 10. Apr 30 03:34:43.993163 systemd[1]: Started sshd@10-10.0.0.146:22-10.0.0.1:49242.service - OpenSSH per-connection server daemon (10.0.0.1:49242). Apr 30 03:34:44.030048 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 49242 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:44.032098 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:44.037161 systemd-logind[1444]: New session 11 of user core. Apr 30 03:34:44.043861 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:34:44.162635 sshd[3938]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:44.167234 systemd[1]: sshd@10-10.0.0.146:22-10.0.0.1:49242.service: Deactivated successfully. 
Apr 30 03:34:44.169922 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:34:44.170684 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:34:44.171701 systemd-logind[1444]: Removed session 11. Apr 30 03:34:47.700044 kubelet[2493]: I0430 03:34:47.699993 2493 scope.go:117] "RemoveContainer" containerID="ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00" Apr 30 03:34:47.700644 kubelet[2493]: E0430 03:34:47.700085 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:47.702692 containerd[1463]: time="2025-04-30T03:34:47.702652072Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Apr 30 03:34:48.014132 containerd[1463]: time="2025-04-30T03:34:48.013959832Z" level=info msg="CreateContainer within sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\"" Apr 30 03:34:48.014966 containerd[1463]: time="2025-04-30T03:34:48.014670186Z" level=info msg="StartContainer for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\"" Apr 30 03:34:48.056702 systemd[1]: Started cri-containerd-b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45.scope - libcontainer container b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45. Apr 30 03:34:48.095738 containerd[1463]: time="2025-04-30T03:34:48.095687500Z" level=info msg="StartContainer for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\" returns successfully" Apr 30 03:34:48.127018 kubelet[2493]: E0430 03:34:48.125852 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:48.147757 kubelet[2493]: I0430 03:34:48.147686 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-88dsr" podStartSLOduration=18.122272759 podStartE2EDuration="35.147639963s" podCreationTimestamp="2025-04-30 03:34:13 +0000 UTC" firstStartedPulling="2025-04-30 03:34:14.397101611 +0000 UTC m=+13.795990556" lastFinishedPulling="2025-04-30 03:34:31.422468805 +0000 UTC m=+30.821357760" observedRunningTime="2025-04-30 03:34:48.146011597 +0000 UTC m=+47.544900542" watchObservedRunningTime="2025-04-30 03:34:48.147639963 +0000 UTC m=+47.546528909" Apr 30 03:34:48.149867 systemd[1]: cri-containerd-b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45.scope: Deactivated successfully. Apr 30 03:34:48.184544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45-rootfs.mount: Deactivated successfully. 
Apr 30 03:34:48.189145 containerd[1463]: time="2025-04-30T03:34:48.189064675Z" level=error msg="ttrpc: received message on inactive stream" stream=45 Apr 30 03:34:48.189286 containerd[1463]: time="2025-04-30T03:34:48.189146268Z" level=info msg="shim disconnected" id=b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45 namespace=k8s.io Apr 30 03:34:48.189286 containerd[1463]: time="2025-04-30T03:34:48.189162990Z" level=warning msg="cleaning up after shim disconnected" id=b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45 namespace=k8s.io Apr 30 03:34:48.189286 containerd[1463]: time="2025-04-30T03:34:48.189172668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:48.189286 containerd[1463]: time="2025-04-30T03:34:48.189076858Z" level=error msg="Failed to delete exec process \"9f106620fe72859526c4e6c01361c268b5eb01bcfbc931d834418eee22edacaa\" for container \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\"" error="ttrpc: closed: unknown" Apr 30 03:34:48.192258 containerd[1463]: time="2025-04-30T03:34:48.192206553Z" level=error msg="ExecSync for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\" failed" error="failed to exec in container: failed to start exec \"9f106620fe72859526c4e6c01361c268b5eb01bcfbc931d834418eee22edacaa\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" Apr 30 03:34:48.192509 kubelet[2493]: E0430 03:34:48.192464 2493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"9f106620fe72859526c4e6c01361c268b5eb01bcfbc931d834418eee22edacaa\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 30 03:34:48.193549 containerd[1463]: time="2025-04-30T03:34:48.193491435Z" level=error msg="ExecSync for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45 not found: not found" Apr 30 03:34:48.193715 kubelet[2493]: E0430 03:34:48.193670 2493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45 not found: not found" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 30 03:34:48.194441 containerd[1463]: time="2025-04-30T03:34:48.194411001Z" level=error msg="ExecSync for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45 not found: not found" Apr 30 03:34:48.194653 kubelet[2493]: E0430 03:34:48.194597 2493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45 not found: not found" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" 
cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 30 03:34:49.130320 kubelet[2493]: I0430 03:34:49.130266 2493 scope.go:117] "RemoveContainer" containerID="ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00" Apr 30 03:34:49.130933 kubelet[2493]: I0430 03:34:49.130654 2493 scope.go:117] "RemoveContainer" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" Apr 30 03:34:49.130933 kubelet[2493]: E0430 03:34:49.130736 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:49.130933 kubelet[2493]: E0430 03:34:49.130852 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-88dsr_calico-system(aa4158f9-ce84-423c-bcfa-632767bccf2c)\"" pod="calico-system/calico-node-88dsr" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" Apr 30 03:34:49.131868 containerd[1463]: time="2025-04-30T03:34:49.131818753Z" level=info msg="RemoveContainer for \"ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00\"" Apr 30 03:34:49.135792 containerd[1463]: time="2025-04-30T03:34:49.135758568Z" level=info msg="RemoveContainer for \"ecaf911e662372728dd95b9f761bd3424c1035e0e51d73a2d71dd8e1fc6a2a00\" returns successfully" Apr 30 03:34:49.179079 systemd[1]: Started sshd@11-10.0.0.146:22-10.0.0.1:42744.service - OpenSSH per-connection server daemon (10.0.0.1:42744). Apr 30 03:34:49.213512 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 42744 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:49.215734 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:49.220470 systemd-logind[1444]: New session 12 of user core. Apr 30 03:34:49.229730 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:34:49.403377 sshd[4024]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:49.412841 systemd[1]: sshd@11-10.0.0.146:22-10.0.0.1:42744.service: Deactivated successfully. Apr 30 03:34:49.414937 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:34:49.416688 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:34:49.426035 systemd[1]: Started sshd@12-10.0.0.146:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746). Apr 30 03:34:49.427011 systemd-logind[1444]: Removed session 12. Apr 30 03:34:49.456976 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:49.458763 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:49.463466 systemd-logind[1444]: New session 13 of user core. Apr 30 03:34:49.474754 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:34:49.669907 sshd[4039]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:49.677713 systemd[1]: sshd@12-10.0.0.146:22-10.0.0.1:42746.service: Deactivated successfully. Apr 30 03:34:49.679799 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:34:49.681270 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:34:49.687042 systemd[1]: Started sshd@13-10.0.0.146:22-10.0.0.1:42752.service - OpenSSH per-connection server daemon (10.0.0.1:42752). 
Apr 30 03:34:49.688498 systemd-logind[1444]: Removed session 13. Apr 30 03:34:49.701138 containerd[1463]: time="2025-04-30T03:34:49.701059679Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" Apr 30 03:34:49.702038 containerd[1463]: time="2025-04-30T03:34:49.701059709Z" level=info msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\"" Apr 30 03:34:49.721766 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 42752 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:49.724764 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:49.730793 systemd-logind[1444]: New session 14 of user core. Apr 30 03:34:49.731349 containerd[1463]: time="2025-04-30T03:34:49.731302386Z" level=error msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" failed" error="failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:49.731707 kubelet[2493]: E0430 03:34:49.731633 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:34:49.731814 kubelet[2493]: E0430 03:34:49.731709 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c"} Apr 30 03:34:49.731814 kubelet[2493]: E0430 03:34:49.731760 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:49.731933 kubelet[2493]: E0430 03:34:49.731843 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podUID="ddbf64df-5b81-40d9-b056-7dac1c53f65d" Apr 30 03:34:49.734690 containerd[1463]: time="2025-04-30T03:34:49.734637397Z" level=error msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\" failed" error="failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:49.734873 kubelet[2493]: E0430 03:34:49.734819 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:34:49.734950 kubelet[2493]: E0430 03:34:49.734875 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8"} Apr 30 03:34:49.734950 kubelet[2493]: E0430 03:34:49.734916 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:49.735058 kubelet[2493]: E0430 03:34:49.734944 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:34:49.738737 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:34:49.876828 sshd[4052]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:49.880799 systemd[1]: sshd@13-10.0.0.146:22-10.0.0.1:42752.service: Deactivated successfully. Apr 30 03:34:49.883006 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:34:49.883704 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:34:49.884596 systemd-logind[1444]: Removed session 14. 
Apr 30 03:34:50.138741 kubelet[2493]: I0430 03:34:50.138698 2493 scope.go:117] "RemoveContainer" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" Apr 30 03:34:50.139266 kubelet[2493]: E0430 03:34:50.138797 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:34:50.139266 kubelet[2493]: E0430 03:34:50.138906 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-88dsr_calico-system(aa4158f9-ce84-423c-bcfa-632767bccf2c)\"" pod="calico-system/calico-node-88dsr" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" Apr 30 03:34:50.700912 containerd[1463]: time="2025-04-30T03:34:50.700869060Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" Apr 30 03:34:50.728377 containerd[1463]: time="2025-04-30T03:34:50.728320636Z" level=error msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" failed" error="failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:50.728660 kubelet[2493]: E0430 03:34:50.728606 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:34:50.728711 kubelet[2493]: E0430 03:34:50.728675 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666"} Apr 30 03:34:50.728737 kubelet[2493]: E0430 03:34:50.728715 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:50.728807 kubelet[2493]: E0430 03:34:50.728745 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podUID="0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281" Apr 30 03:34:51.700679 containerd[1463]: time="2025-04-30T03:34:51.700619199Z" 
level=info msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\"" Apr 30 03:34:51.700955 containerd[1463]: time="2025-04-30T03:34:51.700619419Z" level=info msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\"" Apr 30 03:34:51.728557 containerd[1463]: time="2025-04-30T03:34:51.728406403Z" level=error msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\" failed" error="failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:51.728843 kubelet[2493]: E0430 03:34:51.728746 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:34:51.729245 kubelet[2493]: E0430 03:34:51.728866 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19"} Apr 30 03:34:51.729245 kubelet[2493]: E0430 03:34:51.728910 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:51.729245 kubelet[2493]: E0430 03:34:51.728951 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7jcld" podUID="cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32" Apr 30 03:34:51.729924 containerd[1463]: time="2025-04-30T03:34:51.729836075Z" level=error msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\" failed" error="failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:51.730230 kubelet[2493]: E0430 03:34:51.730160 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:34:51.730230 kubelet[2493]: E0430 03:34:51.730227 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af"} Apr 30 03:34:51.730434 kubelet[2493]: E0430 03:34:51.730284 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:51.730434 kubelet[2493]: E0430 03:34:51.730314 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" podUID="241582b1-4172-41e0-a757-624f1063d729" Apr 30 03:34:52.700878 containerd[1463]: time="2025-04-30T03:34:52.700577469Z" level=info msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\"" Apr 30 03:34:52.730032 containerd[1463]: time="2025-04-30T03:34:52.729970475Z" level=error msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\" failed" error="failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:34:52.730481 kubelet[2493]: E0430 03:34:52.730254 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:34:52.730481 kubelet[2493]: E0430 03:34:52.730326 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9"} Apr 30 03:34:52.730481 kubelet[2493]: E0430 03:34:52.730372 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:34:52.730481 kubelet[2493]: E0430 03:34:52.730407 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw6pw" podUID="5f3172f0-7cdf-426e-b2bc-b5e5053a3b93" Apr 30 03:34:54.893169 systemd[1]: Started sshd@14-10.0.0.146:22-10.0.0.1:42754.service - OpenSSH per-connection server daemon (10.0.0.1:42754). Apr 30 03:34:54.926453 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 42754 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:34:54.928333 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:54.932374 systemd-logind[1444]: New session 15 of user core. Apr 30 03:34:54.948715 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:34:55.087839 sshd[4207]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:55.092020 systemd[1]: sshd@14-10.0.0.146:22-10.0.0.1:42754.service: Deactivated successfully. Apr 30 03:34:55.094203 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:34:55.095008 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:34:55.095905 systemd-logind[1444]: Removed session 15. Apr 30 03:35:00.108282 systemd[1]: Started sshd@15-10.0.0.146:22-10.0.0.1:43080.service - OpenSSH per-connection server daemon (10.0.0.1:43080). Apr 30 03:35:00.145207 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 43080 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:00.147338 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:00.152429 systemd-logind[1444]: New session 16 of user core. Apr 30 03:35:00.158938 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:35:00.327608 sshd[4221]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:00.330787 systemd[1]: sshd@15-10.0.0.146:22-10.0.0.1:43080.service: Deactivated successfully. Apr 30 03:35:00.333414 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:35:00.335183 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:35:00.336428 systemd-logind[1444]: Removed session 16. 
Apr 30 03:35:01.701281 containerd[1463]: time="2025-04-30T03:35:01.701180425Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" Apr 30 03:35:01.743438 containerd[1463]: time="2025-04-30T03:35:01.743353232Z" level=error msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" failed" error="failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:01.743701 kubelet[2493]: E0430 03:35:01.743651 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:35:01.744025 kubelet[2493]: E0430 03:35:01.743711 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666"} Apr 30 03:35:01.744025 kubelet[2493]: E0430 03:35:01.743746 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:01.744025 kubelet[2493]: E0430 03:35:01.743767 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podUID="0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281" Apr 30 03:35:02.700212 containerd[1463]: time="2025-04-30T03:35:02.700151960Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" Apr 30 03:35:02.727187 containerd[1463]: time="2025-04-30T03:35:02.727127811Z" level=error msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" failed" error="failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:02.727612 kubelet[2493]: E0430 03:35:02.727359 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:35:02.727612 kubelet[2493]: E0430 03:35:02.727420 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c"} Apr 30 03:35:02.727612 kubelet[2493]: E0430 03:35:02.727457 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:02.727612 kubelet[2493]: E0430 03:35:02.727481 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podUID="ddbf64df-5b81-40d9-b056-7dac1c53f65d" Apr 30 03:35:04.700853 kubelet[2493]: I0430 03:35:04.700627 2493 scope.go:117] "RemoveContainer" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" Apr 30 03:35:04.700853 kubelet[2493]: E0430 03:35:04.700715 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:04.700853 kubelet[2493]: E0430 03:35:04.700807 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-88dsr_calico-system(aa4158f9-ce84-423c-bcfa-632767bccf2c)\"" pod="calico-system/calico-node-88dsr" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" Apr 30 03:35:04.701553 containerd[1463]: time="2025-04-30T03:35:04.701157471Z" level=info msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\"" Apr 30 03:35:04.701553 containerd[1463]: time="2025-04-30T03:35:04.701523335Z" level=info msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\"" Apr 30 03:35:04.728542 containerd[1463]: time="2025-04-30T03:35:04.728478308Z" level=error msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\" failed" error="failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:04.728863 kubelet[2493]: E0430 03:35:04.728792 2493 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:35:04.728952 kubelet[2493]: E0430 03:35:04.728878 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8"} Apr 30 03:35:04.728952 kubelet[2493]: E0430 03:35:04.728922 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:04.729050 kubelet[2493]: E0430 03:35:04.728955 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16f36b79-0754-4e9a-854f-8a255aa4e23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pk55" podUID="16f36b79-0754-4e9a-854f-8a255aa4e23b" Apr 30 03:35:04.739335 containerd[1463]: time="2025-04-30T03:35:04.739289638Z" level=error msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\" failed" error="failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:04.739510 kubelet[2493]: E0430 03:35:04.739482 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:35:04.739557 kubelet[2493]: E0430 03:35:04.739513 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19"} Apr 30 03:35:04.739557 kubelet[2493]: E0430 03:35:04.739535 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:04.739557 kubelet[2493]: E0430 03:35:04.739552 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7jcld" podUID="cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32" Apr 30 03:35:05.339912 systemd[1]: Started sshd@16-10.0.0.146:22-10.0.0.1:43094.service - OpenSSH per-connection server daemon (10.0.0.1:43094). Apr 30 03:35:05.404523 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 43094 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:05.406610 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:05.411460 systemd-logind[1444]: New session 17 of user core. Apr 30 03:35:05.426803 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:35:05.547380 sshd[4329]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:05.551483 systemd[1]: sshd@16-10.0.0.146:22-10.0.0.1:43094.service: Deactivated successfully. Apr 30 03:35:05.554320 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:35:05.555236 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:35:05.556272 systemd-logind[1444]: Removed session 17. Apr 30 03:35:05.701167 containerd[1463]: time="2025-04-30T03:35:05.700693700Z" level=info msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\"" Apr 30 03:35:05.701167 containerd[1463]: time="2025-04-30T03:35:05.700763594Z" level=info msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\"" Apr 30 03:35:05.728970 containerd[1463]: time="2025-04-30T03:35:05.728891497Z" level=error msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\" failed" error="failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:05.729544 kubelet[2493]: E0430 03:35:05.729172 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:35:05.729544 kubelet[2493]: E0430 03:35:05.729246 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9"} Apr 30 03:35:05.729544 kubelet[2493]: E0430 03:35:05.729297 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:05.729544 kubelet[2493]: E0430 03:35:05.729331 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-sw6pw" podUID="5f3172f0-7cdf-426e-b2bc-b5e5053a3b93" Apr 30 03:35:05.733601 containerd[1463]: time="2025-04-30T03:35:05.733532965Z" level=error msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\" failed" error="failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:05.733833 kubelet[2493]: E0430 03:35:05.733786 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:35:05.733876 kubelet[2493]: E0430 03:35:05.733838 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af"} Apr 30 03:35:05.733900 kubelet[2493]: E0430 03:35:05.733875 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:05.733949 kubelet[2493]: E0430 03:35:05.733903 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241582b1-4172-41e0-a757-624f1063d729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" podUID="241582b1-4172-41e0-a757-624f1063d729" Apr 30 03:35:10.560742 systemd[1]: Started 
sshd@17-10.0.0.146:22-10.0.0.1:54438.service - OpenSSH per-connection server daemon (10.0.0.1:54438). Apr 30 03:35:10.602441 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 54438 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:10.604910 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:10.610626 systemd-logind[1444]: New session 18 of user core. Apr 30 03:35:10.623963 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:35:10.758969 sshd[4392]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:10.763912 systemd[1]: sshd@17-10.0.0.146:22-10.0.0.1:54438.service: Deactivated successfully. Apr 30 03:35:10.766662 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:35:10.767607 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:35:10.768754 systemd-logind[1444]: Removed session 18. Apr 30 03:35:12.700811 containerd[1463]: time="2025-04-30T03:35:12.700750966Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" Apr 30 03:35:12.726472 containerd[1463]: time="2025-04-30T03:35:12.726350623Z" level=error msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" failed" error="failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:12.726710 kubelet[2493]: E0430 03:35:12.726649 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:35:12.727110 kubelet[2493]: E0430 03:35:12.726721 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666"} Apr 30 03:35:12.727110 kubelet[2493]: E0430 03:35:12.726761 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:12.727110 kubelet[2493]: E0430 03:35:12.726784 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podUID="0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281" Apr 30 03:35:13.701015 containerd[1463]: time="2025-04-30T03:35:13.700939948Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" Apr 30 03:35:13.729873 containerd[1463]: time="2025-04-30T03:35:13.729811030Z" level=error msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" failed" error="failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:35:13.730099 kubelet[2493]: E0430 03:35:13.730052 2493 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:35:13.730451 kubelet[2493]: E0430 03:35:13.730111 2493 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c"} Apr 30 03:35:13.730451 kubelet[2493]: E0430 03:35:13.730148 2493 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:35:13.730451 kubelet[2493]: E0430 03:35:13.730174 2493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ddbf64df-5b81-40d9-b056-7dac1c53f65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podUID="ddbf64df-5b81-40d9-b056-7dac1c53f65d" Apr 30 03:35:14.075762 containerd[1463]: time="2025-04-30T03:35:14.075312166Z" level=info msg="StopPodSandbox for \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\"" Apr 30 03:35:14.083748 containerd[1463]: time="2025-04-30T03:35:14.083610111Z" level=info msg="Container to stop \"4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:35:14.083748 containerd[1463]: time="2025-04-30T03:35:14.083672159Z" level=info msg="Container to stop \"c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:35:14.083748 containerd[1463]: time="2025-04-30T03:35:14.083682980Z" level=info msg="Container to 
stop \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:35:14.087835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4-shm.mount: Deactivated successfully. Apr 30 03:35:14.094199 systemd[1]: cri-containerd-e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4.scope: Deactivated successfully. Apr 30 03:35:14.114778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4-rootfs.mount: Deactivated successfully. Apr 30 03:35:14.259980 containerd[1463]: time="2025-04-30T03:35:14.259834602Z" level=info msg="shim disconnected" id=e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4 namespace=k8s.io Apr 30 03:35:14.259980 containerd[1463]: time="2025-04-30T03:35:14.259888786Z" level=warning msg="cleaning up after shim disconnected" id=e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4 namespace=k8s.io Apr 30 03:35:14.259980 containerd[1463]: time="2025-04-30T03:35:14.259899456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:35:14.275652 containerd[1463]: time="2025-04-30T03:35:14.275604932Z" level=info msg="TearDown network for sandbox \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" successfully" Apr 30 03:35:14.275652 containerd[1463]: time="2025-04-30T03:35:14.275647714Z" level=info msg="StopPodSandbox for \"e4f3ad5d1193ff9d63fc16a6f690419b1a74a214431e53bb8392c7b58c77c8c4\" returns successfully" Apr 30 03:35:14.312007 kubelet[2493]: E0430 03:35:14.311936 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="install-cni" Apr 30 03:35:14.312007 kubelet[2493]: E0430 03:35:14.311968 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="flexvol-driver" Apr 30 03:35:14.312007 kubelet[2493]: E0430 03:35:14.311976 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="calico-node" Apr 30 03:35:14.312007 kubelet[2493]: E0430 03:35:14.311983 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="calico-node" Apr 30 03:35:14.312007 kubelet[2493]: I0430 03:35:14.312009 2493 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="calico-node" Apr 30 03:35:14.312007 kubelet[2493]: I0430 03:35:14.312015 2493 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="calico-node" Apr 30 03:35:14.312007 kubelet[2493]: E0430 03:35:14.312034 2493 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="calico-node" Apr 30 03:35:14.312478 kubelet[2493]: I0430 03:35:14.312051 2493 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" containerName="calico-node" Apr 30 03:35:14.318810 systemd[1]: Created slice kubepods-besteffort-pod43db2929_2a4d_4bca_8872_294c9612c61d.slice - libcontainer container kubepods-besteffort-pod43db2929_2a4d_4bca_8872_294c9612c61d.slice. 
Apr 30 03:35:14.406420 kubelet[2493]: I0430 03:35:14.406289 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htntj\" (UniqueName: \"kubernetes.io/projected/aa4158f9-ce84-423c-bcfa-632767bccf2c-kube-api-access-htntj\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407024 kubelet[2493]: I0430 03:35:14.406709 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4158f9-ce84-423c-bcfa-632767bccf2c-tigera-ca-bundle\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407024 kubelet[2493]: I0430 03:35:14.406814 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-run-calico\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407024 kubelet[2493]: I0430 03:35:14.406834 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-net-dir\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407024 kubelet[2493]: I0430 03:35:14.406847 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-bin-dir\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407024 kubelet[2493]: I0430 03:35:14.406862 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-flexvol-driver-host\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407024 kubelet[2493]: I0430 03:35:14.406876 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-xtables-lock\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407200 kubelet[2493]: I0430 03:35:14.406888 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-lib-calico\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407200 kubelet[2493]: I0430 03:35:14.406903 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-log-dir\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407200 kubelet[2493]: I0430 03:35:14.406918 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-lib-modules\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407200 kubelet[2493]: I0430 
03:35:14.406937 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa4158f9-ce84-423c-bcfa-632767bccf2c-node-certs\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407200 kubelet[2493]: I0430 03:35:14.406948 2493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-policysync\") pod \"aa4158f9-ce84-423c-bcfa-632767bccf2c\" (UID: \"aa4158f9-ce84-423c-bcfa-632767bccf2c\") " Apr 30 03:35:14.407200 kubelet[2493]: I0430 03:35:14.406978 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtj4t\" (UniqueName: \"kubernetes.io/projected/43db2929-2a4d-4bca-8872-294c9612c61d-kube-api-access-mtj4t\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407383 kubelet[2493]: I0430 03:35:14.406997 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-cni-net-dir\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407383 kubelet[2493]: I0430 03:35:14.407012 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-flexvol-driver-host\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407383 kubelet[2493]: I0430 03:35:14.407027 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43db2929-2a4d-4bca-8872-294c9612c61d-tigera-ca-bundle\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407383 kubelet[2493]: I0430 03:35:14.407042 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/43db2929-2a4d-4bca-8872-294c9612c61d-node-certs\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407383 kubelet[2493]: I0430 03:35:14.407057 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-var-run-calico\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407536 kubelet[2493]: I0430 03:35:14.407071 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-var-lib-calico\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407536 kubelet[2493]: I0430 03:35:14.407085 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-lib-modules\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407536 kubelet[2493]: I0430 03:35:14.407099 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-cni-bin-dir\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407536 kubelet[2493]: I0430 03:35:14.407117 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-cni-log-dir\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.407536 kubelet[2493]: I0430 03:35:14.407131 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-xtables-lock\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.408974 kubelet[2493]: I0430 03:35:14.407146 2493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/43db2929-2a4d-4bca-8872-294c9612c61d-policysync\") pod \"calico-node-gjxkw\" (UID: \"43db2929-2a4d-4bca-8872-294c9612c61d\") " pod="calico-system/calico-node-gjxkw" Apr 30 03:35:14.408974 kubelet[2493]: I0430 03:35:14.407202 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-policysync" (OuterVolumeSpecName: "policysync") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.408974 kubelet[2493]: I0430 03:35:14.407318 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.408974 kubelet[2493]: I0430 03:35:14.407345 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.408974 kubelet[2493]: I0430 03:35:14.407365 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.409108 kubelet[2493]: I0430 03:35:14.407478 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.409108 kubelet[2493]: I0430 03:35:14.407511 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.409108 kubelet[2493]: I0430 03:35:14.407534 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.409108 kubelet[2493]: I0430 03:35:14.407559 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.409108 kubelet[2493]: I0430 03:35:14.407650 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:35:14.410860 kubelet[2493]: I0430 03:35:14.410816 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4158f9-ce84-423c-bcfa-632767bccf2c-node-certs" (OuterVolumeSpecName: "node-certs") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:35:14.412681 systemd[1]: var-lib-kubelet-pods-aa4158f9\x2dce84\x2d423c\x2dbcfa\x2d632767bccf2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhtntj.mount: Deactivated successfully. Apr 30 03:35:14.412810 systemd[1]: var-lib-kubelet-pods-aa4158f9\x2dce84\x2d423c\x2dbcfa\x2d632767bccf2c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Apr 30 03:35:14.414220 kubelet[2493]: I0430 03:35:14.414187 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4158f9-ce84-423c-bcfa-632767bccf2c-kube-api-access-htntj" (OuterVolumeSpecName: "kube-api-access-htntj") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "kube-api-access-htntj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:35:14.414910 kubelet[2493]: I0430 03:35:14.414891 2493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4158f9-ce84-423c-bcfa-632767bccf2c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "aa4158f9-ce84-423c-bcfa-632767bccf2c" (UID: "aa4158f9-ce84-423c-bcfa-632767bccf2c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:35:14.416150 systemd[1]: var-lib-kubelet-pods-aa4158f9\x2dce84\x2d423c\x2dbcfa\x2d632767bccf2c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Apr 30 03:35:14.507939 kubelet[2493]: I0430 03:35:14.507854 2493 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508099 2493 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508120 2493 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508129 2493 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508139 2493 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508149 2493 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-policysync\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508158 2493 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508168 2493 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-var-run-calico\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.508859 kubelet[2493]: I0430 03:35:14.508177 2493 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa4158f9-ce84-423c-bcfa-632767bccf2c-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.509143 kubelet[2493]: I0430 03:35:14.508188 2493 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa4158f9-ce84-423c-bcfa-632767bccf2c-node-certs\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.509143 kubelet[2493]: I0430 03:35:14.508222 2493 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aa4158f9-ce84-423c-bcfa-632767bccf2c-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.509143 kubelet[2493]: I0430 03:35:14.508232 2493 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-htntj\" (UniqueName: \"kubernetes.io/projected/aa4158f9-ce84-423c-bcfa-632767bccf2c-kube-api-access-htntj\") on node \"localhost\" DevicePath \"\"" Apr 30 03:35:14.623930 kubelet[2493]: E0430 03:35:14.623871 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:14.624532 containerd[1463]: time="2025-04-30T03:35:14.624483305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gjxkw,Uid:43db2929-2a4d-4bca-8872-294c9612c61d,Namespace:calico-system,Attempt:0,}" Apr 30 03:35:14.707898 systemd[1]: Removed slice kubepods-besteffort-podaa4158f9_ce84_423c_bcfa_632767bccf2c.slice - libcontainer container kubepods-besteffort-podaa4158f9_ce84_423c_bcfa_632767bccf2c.slice. Apr 30 03:35:14.827361 containerd[1463]: time="2025-04-30T03:35:14.827187443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:14.827361 containerd[1463]: time="2025-04-30T03:35:14.827286112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:14.827361 containerd[1463]: time="2025-04-30T03:35:14.827299377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:14.827920 containerd[1463]: time="2025-04-30T03:35:14.827395612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:14.847711 systemd[1]: Started cri-containerd-e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1.scope - libcontainer container e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1. 
Apr 30 03:35:14.872223 containerd[1463]: time="2025-04-30T03:35:14.872167113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gjxkw,Uid:43db2929-2a4d-4bca-8872-294c9612c61d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\"" Apr 30 03:35:14.873022 kubelet[2493]: E0430 03:35:14.872984 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:14.875061 containerd[1463]: time="2025-04-30T03:35:14.874945639Z" level=info msg="CreateContainer within sandbox \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:35:15.189701 kubelet[2493]: I0430 03:35:15.189665 2493 scope.go:117] "RemoveContainer" containerID="b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45" Apr 30 03:35:15.191741 containerd[1463]: time="2025-04-30T03:35:15.191706650Z" level=info msg="RemoveContainer for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\"" Apr 30 03:35:15.220169 containerd[1463]: time="2025-04-30T03:35:15.220121222Z" level=info msg="CreateContainer within sandbox \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e\"" Apr 30 03:35:15.221796 containerd[1463]: time="2025-04-30T03:35:15.220791623Z" level=info msg="StartContainer for \"ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e\"" Apr 30 03:35:15.252729 systemd[1]: Started cri-containerd-ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e.scope - libcontainer container ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e. Apr 30 03:35:15.386821 containerd[1463]: time="2025-04-30T03:35:15.386742378Z" level=info msg="StartContainer for \"ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e\" returns successfully" Apr 30 03:35:15.415969 containerd[1463]: time="2025-04-30T03:35:15.415911792Z" level=info msg="RemoveContainer for \"b04917ac125608b9a8276590fd6eb85e2575ed0b89158f3aad0ff1936e499d45\" returns successfully" Apr 30 03:35:15.416223 kubelet[2493]: I0430 03:35:15.416179 2493 scope.go:117] "RemoveContainer" containerID="4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2" Apr 30 03:35:15.417556 containerd[1463]: time="2025-04-30T03:35:15.417517011Z" level=info msg="RemoveContainer for \"4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2\"" Apr 30 03:35:15.487971 systemd[1]: cri-containerd-ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e.scope: Deactivated successfully. Apr 30 03:35:15.510723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e-rootfs.mount: Deactivated successfully. 
Apr 30 03:35:15.677799 containerd[1463]: time="2025-04-30T03:35:15.677739973Z" level=info msg="RemoveContainer for \"4fa2f1364996f2124c142a415640524de68525bf6693bf5e4c54035003e455c2\" returns successfully" Apr 30 03:35:15.678120 kubelet[2493]: I0430 03:35:15.678077 2493 scope.go:117] "RemoveContainer" containerID="c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa" Apr 30 03:35:15.679415 containerd[1463]: time="2025-04-30T03:35:15.679365621Z" level=info msg="RemoveContainer for \"c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa\"" Apr 30 03:35:15.772095 systemd[1]: Started sshd@18-10.0.0.146:22-10.0.0.1:54446.service - OpenSSH per-connection server daemon (10.0.0.1:54446). Apr 30 03:35:15.838667 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 54446 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:15.840325 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:15.844271 systemd-logind[1444]: New session 19 of user core. Apr 30 03:35:15.855762 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:35:16.038210 containerd[1463]: time="2025-04-30T03:35:16.037993267Z" level=info msg="shim disconnected" id=ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e namespace=k8s.io Apr 30 03:35:16.038210 containerd[1463]: time="2025-04-30T03:35:16.038050516Z" level=warning msg="cleaning up after shim disconnected" id=ddc6e5155a831b43a66e2e3378f4042ef3eac55b577fc4f95435ab37d26ed82e namespace=k8s.io Apr 30 03:35:16.038210 containerd[1463]: time="2025-04-30T03:35:16.038058933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:35:16.039215 sshd[4581]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:16.043300 systemd[1]: sshd@18-10.0.0.146:22-10.0.0.1:54446.service: Deactivated successfully. Apr 30 03:35:16.045424 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:35:16.046335 containerd[1463]: time="2025-04-30T03:35:16.046284073Z" level=info msg="RemoveContainer for \"c9f9996af344c6de113a83db3d757e76934053c249912b56a0d0043d8aef6baa\" returns successfully" Apr 30 03:35:16.047353 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:35:16.048483 systemd-logind[1444]: Removed session 19. 
Apr 30 03:35:16.195095 kubelet[2493]: E0430 03:35:16.194928 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:16.196451 containerd[1463]: time="2025-04-30T03:35:16.196414253Z" level=info msg="CreateContainer within sandbox \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:35:16.215266 containerd[1463]: time="2025-04-30T03:35:16.215214563Z" level=info msg="CreateContainer within sandbox \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd\"" Apr 30 03:35:16.216437 containerd[1463]: time="2025-04-30T03:35:16.216393696Z" level=info msg="StartContainer for \"f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd\"" Apr 30 03:35:16.248725 systemd[1]: Started cri-containerd-f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd.scope - libcontainer container f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd. Apr 30 03:35:16.279920 containerd[1463]: time="2025-04-30T03:35:16.279873967Z" level=info msg="StartContainer for \"f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd\" returns successfully" Apr 30 03:35:16.703340 kubelet[2493]: I0430 03:35:16.703278 2493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4158f9-ce84-423c-bcfa-632767bccf2c" path="/var/lib/kubelet/pods/aa4158f9-ce84-423c-bcfa-632767bccf2c/volumes" Apr 30 03:35:16.801822 systemd[1]: cri-containerd-f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd.scope: Deactivated successfully. Apr 30 03:35:16.822790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd-rootfs.mount: Deactivated successfully. 
Apr 30 03:35:16.967685 containerd[1463]: time="2025-04-30T03:35:16.967482347Z" level=info msg="shim disconnected" id=f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd namespace=k8s.io Apr 30 03:35:16.967685 containerd[1463]: time="2025-04-30T03:35:16.967549244Z" level=warning msg="cleaning up after shim disconnected" id=f30b1a5dca685df46015eebdd23f938967e341bab4984970156523e2ac5248dd namespace=k8s.io Apr 30 03:35:16.967685 containerd[1463]: time="2025-04-30T03:35:16.967558082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:35:17.199706 kubelet[2493]: E0430 03:35:17.199672 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:17.208008 containerd[1463]: time="2025-04-30T03:35:17.207966348Z" level=info msg="CreateContainer within sandbox \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:35:17.224838 containerd[1463]: time="2025-04-30T03:35:17.224728447Z" level=info msg="CreateContainer within sandbox \"e2f1bc163a0f762ea74a2bddf4f4fe236e5af4fcfabb0bb706052b015cc59df1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"22471a22194fb7be4e79606052d7618b1c77816476874d47e512c293795b31df\"" Apr 30 03:35:17.225395 containerd[1463]: time="2025-04-30T03:35:17.225347890Z" level=info msg="StartContainer for \"22471a22194fb7be4e79606052d7618b1c77816476874d47e512c293795b31df\"" Apr 30 03:35:17.256717 systemd[1]: Started cri-containerd-22471a22194fb7be4e79606052d7618b1c77816476874d47e512c293795b31df.scope - libcontainer container 22471a22194fb7be4e79606052d7618b1c77816476874d47e512c293795b31df. Apr 30 03:35:17.288118 containerd[1463]: time="2025-04-30T03:35:17.288061006Z" level=info msg="StartContainer for \"22471a22194fb7be4e79606052d7618b1c77816476874d47e512c293795b31df\" returns successfully" Apr 30 03:35:17.701220 containerd[1463]: time="2025-04-30T03:35:17.701150039Z" level=info msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\"" Apr 30 03:35:17.701392 containerd[1463]: time="2025-04-30T03:35:17.701338339Z" level=info msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\"" Apr 30 03:35:17.701692 containerd[1463]: time="2025-04-30T03:35:17.701660665Z" level=info msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\"" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.762 [INFO][4784] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.763 [INFO][4784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" iface="eth0" netns="/var/run/netns/cni-ad1b0d64-02d7-fc89-1244-2b6cc6744f97" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" iface="eth0" netns="/var/run/netns/cni-ad1b0d64-02d7-fc89-1244-2b6cc6744f97" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" iface="eth0" netns="/var/run/netns/cni-ad1b0d64-02d7-fc89-1244-2b6cc6744f97" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.787 [INFO][4807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" HandleID="k8s-pod-network.3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Workload="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.787 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.787 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.794 [WARNING][4807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" HandleID="k8s-pod-network.3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Workload="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.794 [INFO][4807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" HandleID="k8s-pod-network.3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Workload="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.796 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:17.801111 containerd[1463]: 2025-04-30 03:35:17.799 [INFO][4784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af" Apr 30 03:35:17.801555 containerd[1463]: time="2025-04-30T03:35:17.801269911Z" level=info msg="TearDown network for sandbox \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\" successfully" Apr 30 03:35:17.801555 containerd[1463]: time="2025-04-30T03:35:17.801296913Z" level=info msg="StopPodSandbox for \"3f89fdd8e88f2190abf8d9123a08480c1266a1dfa3db01e92bf8984cf02739af\" returns successfully" Apr 30 03:35:17.802073 containerd[1463]: time="2025-04-30T03:35:17.802047977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-5gqn4,Uid:241582b1-4172-41e0-a757-624f1063d729,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.762 [INFO][4782] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.763 [INFO][4782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" iface="eth0" netns="/var/run/netns/cni-4e7f95fa-2679-551d-8a45-c913210bb8b0" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.763 [INFO][4782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" iface="eth0" netns="/var/run/netns/cni-4e7f95fa-2679-551d-8a45-c913210bb8b0" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" iface="eth0" netns="/var/run/netns/cni-4e7f95fa-2679-551d-8a45-c913210bb8b0" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4782] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.791 [INFO][4806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" HandleID="k8s-pod-network.7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Workload="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.791 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.796 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.801 [WARNING][4806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" HandleID="k8s-pod-network.7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Workload="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.801 [INFO][4806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" HandleID="k8s-pod-network.7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Workload="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.803 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:17.807687 containerd[1463]: 2025-04-30 03:35:17.805 [INFO][4782] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8" Apr 30 03:35:17.808438 containerd[1463]: time="2025-04-30T03:35:17.807930394Z" level=info msg="TearDown network for sandbox \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\" successfully" Apr 30 03:35:17.808438 containerd[1463]: time="2025-04-30T03:35:17.807959239Z" level=info msg="StopPodSandbox for \"7ca45c4c48dfa89e15899e3d628426ed39da18a0038ef4980e2febeb8c26eae8\" returns successfully" Apr 30 03:35:17.808896 containerd[1463]: time="2025-04-30T03:35:17.808864638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pk55,Uid:16f36b79-0754-4e9a-854f-8a255aa4e23b,Namespace:calico-system,Attempt:1,}" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.763 [INFO][4783] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.764 [INFO][4783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" iface="eth0" netns="/var/run/netns/cni-db48a592-b51a-ba6d-111f-c787057d9572" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.765 [INFO][4783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" iface="eth0" netns="/var/run/netns/cni-db48a592-b51a-ba6d-111f-c787057d9572" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.765 [INFO][4783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" iface="eth0" netns="/var/run/netns/cni-db48a592-b51a-ba6d-111f-c787057d9572" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.765 [INFO][4783] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.765 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.795 [INFO][4809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" HandleID="k8s-pod-network.81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Workload="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.796 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.803 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.808 [WARNING][4809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" HandleID="k8s-pod-network.81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Workload="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.808 [INFO][4809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" HandleID="k8s-pod-network.81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Workload="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.809 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:17.814284 containerd[1463]: 2025-04-30 03:35:17.811 [INFO][4783] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19" Apr 30 03:35:17.814701 containerd[1463]: time="2025-04-30T03:35:17.814466239Z" level=info msg="TearDown network for sandbox \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\" successfully" Apr 30 03:35:17.814701 containerd[1463]: time="2025-04-30T03:35:17.814491457Z" level=info msg="StopPodSandbox for \"81bd285956304a0a4ae78a456d5808190565179e86d89b215dae5b8e21a91b19\" returns successfully" Apr 30 03:35:17.814897 kubelet[2493]: E0430 03:35:17.814869 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:17.815243 containerd[1463]: time="2025-04-30T03:35:17.815214248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7jcld,Uid:cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32,Namespace:kube-system,Attempt:1,}" Apr 30 03:35:17.945332 systemd-networkd[1393]: cali2b880680f34: Link UP Apr 30 03:35:17.945575 systemd-networkd[1393]: cali2b880680f34: Gained carrier Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.845 [INFO][4829] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.857 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0 calico-apiserver-6fc8df768b- calico-apiserver 241582b1-4172-41e0-a757-624f1063d729 1055 0 2025-04-30 03:34:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc8df768b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fc8df768b-5gqn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b880680f34 [] []}} ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.857 [INFO][4829] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.892 
[INFO][4868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" HandleID="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Workload="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.905 [INFO][4868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" HandleID="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Workload="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fc8df768b-5gqn4", "timestamp":"2025-04-30 03:35:17.892401613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.905 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.905 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.905 [INFO][4868] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.908 [INFO][4868] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.912 [INFO][4868] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.919 [INFO][4868] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.921 [INFO][4868] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.923 [INFO][4868] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.923 [INFO][4868] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.924 [INFO][4868] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493 Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.928 [INFO][4868] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.934 [INFO][4868] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.934 
[INFO][4868] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" host="localhost" Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.934 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:17.962890 containerd[1463]: 2025-04-30 03:35:17.934 [INFO][4868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" HandleID="k8s-pod-network.f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Workload="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.964746 containerd[1463]: 2025-04-30 03:35:17.937 [INFO][4829] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0", GenerateName:"calico-apiserver-6fc8df768b-", Namespace:"calico-apiserver", SelfLink:"", UID:"241582b1-4172-41e0-a757-624f1063d729", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc8df768b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fc8df768b-5gqn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b880680f34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:17.964746 containerd[1463]: 2025-04-30 03:35:17.937 [INFO][4829] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.964746 containerd[1463]: 2025-04-30 03:35:17.937 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b880680f34 ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.964746 containerd[1463]: 2025-04-30 03:35:17.945 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" 
Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.964746 containerd[1463]: 2025-04-30 03:35:17.946 [INFO][4829] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0", GenerateName:"calico-apiserver-6fc8df768b-", Namespace:"calico-apiserver", SelfLink:"", UID:"241582b1-4172-41e0-a757-624f1063d729", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc8df768b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493", Pod:"calico-apiserver-6fc8df768b-5gqn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b880680f34", MAC:"52:28:21:95:0d:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:17.964746 containerd[1463]: 2025-04-30 03:35:17.958 [INFO][4829] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-5gqn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--5gqn4-eth0" Apr 30 03:35:17.997714 containerd[1463]: time="2025-04-30T03:35:17.997287889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:17.997714 containerd[1463]: time="2025-04-30T03:35:17.997378802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:17.997714 containerd[1463]: time="2025-04-30T03:35:17.997407297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:17.997714 containerd[1463]: time="2025-04-30T03:35:17.997544870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:18.034057 systemd[1]: Started cri-containerd-f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493.scope - libcontainer container f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493. 
Apr 30 03:35:18.045158 systemd-networkd[1393]: calia81da8db88d: Link UP Apr 30 03:35:18.045848 systemd-networkd[1393]: calia81da8db88d: Gained carrier Apr 30 03:35:18.055097 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.854 [INFO][4842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.867 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7pk55-eth0 csi-node-driver- calico-system 16f36b79-0754-4e9a-854f-8a255aa4e23b 1056 0 2025-04-30 03:34:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7pk55 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia81da8db88d [] []}} ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.867 [INFO][4842] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.907 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" HandleID="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Workload="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.918 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" HandleID="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Workload="localhost-k8s-csi--node--driver--7pk55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000434830), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7pk55", "timestamp":"2025-04-30 03:35:17.907153282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.918 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.935 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:17.935 [INFO][4876] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.008 [INFO][4876] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.013 [INFO][4876] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.022 [INFO][4876] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.023 [INFO][4876] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.025 [INFO][4876] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.025 [INFO][4876] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.026 [INFO][4876] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609 Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.030 [INFO][4876] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.037 [INFO][4876] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.037 [INFO][4876] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" host="localhost" Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.037 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:35:18.058529 containerd[1463]: 2025-04-30 03:35:18.037 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" HandleID="k8s-pod-network.e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Workload="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.059842 containerd[1463]: 2025-04-30 03:35:18.041 [INFO][4842] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pk55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16f36b79-0754-4e9a-854f-8a255aa4e23b", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7pk55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia81da8db88d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:18.059842 containerd[1463]: 2025-04-30 03:35:18.041 [INFO][4842] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.059842 containerd[1463]: 2025-04-30 03:35:18.041 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia81da8db88d ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.059842 containerd[1463]: 2025-04-30 03:35:18.045 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.059842 containerd[1463]: 2025-04-30 03:35:18.046 [INFO][4842] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7pk55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16f36b79-0754-4e9a-854f-8a255aa4e23b", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609", Pod:"csi-node-driver-7pk55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia81da8db88d", MAC:"92:aa:43:0e:08:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:18.059842 containerd[1463]: 2025-04-30 03:35:18.055 [INFO][4842] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609" Namespace="calico-system" Pod="csi-node-driver-7pk55" WorkloadEndpoint="localhost-k8s-csi--node--driver--7pk55-eth0" Apr 30 03:35:18.086948 containerd[1463]: time="2025-04-30T03:35:18.086894170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-5gqn4,Uid:241582b1-4172-41e0-a757-624f1063d729,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493\"" Apr 30 03:35:18.088112 containerd[1463]: time="2025-04-30T03:35:18.086858182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:18.088112 containerd[1463]: time="2025-04-30T03:35:18.086913788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:18.088112 containerd[1463]: time="2025-04-30T03:35:18.086924097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:18.088254 containerd[1463]: time="2025-04-30T03:35:18.086995403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:18.088663 containerd[1463]: time="2025-04-30T03:35:18.088550453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:35:18.112891 systemd[1]: Started cri-containerd-e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609.scope - libcontainer container e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609. 
Apr 30 03:35:18.123904 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:35:18.137601 containerd[1463]: time="2025-04-30T03:35:18.137536053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pk55,Uid:16f36b79-0754-4e9a-854f-8a255aa4e23b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609\"" Apr 30 03:35:18.145710 systemd-networkd[1393]: califc9f0aa70c0: Link UP Apr 30 03:35:18.146149 systemd-networkd[1393]: califc9f0aa70c0: Gained carrier Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:17.871 [INFO][4855] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:17.883 [INFO][4855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--7jcld-eth0 coredns-6f6b679f8f- kube-system cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32 1057 0 2025-04-30 03:34:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-7jcld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc9f0aa70c0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:17.883 [INFO][4855] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:17.929 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" HandleID="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Workload="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.004 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" HandleID="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Workload="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000434fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-7jcld", "timestamp":"2025-04-30 03:35:17.929807376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.004 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.038 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.038 [INFO][4883] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.108 [INFO][4883] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.112 [INFO][4883] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.119 [INFO][4883] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.124 [INFO][4883] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.126 [INFO][4883] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.126 [INFO][4883] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.128 [INFO][4883] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643 Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.131 [INFO][4883] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.139 [INFO][4883] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.139 [INFO][4883] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" host="localhost" Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.139 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:35:18.159273 containerd[1463]: 2025-04-30 03:35:18.139 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" HandleID="k8s-pod-network.c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Workload="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.160094 containerd[1463]: 2025-04-30 03:35:18.142 [INFO][4855] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7jcld-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-7jcld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc9f0aa70c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:18.160094 containerd[1463]: 2025-04-30 03:35:18.143 [INFO][4855] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.160094 containerd[1463]: 2025-04-30 03:35:18.143 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc9f0aa70c0 ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.160094 containerd[1463]: 2025-04-30 03:35:18.145 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.160094 containerd[1463]: 2025-04-30 03:35:18.147 
[INFO][4855] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7jcld-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643", Pod:"coredns-6f6b679f8f-7jcld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc9f0aa70c0", MAC:"0a:26:18:df:f6:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:18.160094 containerd[1463]: 2025-04-30 03:35:18.156 [INFO][4855] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643" Namespace="kube-system" Pod="coredns-6f6b679f8f-7jcld" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7jcld-eth0" Apr 30 03:35:18.180681 containerd[1463]: time="2025-04-30T03:35:18.180029677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:18.180681 containerd[1463]: time="2025-04-30T03:35:18.180660090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:18.180681 containerd[1463]: time="2025-04-30T03:35:18.180673356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:18.180795 containerd[1463]: time="2025-04-30T03:35:18.180753729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:18.203735 systemd[1]: Started cri-containerd-c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643.scope - libcontainer container c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643. 
Apr 30 03:35:18.205187 kubelet[2493]: E0430 03:35:18.205162 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:18.223754 systemd[1]: run-netns-cni\x2d4e7f95fa\x2d2679\x2d551d\x2d8a45\x2dc913210bb8b0.mount: Deactivated successfully. Apr 30 03:35:18.223890 systemd[1]: run-netns-cni\x2dad1b0d64\x2d02d7\x2dfc89\x2d1244\x2d2b6cc6744f97.mount: Deactivated successfully. Apr 30 03:35:18.223984 systemd[1]: run-netns-cni\x2ddb48a592\x2db51a\x2dba6d\x2d111f\x2dc787057d9572.mount: Deactivated successfully. Apr 30 03:35:18.232489 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:35:18.260448 containerd[1463]: time="2025-04-30T03:35:18.260401283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7jcld,Uid:cf00a3ce-f78a-4b3e-b1fb-0e2e59ff5a32,Namespace:kube-system,Attempt:1,} returns sandbox id \"c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643\"" Apr 30 03:35:18.261301 kubelet[2493]: E0430 03:35:18.261237 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:18.263665 containerd[1463]: time="2025-04-30T03:35:18.263636750Z" level=info msg="CreateContainer within sandbox \"c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:35:18.282466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009913546.mount: Deactivated successfully. Apr 30 03:35:18.286522 containerd[1463]: time="2025-04-30T03:35:18.286447123Z" level=info msg="CreateContainer within sandbox \"c8f1d35a7af31f9b09310d80e620a9c04537241747f5c66f6e13abc0be555643\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7cd949c0468bef7d5fef4f387c930b58b2d0846ed5192efeb4351db1d4597de\"" Apr 30 03:35:18.287544 containerd[1463]: time="2025-04-30T03:35:18.287152850Z" level=info msg="StartContainer for \"b7cd949c0468bef7d5fef4f387c930b58b2d0846ed5192efeb4351db1d4597de\"" Apr 30 03:35:18.315813 systemd[1]: Started cri-containerd-b7cd949c0468bef7d5fef4f387c930b58b2d0846ed5192efeb4351db1d4597de.scope - libcontainer container b7cd949c0468bef7d5fef4f387c930b58b2d0846ed5192efeb4351db1d4597de. Apr 30 03:35:18.534405 containerd[1463]: time="2025-04-30T03:35:18.534279326Z" level=info msg="StartContainer for \"b7cd949c0468bef7d5fef4f387c930b58b2d0846ed5192efeb4351db1d4597de\" returns successfully" Apr 30 03:35:19.009619 kernel: bpftool[5249]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:35:19.212539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267291759.mount: Deactivated successfully. 
Apr 30 03:35:19.214540 kubelet[2493]: E0430 03:35:19.213825 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:19.215838 kubelet[2493]: E0430 03:35:19.215676 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:19.228593 kubelet[2493]: I0430 03:35:19.228507 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gjxkw" podStartSLOduration=5.228489627 podStartE2EDuration="5.228489627s" podCreationTimestamp="2025-04-30 03:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:35:18.22538779 +0000 UTC m=+77.624276765" watchObservedRunningTime="2025-04-30 03:35:19.228489627 +0000 UTC m=+78.627378572" Apr 30 03:35:19.247357 systemd[1]: run-containerd-runc-k8s.io-22471a22194fb7be4e79606052d7618b1c77816476874d47e512c293795b31df-runc.jSgx9z.mount: Deactivated successfully. Apr 30 03:35:19.250828 kubelet[2493]: I0430 03:35:19.248231 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7jcld" podStartSLOduration=74.248210934 podStartE2EDuration="1m14.248210934s" podCreationTimestamp="2025-04-30 03:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:35:19.229101895 +0000 UTC m=+78.627990861" watchObservedRunningTime="2025-04-30 03:35:19.248210934 +0000 UTC m=+78.647099879" Apr 30 03:35:19.307856 systemd-networkd[1393]: vxlan.calico: Link UP Apr 30 03:35:19.307867 systemd-networkd[1393]: vxlan.calico: Gained carrier Apr 30 03:35:19.807858 systemd-networkd[1393]: cali2b880680f34: Gained IPv6LL Apr 30 03:35:19.871857 systemd-networkd[1393]: calia81da8db88d: Gained IPv6LL Apr 30 03:35:20.191749 systemd-networkd[1393]: califc9f0aa70c0: Gained IPv6LL Apr 30 03:35:20.216036 kubelet[2493]: E0430 03:35:20.215983 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:20.575897 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Apr 30 03:35:20.701810 containerd[1463]: time="2025-04-30T03:35:20.701570355Z" level=info msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\"" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.801 [INFO][5376] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.802 [INFO][5376] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" iface="eth0" netns="/var/run/netns/cni-c631bbb3-a659-958f-342a-317d16a8ce18" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.803 [INFO][5376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" iface="eth0" netns="/var/run/netns/cni-c631bbb3-a659-958f-342a-317d16a8ce18" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.804 [INFO][5376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" iface="eth0" netns="/var/run/netns/cni-c631bbb3-a659-958f-342a-317d16a8ce18" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.804 [INFO][5376] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.804 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.838 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" HandleID="k8s-pod-network.a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Workload="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.839 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.839 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.846 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" HandleID="k8s-pod-network.a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Workload="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.846 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" HandleID="k8s-pod-network.a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Workload="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.849 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:20.855505 containerd[1463]: 2025-04-30 03:35:20.852 [INFO][5376] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9" Apr 30 03:35:20.857890 containerd[1463]: time="2025-04-30T03:35:20.857828257Z" level=info msg="TearDown network for sandbox \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\" successfully" Apr 30 03:35:20.857890 containerd[1463]: time="2025-04-30T03:35:20.857869807Z" level=info msg="StopPodSandbox for \"a79b55c07d0446c33276a2d92de383e8d2a20cefb64fffe0947cf9f3cf8425e9\" returns successfully" Apr 30 03:35:20.858273 kubelet[2493]: E0430 03:35:20.858236 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:20.858962 systemd[1]: run-netns-cni\x2dc631bbb3\x2da659\x2d958f\x2d342a\x2d317d16a8ce18.mount: Deactivated successfully. 
Apr 30 03:35:20.859411 containerd[1463]: time="2025-04-30T03:35:20.859190616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw6pw,Uid:5f3172f0-7cdf-426e-b2bc-b5e5053a3b93,Namespace:kube-system,Attempt:1,}" Apr 30 03:35:20.916298 containerd[1463]: time="2025-04-30T03:35:20.916225377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:20.917474 containerd[1463]: time="2025-04-30T03:35:20.917417421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:35:20.918766 containerd[1463]: time="2025-04-30T03:35:20.918699096Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:20.922229 containerd[1463]: time="2025-04-30T03:35:20.922054096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:20.922740 containerd[1463]: time="2025-04-30T03:35:20.922706050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.834059454s" Apr 30 03:35:20.922818 containerd[1463]: time="2025-04-30T03:35:20.922741658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:35:20.924744 containerd[1463]: time="2025-04-30T03:35:20.924710614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:35:20.926464 containerd[1463]: time="2025-04-30T03:35:20.926429373Z" level=info msg="CreateContainer within sandbox \"f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:35:20.944468 containerd[1463]: time="2025-04-30T03:35:20.944414191Z" level=info msg="CreateContainer within sandbox \"f684eddec8b18314ef5fcbe4a9178d8fc0797c22f0413c16d1301d9c91764493\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f14e55511914b2d6000b89d5a5ba7a4a9d92c5a163c77666ec06f32237afc3aa\"" Apr 30 03:35:20.945437 containerd[1463]: time="2025-04-30T03:35:20.945409370Z" level=info msg="StartContainer for \"f14e55511914b2d6000b89d5a5ba7a4a9d92c5a163c77666ec06f32237afc3aa\"" Apr 30 03:35:20.980866 systemd[1]: Started cri-containerd-f14e55511914b2d6000b89d5a5ba7a4a9d92c5a163c77666ec06f32237afc3aa.scope - libcontainer container f14e55511914b2d6000b89d5a5ba7a4a9d92c5a163c77666ec06f32237afc3aa. 
Apr 30 03:35:21.029284 systemd-networkd[1393]: caliaf51cb8dbbb: Link UP Apr 30 03:35:21.029790 systemd-networkd[1393]: caliaf51cb8dbbb: Gained carrier Apr 30 03:35:21.045357 containerd[1463]: time="2025-04-30T03:35:21.045269268Z" level=info msg="StartContainer for \"f14e55511914b2d6000b89d5a5ba7a4a9d92c5a163c77666ec06f32237afc3aa\" returns successfully" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.951 [INFO][5396] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0 coredns-6f6b679f8f- kube-system 5f3172f0-7cdf-426e-b2bc-b5e5053a3b93 1116 0 2025-04-30 03:34:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-sw6pw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaf51cb8dbbb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.951 [INFO][5396] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.983 [INFO][5419] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" HandleID="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Workload="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.992 [INFO][5419] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" HandleID="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Workload="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030c330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-sw6pw", "timestamp":"2025-04-30 03:35:20.98356181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.992 [INFO][5419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.992 [INFO][5419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.992 [INFO][5419] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.994 [INFO][5419] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:20.997 [INFO][5419] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.002 [INFO][5419] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.004 [INFO][5419] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.005 [INFO][5419] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.005 [INFO][5419] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.007 [INFO][5419] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948 Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.012 [INFO][5419] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.020 [INFO][5419] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.020 [INFO][5419] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" host="localhost" Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.020 [INFO][5419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:35:21.054635 containerd[1463]: 2025-04-30 03:35:21.020 [INFO][5419] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" HandleID="k8s-pod-network.0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Workload="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.055205 containerd[1463]: 2025-04-30 03:35:21.024 [INFO][5396] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-sw6pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf51cb8dbbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:21.055205 containerd[1463]: 2025-04-30 03:35:21.024 [INFO][5396] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.055205 containerd[1463]: 2025-04-30 03:35:21.024 [INFO][5396] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf51cb8dbbb ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.055205 containerd[1463]: 2025-04-30 03:35:21.030 [INFO][5396] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.055205 containerd[1463]: 2025-04-30 03:35:21.031 
[INFO][5396] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5f3172f0-7cdf-426e-b2bc-b5e5053a3b93", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948", Pod:"coredns-6f6b679f8f-sw6pw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf51cb8dbbb", MAC:"26:9a:5f:13:de:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:21.055205 containerd[1463]: 2025-04-30 03:35:21.044 [INFO][5396] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948" Namespace="kube-system" Pod="coredns-6f6b679f8f-sw6pw" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--sw6pw-eth0" Apr 30 03:35:21.058977 systemd[1]: Started sshd@19-10.0.0.146:22-10.0.0.1:49978.service - OpenSSH per-connection server daemon (10.0.0.1:49978). Apr 30 03:35:21.092641 containerd[1463]: time="2025-04-30T03:35:21.092490050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:21.092641 containerd[1463]: time="2025-04-30T03:35:21.092548402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:21.092641 containerd[1463]: time="2025-04-30T03:35:21.092559272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:21.092820 containerd[1463]: time="2025-04-30T03:35:21.092663080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:21.110174 sshd[5458]: Accepted publickey for core from 10.0.0.1 port 49978 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:21.112442 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:21.113761 systemd[1]: Started cri-containerd-0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948.scope - libcontainer container 0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948. Apr 30 03:35:21.118754 systemd-logind[1444]: New session 20 of user core. Apr 30 03:35:21.120882 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:35:21.132297 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:35:21.160161 containerd[1463]: time="2025-04-30T03:35:21.160114908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sw6pw,Uid:5f3172f0-7cdf-426e-b2bc-b5e5053a3b93,Namespace:kube-system,Attempt:1,} returns sandbox id \"0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948\"" Apr 30 03:35:21.161063 kubelet[2493]: E0430 03:35:21.160861 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:21.164310 containerd[1463]: time="2025-04-30T03:35:21.164273278Z" level=info msg="CreateContainer within sandbox \"0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:35:21.212484 containerd[1463]: time="2025-04-30T03:35:21.212235663Z" level=info msg="CreateContainer within sandbox \"0ca4866e240eef20bfa741a730e34a022e13d06432e919fec27be2c5df2c4948\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcb5740ee8ab88a3037ee84664e7f472837f775ec296e65a16ac598480b6ef0c\"" Apr 30 03:35:21.214549 containerd[1463]: time="2025-04-30T03:35:21.214517093Z" level=info msg="StartContainer for \"dcb5740ee8ab88a3037ee84664e7f472837f775ec296e65a16ac598480b6ef0c\"" Apr 30 03:35:21.234280 kubelet[2493]: E0430 03:35:21.234246 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:21.238402 kubelet[2493]: I0430 03:35:21.238324 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fc8df768b-5gqn4" podStartSLOduration=65.402630837 podStartE2EDuration="1m8.238283311s" podCreationTimestamp="2025-04-30 03:34:13 +0000 UTC" firstStartedPulling="2025-04-30 03:35:18.088326214 +0000 UTC m=+77.487215159" lastFinishedPulling="2025-04-30 03:35:20.923978688 +0000 UTC m=+80.322867633" observedRunningTime="2025-04-30 03:35:21.237791474 +0000 UTC m=+80.636680409" watchObservedRunningTime="2025-04-30 03:35:21.238283311 +0000 UTC m=+80.637172256" Apr 30 03:35:21.255928 systemd[1]: Started cri-containerd-dcb5740ee8ab88a3037ee84664e7f472837f775ec296e65a16ac598480b6ef0c.scope - libcontainer container dcb5740ee8ab88a3037ee84664e7f472837f775ec296e65a16ac598480b6ef0c. Apr 30 03:35:21.365746 sshd[5458]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:21.370898 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. 
Apr 30 03:35:21.371661 systemd[1]: sshd@19-10.0.0.146:22-10.0.0.1:49978.service: Deactivated successfully. Apr 30 03:35:21.374975 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:35:21.376120 systemd-logind[1444]: Removed session 20. Apr 30 03:35:21.403298 containerd[1463]: time="2025-04-30T03:35:21.403238400Z" level=info msg="StartContainer for \"dcb5740ee8ab88a3037ee84664e7f472837f775ec296e65a16ac598480b6ef0c\" returns successfully" Apr 30 03:35:22.111799 systemd-networkd[1393]: caliaf51cb8dbbb: Gained IPv6LL Apr 30 03:35:22.238138 kubelet[2493]: E0430 03:35:22.238094 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:23.240653 kubelet[2493]: I0430 03:35:23.240566 2493 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:35:23.241549 kubelet[2493]: E0430 03:35:23.240924 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:23.440977 kubelet[2493]: I0430 03:35:23.440891 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sw6pw" podStartSLOduration=78.440847499 podStartE2EDuration="1m18.440847499s" podCreationTimestamp="2025-04-30 03:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:35:22.545484325 +0000 UTC m=+81.944373270" watchObservedRunningTime="2025-04-30 03:35:23.440847499 +0000 UTC m=+82.839736444" Apr 30 03:35:23.700524 containerd[1463]: time="2025-04-30T03:35:23.700476893Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\"" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.857 [INFO][5591] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.857 [INFO][5591] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" iface="eth0" netns="/var/run/netns/cni-cf26c80b-9576-0ce9-5f1e-c93f4b96bd7a" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.858 [INFO][5591] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" iface="eth0" netns="/var/run/netns/cni-cf26c80b-9576-0ce9-5f1e-c93f4b96bd7a" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.858 [INFO][5591] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" iface="eth0" netns="/var/run/netns/cni-cf26c80b-9576-0ce9-5f1e-c93f4b96bd7a" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.858 [INFO][5591] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.858 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.877 [INFO][5601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" HandleID="k8s-pod-network.9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Workload="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.877 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.877 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.883 [WARNING][5601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" HandleID="k8s-pod-network.9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Workload="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.884 [INFO][5601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" HandleID="k8s-pod-network.9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Workload="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.885 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:23.890927 containerd[1463]: 2025-04-30 03:35:23.887 [INFO][5591] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666" Apr 30 03:35:23.891629 containerd[1463]: time="2025-04-30T03:35:23.891442724Z" level=info msg="TearDown network for sandbox \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" successfully" Apr 30 03:35:23.891629 containerd[1463]: time="2025-04-30T03:35:23.891479754Z" level=info msg="StopPodSandbox for \"9221d5d0c182676ba461e98bfc40d4596eacd8d395e1a4e2eb77c7cd25a49666\" returns successfully" Apr 30 03:35:23.892301 containerd[1463]: time="2025-04-30T03:35:23.892279959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669b88b944-thr8d,Uid:0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281,Namespace:calico-system,Attempt:1,}" Apr 30 03:35:23.893731 systemd[1]: run-netns-cni\x2dcf26c80b\x2d9576\x2d0ce9\x2d5f1e\x2dc93f4b96bd7a.mount: Deactivated successfully. 
Apr 30 03:35:24.242931 kubelet[2493]: E0430 03:35:24.242895 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:24.860132 systemd-networkd[1393]: calibed4c4a4d3c: Link UP Apr 30 03:35:24.860411 systemd-networkd[1393]: calibed4c4a4d3c: Gained carrier Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.744 [INFO][5616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0 calico-kube-controllers-669b88b944- calico-system 0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281 1148 0 2025-04-30 03:34:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:669b88b944 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-669b88b944-thr8d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibed4c4a4d3c [] []}} ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.744 [INFO][5616] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.778 [INFO][5631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" HandleID="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Workload="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.788 [INFO][5631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" HandleID="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Workload="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-669b88b944-thr8d", "timestamp":"2025-04-30 03:35:24.778864206 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.788 [INFO][5631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.788 [INFO][5631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.789 [INFO][5631] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.791 [INFO][5631] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.795 [INFO][5631] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.799 [INFO][5631] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.800 [INFO][5631] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.802 [INFO][5631] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.802 [INFO][5631] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.803 [INFO][5631] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6 Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.843 [INFO][5631] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.854 [INFO][5631] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.854 [INFO][5631] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" host="localhost" Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.854 [INFO][5631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:35:24.889142 containerd[1463]: 2025-04-30 03:35:24.854 [INFO][5631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" HandleID="k8s-pod-network.c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Workload="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.894572 containerd[1463]: 2025-04-30 03:35:24.857 [INFO][5616] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0", GenerateName:"calico-kube-controllers-669b88b944-", Namespace:"calico-system", SelfLink:"", UID:"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669b88b944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-669b88b944-thr8d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibed4c4a4d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:24.894572 containerd[1463]: 2025-04-30 03:35:24.858 [INFO][5616] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.894572 containerd[1463]: 2025-04-30 03:35:24.858 [INFO][5616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibed4c4a4d3c ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.894572 containerd[1463]: 2025-04-30 03:35:24.860 [INFO][5616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.894572 containerd[1463]: 2025-04-30 03:35:24.861 [INFO][5616] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0", GenerateName:"calico-kube-controllers-669b88b944-", Namespace:"calico-system", SelfLink:"", UID:"0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669b88b944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6", Pod:"calico-kube-controllers-669b88b944-thr8d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibed4c4a4d3c", MAC:"7e:d9:40:86:6a:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:24.894572 containerd[1463]: 2025-04-30 03:35:24.881 [INFO][5616] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6" Namespace="calico-system" Pod="calico-kube-controllers-669b88b944-thr8d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669b88b944--thr8d-eth0" Apr 30 03:35:24.915563 containerd[1463]: time="2025-04-30T03:35:24.915455738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:24.915563 containerd[1463]: time="2025-04-30T03:35:24.915513979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:24.915763 containerd[1463]: time="2025-04-30T03:35:24.915661821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:24.915868 containerd[1463]: time="2025-04-30T03:35:24.915833077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:24.942722 systemd[1]: Started cri-containerd-c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6.scope - libcontainer container c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6. 
Apr 30 03:35:24.962623 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:35:24.988558 containerd[1463]: time="2025-04-30T03:35:24.988495194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669b88b944-thr8d,Uid:0a7a292c-5ed8-4ffb-8ca7-ff54dcfc3281,Namespace:calico-system,Attempt:1,} returns sandbox id \"c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6\"" Apr 30 03:35:25.246505 kubelet[2493]: E0430 03:35:25.246349 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:25.713269 containerd[1463]: time="2025-04-30T03:35:25.713203654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:25.766481 containerd[1463]: time="2025-04-30T03:35:25.766379491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:35:25.768425 containerd[1463]: time="2025-04-30T03:35:25.768361284Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:25.772726 containerd[1463]: time="2025-04-30T03:35:25.772685717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:25.773420 containerd[1463]: time="2025-04-30T03:35:25.773388244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 4.848646621s" Apr 30 03:35:25.773462 containerd[1463]: time="2025-04-30T03:35:25.773422228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:35:25.775084 containerd[1463]: time="2025-04-30T03:35:25.774850508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:35:25.776317 containerd[1463]: time="2025-04-30T03:35:25.776288055Z" level=info msg="CreateContainer within sandbox \"e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:35:25.802695 containerd[1463]: time="2025-04-30T03:35:25.802648099Z" level=info msg="CreateContainer within sandbox \"e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6693e1bf892f68df25ce3abd4027a1e9d66bb865ad7c1608a87fcff199569373\"" Apr 30 03:35:25.803221 containerd[1463]: time="2025-04-30T03:35:25.803198367Z" level=info msg="StartContainer for \"6693e1bf892f68df25ce3abd4027a1e9d66bb865ad7c1608a87fcff199569373\"" Apr 30 03:35:25.832786 systemd[1]: Started cri-containerd-6693e1bf892f68df25ce3abd4027a1e9d66bb865ad7c1608a87fcff199569373.scope - libcontainer container 6693e1bf892f68df25ce3abd4027a1e9d66bb865ad7c1608a87fcff199569373. 
Apr 30 03:35:25.868665 containerd[1463]: time="2025-04-30T03:35:25.868546327Z" level=info msg="StartContainer for \"6693e1bf892f68df25ce3abd4027a1e9d66bb865ad7c1608a87fcff199569373\" returns successfully" Apr 30 03:35:26.378767 systemd[1]: Started sshd@20-10.0.0.146:22-10.0.0.1:49986.service - OpenSSH per-connection server daemon (10.0.0.1:49986). Apr 30 03:35:26.417176 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 49986 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:26.418850 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:26.422570 systemd-logind[1444]: New session 21 of user core. Apr 30 03:35:26.431731 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:35:26.548554 sshd[5734]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:26.563170 systemd[1]: sshd@20-10.0.0.146:22-10.0.0.1:49986.service: Deactivated successfully. Apr 30 03:35:26.565338 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:35:26.567058 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:35:26.576853 systemd[1]: Started sshd@21-10.0.0.146:22-10.0.0.1:52308.service - OpenSSH per-connection server daemon (10.0.0.1:52308). Apr 30 03:35:26.577769 systemd-logind[1444]: Removed session 21. Apr 30 03:35:26.606340 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 52308 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:26.608052 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:26.612231 systemd-logind[1444]: New session 22 of user core. Apr 30 03:35:26.621761 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:35:26.701439 kubelet[2493]: E0430 03:35:26.700899 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:26.702125 containerd[1463]: time="2025-04-30T03:35:26.702079065Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\"" Apr 30 03:35:26.847774 systemd-networkd[1393]: calibed4c4a4d3c: Gained IPv6LL Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.915 [INFO][5772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.915 [INFO][5772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" iface="eth0" netns="/var/run/netns/cni-2c99edc2-438f-f84b-416f-3e06a6dff473" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.915 [INFO][5772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" iface="eth0" netns="/var/run/netns/cni-2c99edc2-438f-f84b-416f-3e06a6dff473" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.915 [INFO][5772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" iface="eth0" netns="/var/run/netns/cni-2c99edc2-438f-f84b-416f-3e06a6dff473" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.915 [INFO][5772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.915 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.936 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" HandleID="k8s-pod-network.7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Workload="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.937 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.937 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.942 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" HandleID="k8s-pod-network.7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Workload="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.942 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" HandleID="k8s-pod-network.7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Workload="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.944 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:35:26.950645 containerd[1463]: 2025-04-30 03:35:26.947 [INFO][5772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c" Apr 30 03:35:26.951925 containerd[1463]: time="2025-04-30T03:35:26.951798141Z" level=info msg="TearDown network for sandbox \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" successfully" Apr 30 03:35:26.951925 containerd[1463]: time="2025-04-30T03:35:26.951867202Z" level=info msg="StopPodSandbox for \"7c66771cb4d737b31b78973ad85565aced1e9dc9d5f97b368db95e166434946c\" returns successfully" Apr 30 03:35:26.953098 containerd[1463]: time="2025-04-30T03:35:26.953063670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-znvmt,Uid:ddbf64df-5b81-40d9-b056-7dac1c53f65d,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:35:26.953731 systemd[1]: run-netns-cni\x2d2c99edc2\x2d438f\x2df84b\x2d416f\x2d3e06a6dff473.mount: Deactivated successfully. 
Apr 30 03:35:27.120019 systemd-networkd[1393]: cali95ae7356cab: Link UP Apr 30 03:35:27.122062 systemd-networkd[1393]: cali95ae7356cab: Gained carrier Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.038 [INFO][5790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0 calico-apiserver-6fc8df768b- calico-apiserver ddbf64df-5b81-40d9-b056-7dac1c53f65d 1178 0 2025-04-30 03:34:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc8df768b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fc8df768b-znvmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95ae7356cab [] []}} ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.038 [INFO][5790] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.074 [INFO][5805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" HandleID="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Workload="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.082 [INFO][5805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" HandleID="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Workload="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002897d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fc8df768b-znvmt", "timestamp":"2025-04-30 03:35:27.074554091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.082 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.082 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.082 [INFO][5805] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.085 [INFO][5805] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.088 [INFO][5805] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.093 [INFO][5805] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.095 [INFO][5805] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.097 [INFO][5805] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.097 [INFO][5805] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.099 [INFO][5805] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663 Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.103 [INFO][5805] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.114 [INFO][5805] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.114 [INFO][5805] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" host="localhost" Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.114 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
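The IPAM sequence above (look up the host's block affinity, load the 192.168.88.128/26 block, claim the next free address) is Calico's per-host block-affinity model at work. The Go sketch below is not Calico's allocator, which also records handles and does compare-and-swap writes against the datastore; it only illustrates the "first free address inside the affine block" step, using the addresses that appear in this log (192.168.88.133 is taken above; earlier addresses in the block are assumed taken for the sketch):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first address in block that is not marked used.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // block affine to host "localhost"
        used := map[netip.Addr]bool{}
        // Mark .128-.133 as allocated (only .133 is visible above; the rest are assumed).
        for a := block.Addr(); a.Less(netip.MustParseAddr("192.168.88.134")); a = a.Next() {
            used[a] = true
        }
        ip, _ := nextFree(block, used)
        fmt.Println(ip) // 192.168.88.134, the address claimed in the log
    }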
Apr 30 03:35:27.139072 containerd[1463]: 2025-04-30 03:35:27.114 [INFO][5805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" HandleID="k8s-pod-network.650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Workload="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.139731 containerd[1463]: 2025-04-30 03:35:27.117 [INFO][5790] cni-plugin/k8s.go 386: Populated endpoint ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0", GenerateName:"calico-apiserver-6fc8df768b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddbf64df-5b81-40d9-b056-7dac1c53f65d", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc8df768b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fc8df768b-znvmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95ae7356cab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:27.139731 containerd[1463]: 2025-04-30 03:35:27.117 [INFO][5790] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.139731 containerd[1463]: 2025-04-30 03:35:27.117 [INFO][5790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95ae7356cab ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.139731 containerd[1463]: 2025-04-30 03:35:27.120 [INFO][5790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.139731 containerd[1463]: 2025-04-30 03:35:27.121 [INFO][5790] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0", GenerateName:"calico-apiserver-6fc8df768b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ddbf64df-5b81-40d9-b056-7dac1c53f65d", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc8df768b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663", Pod:"calico-apiserver-6fc8df768b-znvmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95ae7356cab", MAC:"12:18:9c:76:2c:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:35:27.139731 containerd[1463]: 2025-04-30 03:35:27.133 [INFO][5790] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663" Namespace="calico-apiserver" Pod="calico-apiserver-6fc8df768b-znvmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fc8df768b--znvmt-eth0" Apr 30 03:35:27.166341 containerd[1463]: time="2025-04-30T03:35:27.165536420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:35:27.166341 containerd[1463]: time="2025-04-30T03:35:27.166213127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:35:27.166341 containerd[1463]: time="2025-04-30T03:35:27.166228046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:27.166846 containerd[1463]: time="2025-04-30T03:35:27.166330350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:35:27.182075 sshd[5748]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:27.198844 systemd[1]: sshd@21-10.0.0.146:22-10.0.0.1:52308.service: Deactivated successfully. Apr 30 03:35:27.201178 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:35:27.203214 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:35:27.210763 systemd[1]: Started cri-containerd-650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663.scope - libcontainer container 650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663. 
Apr 30 03:35:27.212618 systemd[1]: Started sshd@22-10.0.0.146:22-10.0.0.1:52318.service - OpenSSH per-connection server daemon (10.0.0.1:52318). Apr 30 03:35:27.214419 systemd-logind[1444]: Removed session 22. Apr 30 03:35:27.226224 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:35:27.251198 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 52318 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:27.251990 containerd[1463]: time="2025-04-30T03:35:27.251941532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc8df768b-znvmt,Uid:ddbf64df-5b81-40d9-b056-7dac1c53f65d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663\"" Apr 30 03:35:27.253838 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:27.256091 containerd[1463]: time="2025-04-30T03:35:27.256054807Z" level=info msg="CreateContainer within sandbox \"650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:35:27.259234 systemd-logind[1444]: New session 23 of user core. Apr 30 03:35:27.263753 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:35:27.272910 containerd[1463]: time="2025-04-30T03:35:27.272864897Z" level=info msg="CreateContainer within sandbox \"650be7c7284b6c63687ec43afadddca16487306b204434c8c36dd1992995f663\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aea9f8c2f802b7f5b35f636e4890f010001a36a9ecf5a7cb803943621bbc6be5\"" Apr 30 03:35:27.273725 containerd[1463]: time="2025-04-30T03:35:27.273685649Z" level=info msg="StartContainer for \"aea9f8c2f802b7f5b35f636e4890f010001a36a9ecf5a7cb803943621bbc6be5\"" Apr 30 03:35:27.308852 systemd[1]: Started cri-containerd-aea9f8c2f802b7f5b35f636e4890f010001a36a9ecf5a7cb803943621bbc6be5.scope - libcontainer container aea9f8c2f802b7f5b35f636e4890f010001a36a9ecf5a7cb803943621bbc6be5. 
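At this point the calico-apiserver container (aea9f8c2f802...) has been created inside sandbox 650be7c7284b... and handed to the runc shim; the cri-containerd-<id>.scope unit above is the cgroup systemd tracks for it. On a node like this, the same container could be examined through the CRI with crictl (hypothetical invocations, IDs abbreviated from the log):

    crictl ps --name calico-apiserver
    crictl inspect aea9f8c2f802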
Apr 30 03:35:27.367814 containerd[1463]: time="2025-04-30T03:35:27.367743800Z" level=info msg="StartContainer for \"aea9f8c2f802b7f5b35f636e4890f010001a36a9ecf5a7cb803943621bbc6be5\" returns successfully" Apr 30 03:35:28.312896 kubelet[2493]: I0430 03:35:28.312806 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fc8df768b-znvmt" podStartSLOduration=75.312782227 podStartE2EDuration="1m15.312782227s" podCreationTimestamp="2025-04-30 03:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:35:28.312676366 +0000 UTC m=+87.711565311" watchObservedRunningTime="2025-04-30 03:35:28.312782227 +0000 UTC m=+87.711671172" Apr 30 03:35:29.023821 systemd-networkd[1393]: cali95ae7356cab: Gained IPv6LL Apr 30 03:35:29.162240 containerd[1463]: time="2025-04-30T03:35:29.162169131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:29.233856 containerd[1463]: time="2025-04-30T03:35:29.233767992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 03:35:29.257102 containerd[1463]: time="2025-04-30T03:35:29.256944505Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:29.307411 containerd[1463]: time="2025-04-30T03:35:29.307331810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:29.308029 containerd[1463]: time="2025-04-30T03:35:29.307974592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.533087926s" Apr 30 03:35:29.308029 containerd[1463]: time="2025-04-30T03:35:29.308022112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 03:35:29.309822 containerd[1463]: time="2025-04-30T03:35:29.309796236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:35:29.322124 containerd[1463]: time="2025-04-30T03:35:29.322057977Z" level=info msg="CreateContainer within sandbox \"c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 03:35:29.456362 containerd[1463]: time="2025-04-30T03:35:29.456288623Z" level=info msg="CreateContainer within sandbox \"c77d2370e4f07683670940ee85d08f7f26156aa1adb7196ff3238e6cff31d6e6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"52f26371e5391fe9d764edb48d1ec0142d8363f189754e006ebbfba284fab930\"" Apr 30 03:35:29.457131 containerd[1463]: time="2025-04-30T03:35:29.456981771Z" level=info msg="StartContainer for \"52f26371e5391fe9d764edb48d1ec0142d8363f189754e006ebbfba284fab930\"" Apr 30 
03:35:29.514812 systemd[1]: Started cri-containerd-52f26371e5391fe9d764edb48d1ec0142d8363f189754e006ebbfba284fab930.scope - libcontainer container 52f26371e5391fe9d764edb48d1ec0142d8363f189754e006ebbfba284fab930. Apr 30 03:35:29.683345 containerd[1463]: time="2025-04-30T03:35:29.683183689Z" level=info msg="StartContainer for \"52f26371e5391fe9d764edb48d1ec0142d8363f189754e006ebbfba284fab930\" returns successfully" Apr 30 03:35:29.700287 kubelet[2493]: E0430 03:35:29.700252 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:29.744774 sshd[5856]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:29.756725 systemd[1]: Started sshd@23-10.0.0.146:22-10.0.0.1:52320.service - OpenSSH per-connection server daemon (10.0.0.1:52320). Apr 30 03:35:29.757149 systemd[1]: sshd@22-10.0.0.146:22-10.0.0.1:52318.service: Deactivated successfully. Apr 30 03:35:29.759825 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:35:29.760536 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:35:29.762026 systemd-logind[1444]: Removed session 23. Apr 30 03:35:29.808300 sshd[5978]: Accepted publickey for core from 10.0.0.1 port 52320 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:29.810723 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:29.818207 systemd-logind[1444]: New session 24 of user core. Apr 30 03:35:29.829761 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 03:35:30.237051 sshd[5978]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:30.249097 systemd[1]: sshd@23-10.0.0.146:22-10.0.0.1:52320.service: Deactivated successfully. Apr 30 03:35:30.251486 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:35:30.252637 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:35:30.263087 systemd[1]: Started sshd@24-10.0.0.146:22-10.0.0.1:52322.service - OpenSSH per-connection server daemon (10.0.0.1:52322). Apr 30 03:35:30.265808 systemd-logind[1444]: Removed session 24. Apr 30 03:35:30.285764 kubelet[2493]: I0430 03:35:30.285066 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-669b88b944-thr8d" podStartSLOduration=72.96611447 podStartE2EDuration="1m17.285016424s" podCreationTimestamp="2025-04-30 03:34:13 +0000 UTC" firstStartedPulling="2025-04-30 03:35:24.990166307 +0000 UTC m=+84.389055252" lastFinishedPulling="2025-04-30 03:35:29.309068261 +0000 UTC m=+88.707957206" observedRunningTime="2025-04-30 03:35:30.284964865 +0000 UTC m=+89.683853820" watchObservedRunningTime="2025-04-30 03:35:30.285016424 +0000 UTC m=+89.683905369" Apr 30 03:35:30.294749 sshd[5993]: Accepted publickey for core from 10.0.0.1 port 52322 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:30.296899 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:30.303023 systemd-logind[1444]: New session 25 of user core. Apr 30 03:35:30.312874 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:35:30.481771 sshd[5993]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:30.485880 systemd[1]: sshd@24-10.0.0.146:22-10.0.0.1:52322.service: Deactivated successfully. 
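The pod_startup_latency_tracker entry above reports two durations for calico-kube-controllers-669b88b944-thr8d, and the numbers are consistent with the SLO figure being the end-to-end startup time minus the image-pull window. A small Go check of that arithmetic, with the timestamps copied from the entry (a sketch of the bookkeeping, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-04-30T03:34:13Z")             // podCreationTimestamp
        running := parse("2025-04-30T03:35:30.285016424Z")   // watchObservedRunningTime
        pullStart := parse("2025-04-30T03:35:24.990166307Z") // firstStartedPulling
        pullEnd := parse("2025-04-30T03:35:29.309068261Z")   // lastFinishedPulling

        e2e := running.Sub(created)         // 1m17.285016424s == podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 1m12.96611447s  == podStartSLOduration
        fmt.Println(e2e, slo)
    }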
Apr 30 03:35:30.488077 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:35:30.488886 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Apr 30 03:35:30.489851 systemd-logind[1444]: Removed session 25. Apr 30 03:35:31.833473 containerd[1463]: time="2025-04-30T03:35:31.833418937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:31.834293 containerd[1463]: time="2025-04-30T03:35:31.834243343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:35:31.835620 containerd[1463]: time="2025-04-30T03:35:31.835572118Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:31.837965 containerd[1463]: time="2025-04-30T03:35:31.837925759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:35:31.838595 containerd[1463]: time="2025-04-30T03:35:31.838543744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.528526569s" Apr 30 03:35:31.838652 containerd[1463]: time="2025-04-30T03:35:31.838593849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:35:31.840892 containerd[1463]: time="2025-04-30T03:35:31.840852942Z" level=info msg="CreateContainer within sandbox \"e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:35:31.865781 containerd[1463]: time="2025-04-30T03:35:31.865732201Z" level=info msg="CreateContainer within sandbox \"e98a0ca645385bfdc3cf8761b5045aa37e94ff8de2eb4bcfd965bb0436344609\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a74b2700e6947b78d741cc2198fa2cc651f3c87f3082b5e096fd3d16be1d5c75\"" Apr 30 03:35:31.866535 containerd[1463]: time="2025-04-30T03:35:31.866256828Z" level=info msg="StartContainer for \"a74b2700e6947b78d741cc2198fa2cc651f3c87f3082b5e096fd3d16be1d5c75\"" Apr 30 03:35:31.916768 systemd[1]: Started cri-containerd-a74b2700e6947b78d741cc2198fa2cc651f3c87f3082b5e096fd3d16be1d5c75.scope - libcontainer container a74b2700e6947b78d741cc2198fa2cc651f3c87f3082b5e096fd3d16be1d5c75. 
Apr 30 03:35:31.956206 containerd[1463]: time="2025-04-30T03:35:31.956138370Z" level=info msg="StartContainer for \"a74b2700e6947b78d741cc2198fa2cc651f3c87f3082b5e096fd3d16be1d5c75\" returns successfully" Apr 30 03:35:32.289757 kubelet[2493]: I0430 03:35:32.289684 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7pk55" podStartSLOduration=65.590165461 podStartE2EDuration="1m19.289661439s" podCreationTimestamp="2025-04-30 03:34:13 +0000 UTC" firstStartedPulling="2025-04-30 03:35:18.139732136 +0000 UTC m=+77.538621091" lastFinishedPulling="2025-04-30 03:35:31.839228124 +0000 UTC m=+91.238117069" observedRunningTime="2025-04-30 03:35:32.289046961 +0000 UTC m=+91.687935906" watchObservedRunningTime="2025-04-30 03:35:32.289661439 +0000 UTC m=+91.688550384" Apr 30 03:35:32.701078 kubelet[2493]: E0430 03:35:32.700942 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:32.804979 kubelet[2493]: I0430 03:35:32.804927 2493 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 03:35:32.805130 kubelet[2493]: I0430 03:35:32.804999 2493 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 03:35:35.503047 systemd[1]: Started sshd@25-10.0.0.146:22-10.0.0.1:52416.service - OpenSSH per-connection server daemon (10.0.0.1:52416). Apr 30 03:35:35.542371 sshd[6075]: Accepted publickey for core from 10.0.0.1 port 52416 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:35.544632 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:35.549090 systemd-logind[1444]: New session 26 of user core. Apr 30 03:35:35.559776 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 03:35:35.744564 sshd[6075]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:35.748725 systemd[1]: sshd@25-10.0.0.146:22-10.0.0.1:52416.service: Deactivated successfully. Apr 30 03:35:35.750980 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 03:35:35.751757 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Apr 30 03:35:35.752820 systemd-logind[1444]: Removed session 26. Apr 30 03:35:38.700574 kubelet[2493]: E0430 03:35:38.700458 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:38.700574 kubelet[2493]: E0430 03:35:38.700511 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:40.762926 systemd[1]: Started sshd@26-10.0.0.146:22-10.0.0.1:35740.service - OpenSSH per-connection server daemon (10.0.0.1:35740). Apr 30 03:35:40.796635 sshd[6103]: Accepted publickey for core from 10.0.0.1 port 35740 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:40.798405 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:40.803050 systemd-logind[1444]: New session 27 of user core. Apr 30 03:35:40.820908 systemd[1]: Started session-27.scope - Session 27 of User core. 
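The csi_plugin.go:100/113 lines above confirm that the node-driver-registrar container started earlier (a74b2700e694...) has registered the csi.tigera.io driver with kubelet's plugin watcher. Only the driver endpoint path appears in the log; the registration socket named below follows the usual node-driver-registrar convention and is an assumption:

    /var/lib/kubelet/plugins/csi.tigera.io/csi.sock              # driver endpoint (from the log)
    /var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock     # registration socket watched by kubelet (assumed)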
Apr 30 03:35:40.954232 sshd[6103]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:40.959066 systemd[1]: sshd@26-10.0.0.146:22-10.0.0.1:35740.service: Deactivated successfully. Apr 30 03:35:40.961286 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 03:35:40.962119 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. Apr 30 03:35:40.963149 systemd-logind[1444]: Removed session 27. Apr 30 03:35:44.715694 kubelet[2493]: E0430 03:35:44.715650 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:35:45.966845 systemd[1]: Started sshd@27-10.0.0.146:22-10.0.0.1:35756.service - OpenSSH per-connection server daemon (10.0.0.1:35756). Apr 30 03:35:46.012004 sshd[6140]: Accepted publickey for core from 10.0.0.1 port 35756 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:46.013975 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:46.018808 systemd-logind[1444]: New session 28 of user core. Apr 30 03:35:46.031756 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 03:35:46.204622 sshd[6140]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:46.209161 systemd[1]: sshd@27-10.0.0.146:22-10.0.0.1:35756.service: Deactivated successfully. Apr 30 03:35:46.211146 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 03:35:46.211751 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit. Apr 30 03:35:46.212694 systemd-logind[1444]: Removed session 28. Apr 30 03:35:51.215711 systemd[1]: Started sshd@28-10.0.0.146:22-10.0.0.1:37938.service - OpenSSH per-connection server daemon (10.0.0.1:37938). Apr 30 03:35:51.251886 sshd[6157]: Accepted publickey for core from 10.0.0.1 port 37938 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:51.253684 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:51.257948 systemd-logind[1444]: New session 29 of user core. Apr 30 03:35:51.262738 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 03:35:51.404348 sshd[6157]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:51.408205 systemd[1]: sshd@28-10.0.0.146:22-10.0.0.1:37938.service: Deactivated successfully. Apr 30 03:35:51.410163 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 03:35:51.410940 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit. Apr 30 03:35:51.411786 systemd-logind[1444]: Removed session 29. Apr 30 03:35:56.415853 systemd[1]: Started sshd@29-10.0.0.146:22-10.0.0.1:37954.service - OpenSSH per-connection server daemon (10.0.0.1:37954). Apr 30 03:35:56.462162 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:35:56.463938 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:35:56.468346 systemd-logind[1444]: New session 30 of user core. Apr 30 03:35:56.475756 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 03:35:56.644535 sshd[6194]: pam_unix(sshd:session): session closed for user core Apr 30 03:35:56.651148 systemd[1]: sshd@29-10.0.0.146:22-10.0.0.1:37954.service: Deactivated successfully. Apr 30 03:35:56.653606 systemd[1]: session-30.scope: Deactivated successfully. 
Apr 30 03:35:56.654457 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit. Apr 30 03:35:56.655772 systemd-logind[1444]: Removed session 30.