Apr 30 03:28:03.080186 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:28:03.080218 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:03.080235 kernel: BIOS-provided physical RAM map: Apr 30 03:28:03.080244 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 03:28:03.080253 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 03:28:03.080262 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 03:28:03.080273 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 30 03:28:03.080283 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 30 03:28:03.080292 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 03:28:03.080305 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 03:28:03.080315 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 03:28:03.080324 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 03:28:03.080339 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 30 03:28:03.080349 kernel: NX (Execute Disable) protection: active Apr 30 03:28:03.080360 kernel: APIC: Static calls initialized Apr 30 03:28:03.080378 kernel: SMBIOS 2.8 present. 
Apr 30 03:28:03.080389 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 30 03:28:03.080398 kernel: Hypervisor detected: KVM Apr 30 03:28:03.080408 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 03:28:03.080418 kernel: kvm-clock: using sched offset of 3139767485 cycles Apr 30 03:28:03.080429 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 03:28:03.080439 kernel: tsc: Detected 2794.748 MHz processor Apr 30 03:28:03.080450 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:28:03.080461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:28:03.080476 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Apr 30 03:28:03.080487 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 03:28:03.080497 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:28:03.080508 kernel: Using GB pages for direct mapping Apr 30 03:28:03.080519 kernel: ACPI: Early table checksum verification disabled Apr 30 03:28:03.080529 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 30 03:28:03.080540 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080550 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080561 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080576 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 30 03:28:03.080588 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080600 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080613 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080623 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 03:28:03.080634 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Apr 30 03:28:03.080645 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Apr 30 03:28:03.080661 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 30 03:28:03.080675 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Apr 30 03:28:03.080686 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Apr 30 03:28:03.080697 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Apr 30 03:28:03.080718 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Apr 30 03:28:03.080732 kernel: No NUMA configuration found Apr 30 03:28:03.080743 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 30 03:28:03.080759 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 30 03:28:03.080800 kernel: Zone ranges: Apr 30 03:28:03.080831 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:28:03.080843 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 30 03:28:03.080854 kernel: Normal empty Apr 30 03:28:03.080865 kernel: Movable zone start for each node Apr 30 03:28:03.080876 kernel: Early memory node ranges Apr 30 03:28:03.080887 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 03:28:03.080917 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 30 03:28:03.080928 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Apr 30 03:28:03.080950 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:28:03.080965 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 03:28:03.080976 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 30 03:28:03.080987 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 03:28:03.080998 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 03:28:03.081009 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 03:28:03.081020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 03:28:03.081031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 03:28:03.081042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:28:03.081059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 03:28:03.081070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 03:28:03.081081 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:28:03.081092 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 03:28:03.081104 kernel: TSC deadline timer available Apr 30 03:28:03.081115 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 30 03:28:03.081126 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 03:28:03.081137 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 30 03:28:03.081151 kernel: kvm-guest: setup PV sched yield Apr 30 03:28:03.081166 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 03:28:03.081177 kernel: Booting paravirtualized kernel on KVM Apr 30 03:28:03.081189 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:28:03.081199 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 30 03:28:03.081211 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Apr 30 03:28:03.081222 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Apr 30 03:28:03.081233 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 30 03:28:03.081244 kernel: kvm-guest: PV spinlocks enabled Apr 30 03:28:03.081255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:28:03.081271 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:03.081283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:28:03.081294 kernel: random: crng init done Apr 30 03:28:03.081305 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 03:28:03.081316 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:28:03.081327 kernel: Fallback order for Node 0: 0 Apr 30 03:28:03.081338 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Apr 30 03:28:03.081349 kernel: Policy zone: DMA32 Apr 30 03:28:03.081364 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:28:03.081376 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 136900K reserved, 0K cma-reserved) Apr 30 03:28:03.081387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 30 03:28:03.081397 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:28:03.081409 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:28:03.081419 kernel: Dynamic Preempt: voluntary Apr 30 03:28:03.081430 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:28:03.081442 kernel: rcu: RCU event tracing is enabled. Apr 30 03:28:03.081453 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 30 03:28:03.081468 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:28:03.081479 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:28:03.081491 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:28:03.081501 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:28:03.081516 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 30 03:28:03.081527 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 30 03:28:03.081538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 03:28:03.081549 kernel: Console: colour VGA+ 80x25 Apr 30 03:28:03.081560 kernel: printk: console [ttyS0] enabled Apr 30 03:28:03.081575 kernel: ACPI: Core revision 20230628 Apr 30 03:28:03.081586 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 30 03:28:03.081598 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:28:03.081608 kernel: x2apic enabled Apr 30 03:28:03.081619 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 03:28:03.081630 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 30 03:28:03.081642 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 30 03:28:03.081653 kernel: kvm-guest: setup PV IPIs Apr 30 03:28:03.081679 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 03:28:03.081691 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 30 03:28:03.081743 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Apr 30 03:28:03.081756 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 03:28:03.081772 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 30 03:28:03.081784 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 30 03:28:03.081795 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:28:03.081812 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:28:03.081824 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:28:03.081840 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:28:03.081852 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Apr 30 03:28:03.081868 kernel: RETBleed: Mitigation: untrained return thunk Apr 30 03:28:03.081880 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 03:28:03.081907 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 03:28:03.081918 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 30 03:28:03.081929 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Apr 30 03:28:03.081940 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 30 03:28:03.081956 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:28:03.081966 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:28:03.081977 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:28:03.081986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:28:03.081996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Apr 30 03:28:03.082006 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:28:03.082017 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:28:03.082027 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:28:03.082037 kernel: landlock: Up and running. Apr 30 03:28:03.082052 kernel: SELinux: Initializing. Apr 30 03:28:03.082063 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 03:28:03.082074 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 03:28:03.082084 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Apr 30 03:28:03.082095 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 03:28:03.082106 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 03:28:03.082116 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 30 03:28:03.082127 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 30 03:28:03.082142 kernel: ... version: 0 Apr 30 03:28:03.082157 kernel: ... bit width: 48 Apr 30 03:28:03.082168 kernel: ... generic registers: 6 Apr 30 03:28:03.082177 kernel: ... value mask: 0000ffffffffffff Apr 30 03:28:03.082187 kernel: ... max period: 00007fffffffffff Apr 30 03:28:03.082197 kernel: ... fixed-purpose events: 0 Apr 30 03:28:03.082207 kernel: ... 
event mask: 000000000000003f Apr 30 03:28:03.082217 kernel: signal: max sigframe size: 1776 Apr 30 03:28:03.082228 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:28:03.082239 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:28:03.082254 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:28:03.082265 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:28:03.082275 kernel: .... node #0, CPUs: #1 #2 #3 Apr 30 03:28:03.082285 kernel: smp: Brought up 1 node, 4 CPUs Apr 30 03:28:03.082295 kernel: smpboot: Max logical packages: 1 Apr 30 03:28:03.082305 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Apr 30 03:28:03.082315 kernel: devtmpfs: initialized Apr 30 03:28:03.082325 kernel: x86/mm: Memory block size: 128MB Apr 30 03:28:03.082336 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:28:03.082351 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 30 03:28:03.082361 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:28:03.082371 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:28:03.082382 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:28:03.082394 kernel: audit: type=2000 audit(1745983681.734:1): state=initialized audit_enabled=0 res=1 Apr 30 03:28:03.082405 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:28:03.082415 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:28:03.082427 kernel: cpuidle: using governor menu Apr 30 03:28:03.082439 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:28:03.082456 kernel: dca service started, version 1.12.1 Apr 30 03:28:03.082467 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 03:28:03.082479 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 30 03:28:03.082490 kernel: PCI: Using configuration type 1 for base access Apr 30 03:28:03.082501 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 03:28:03.082512 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:28:03.082522 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:28:03.082531 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:28:03.082541 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:28:03.082555 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:28:03.082566 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:28:03.082577 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:28:03.082589 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:28:03.082601 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 03:28:03.082612 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:28:03.082623 kernel: ACPI: Interpreter enabled Apr 30 03:28:03.082635 kernel: ACPI: PM: (supports S0 S3 S5) Apr 30 03:28:03.082646 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:28:03.082661 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:28:03.082672 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 03:28:03.082683 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 03:28:03.082693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 03:28:03.083019 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:28:03.083207 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 30 03:28:03.083378 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 30 03:28:03.083395 kernel: PCI host bridge to bus 0000:00 Apr 30 03:28:03.083589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 03:28:03.083760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 03:28:03.083935 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:28:03.084094 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 30 03:28:03.084250 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 03:28:03.084405 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 30 03:28:03.086029 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 03:28:03.086253 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 03:28:03.086448 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 30 03:28:03.086621 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 30 03:28:03.086803 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 30 03:28:03.086996 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 30 03:28:03.087166 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 03:28:03.087370 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 30 03:28:03.087543 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 30 03:28:03.087729 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 30 03:28:03.087929 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 30 03:28:03.088151 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 30 03:28:03.089852 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 30 03:28:03.090051 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 30 
03:28:03.090232 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 30 03:28:03.090446 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 30 03:28:03.090620 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 30 03:28:03.090814 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 30 03:28:03.091129 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 30 03:28:03.091302 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 30 03:28:03.091493 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 03:28:03.091668 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 03:28:03.091868 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 03:28:03.092061 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 30 03:28:03.092230 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 30 03:28:03.092417 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 03:28:03.092593 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 03:28:03.092618 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 03:28:03.092630 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 03:28:03.092642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 03:28:03.092654 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 03:28:03.092665 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 03:28:03.092677 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 03:28:03.092689 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 03:28:03.092714 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 03:28:03.092726 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 03:28:03.092742 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 03:28:03.092754 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 03:28:03.092766 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 03:28:03.092777 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 03:28:03.092789 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 03:28:03.092801 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 03:28:03.092813 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 03:28:03.092825 kernel: iommu: Default domain type: Translated Apr 30 03:28:03.092837 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:28:03.092852 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:28:03.092864 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:28:03.092876 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 03:28:03.092888 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 30 03:28:03.093142 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 30 03:28:03.093309 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 03:28:03.093474 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 03:28:03.093490 kernel: vgaarb: loaded Apr 30 03:28:03.093508 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 30 03:28:03.093519 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 30 03:28:03.093530 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 03:28:03.093541 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 
03:28:03.093552 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:28:03.093563 kernel: pnp: PnP ACPI init Apr 30 03:28:03.093777 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 03:28:03.093797 kernel: pnp: PnP ACPI: found 6 devices Apr 30 03:28:03.093816 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:28:03.093828 kernel: NET: Registered PF_INET protocol family Apr 30 03:28:03.093840 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 03:28:03.093852 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 03:28:03.093864 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:28:03.093876 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 03:28:03.093888 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 03:28:03.093917 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 03:28:03.093929 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 03:28:03.093945 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 03:28:03.093958 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:28:03.093970 kernel: NET: Registered PF_XDP protocol family Apr 30 03:28:03.094134 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:28:03.094291 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:28:03.094445 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:28:03.094597 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 30 03:28:03.094763 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 30 03:28:03.094956 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 30 03:28:03.096502 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:28:03.096514 kernel: Initialise system trusted keyrings Apr 30 03:28:03.096526 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 03:28:03.096538 kernel: Key type asymmetric registered Apr 30 03:28:03.096550 kernel: Asymmetric key parser 'x509' registered Apr 30 03:28:03.096562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:28:03.096573 kernel: io scheduler mq-deadline registered Apr 30 03:28:03.096585 kernel: io scheduler kyber registered Apr 30 03:28:03.096597 kernel: io scheduler bfq registered Apr 30 03:28:03.096613 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:28:03.096628 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 30 03:28:03.096642 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 30 03:28:03.096653 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 30 03:28:03.096665 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:28:03.096677 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:28:03.096689 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 03:28:03.096710 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 03:28:03.096722 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 03:28:03.096983 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 30 03:28:03.097003 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 03:28:03.097160 kernel: 
rtc_cmos 00:04: registered as rtc0 Apr 30 03:28:03.097317 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T03:28:02 UTC (1745983682) Apr 30 03:28:03.097472 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 30 03:28:03.097489 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 03:28:03.097501 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:28:03.097519 kernel: Segment Routing with IPv6 Apr 30 03:28:03.097531 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:28:03.097543 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:28:03.097555 kernel: Key type dns_resolver registered Apr 30 03:28:03.097567 kernel: IPI shorthand broadcast: enabled Apr 30 03:28:03.097579 kernel: sched_clock: Marking stable (938002899, 114940810)->(1076798726, -23855017) Apr 30 03:28:03.097591 kernel: registered taskstats version 1 Apr 30 03:28:03.097602 kernel: Loading compiled-in X.509 certificates Apr 30 03:28:03.097613 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:28:03.097625 kernel: Key type .fscrypt registered Apr 30 03:28:03.097641 kernel: Key type fscrypt-provisioning registered Apr 30 03:28:03.097653 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:28:03.097665 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:28:03.097677 kernel: ima: No architecture policies found Apr 30 03:28:03.097689 kernel: clk: Disabling unused clocks Apr 30 03:28:03.097711 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:28:03.097722 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:28:03.097734 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:28:03.097750 kernel: Run /init as init process Apr 30 03:28:03.097761 kernel: with arguments: Apr 30 03:28:03.097773 kernel: /init Apr 30 03:28:03.097785 kernel: with environment: Apr 30 03:28:03.097797 kernel: HOME=/ Apr 30 03:28:03.097808 kernel: TERM=linux Apr 30 03:28:03.097820 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:28:03.097834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:03.097854 systemd[1]: Detected virtualization kvm. Apr 30 03:28:03.097867 systemd[1]: Detected architecture x86-64. Apr 30 03:28:03.097879 systemd[1]: Running in initrd. Apr 30 03:28:03.097892 systemd[1]: No hostname configured, using default hostname. Apr 30 03:28:03.097921 systemd[1]: Hostname set to . Apr 30 03:28:03.097933 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:28:03.097945 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:28:03.097957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:03.097974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:03.097988 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:28:03.098016 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 30 03:28:03.098033 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:28:03.098046 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:28:03.098065 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:28:03.098079 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:28:03.098092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:03.098106 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:03.098119 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:03.098132 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:03.098145 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:03.098157 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:28:03.098174 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:03.098187 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:03.098201 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:28:03.098214 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:28:03.098226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:03.098240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:03.098252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:03.098265 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:03.098278 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:28:03.098295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:03.098308 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:28:03.098321 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:28:03.098334 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:03.098347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:03.098359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:03.098372 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:03.098385 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:03.098402 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:28:03.098446 systemd-journald[193]: Collecting audit messages is disabled. Apr 30 03:28:03.098484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:28:03.098501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:03.098514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:03.098528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:28:03.098545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 30 03:28:03.098559 systemd-journald[193]: Journal started Apr 30 03:28:03.098586 systemd-journald[193]: Runtime Journal (/run/log/journal/262de10664a7420ea296e197a17ab692) is 6.0M, max 48.4M, 42.3M free. Apr 30 03:28:03.079518 systemd-modules-load[194]: Inserted module 'overlay' Apr 30 03:28:03.101840 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:03.108946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:28:03.110468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:03.114156 kernel: Bridge firewalling registered Apr 30 03:28:03.114138 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 30 03:28:03.116288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:03.129141 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:28:03.132156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:03.135663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:03.137348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:03.148635 dracut-cmdline[220]: dracut-dracut-053 Apr 30 03:28:03.152297 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:28:03.152361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:03.161048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:03.168212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:03.202242 systemd-resolved[249]: Positive Trust Anchors: Apr 30 03:28:03.202263 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:03.202295 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:03.205140 systemd-resolved[249]: Defaulting to hostname 'linux'. Apr 30 03:28:03.206739 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:28:03.217308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:03.274975 kernel: SCSI subsystem initialized Apr 30 03:28:03.283935 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 03:28:03.295924 kernel: iscsi: registered transport (tcp) Apr 30 03:28:03.318933 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:28:03.318991 kernel: QLogic iSCSI HBA Driver Apr 30 03:28:03.372825 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:03.397053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:28:03.422925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:28:03.422999 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:28:03.423016 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:28:03.467935 kernel: raid6: avx2x4 gen() 29972 MB/s Apr 30 03:28:03.484919 kernel: raid6: avx2x2 gen() 30730 MB/s Apr 30 03:28:03.502041 kernel: raid6: avx2x1 gen() 25511 MB/s Apr 30 03:28:03.502060 kernel: raid6: using algorithm avx2x2 gen() 30730 MB/s Apr 30 03:28:03.520057 kernel: raid6: .... xor() 19831 MB/s, rmw enabled Apr 30 03:28:03.520083 kernel: raid6: using avx2x2 recovery algorithm Apr 30 03:28:03.540937 kernel: xor: automatically using best checksumming function avx Apr 30 03:28:03.716934 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:28:03.733034 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:28:03.807164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:03.819045 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 30 03:28:03.823731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:03.857858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:28:03.879347 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Apr 30 03:28:03.922557 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:03.933196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:04.010007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:04.049496 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 30 03:28:04.057481 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 03:28:04.057636 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:28:04.057647 kernel: GPT:9289727 != 19775487 Apr 30 03:28:04.057657 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:28:04.057667 kernel: GPT:9289727 != 19775487 Apr 30 03:28:04.057677 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:28:04.057702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:04.060371 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:28:04.063922 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:28:04.082915 kernel: libata version 3.00 loaded. Apr 30 03:28:04.084830 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:04.102838 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:04.109214 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:28:04.109232 kernel: AES CTR mode by8 optimization enabled Apr 30 03:28:04.105573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 03:28:04.111284 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 03:28:04.145173 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 03:28:04.145190 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 03:28:04.145350 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (465) Apr 30 03:28:04.145362 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 03:28:04.145507 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) Apr 30 03:28:04.145519 kernel: scsi host0: ahci Apr 30 03:28:04.145699 kernel: scsi host1: ahci Apr 30 03:28:04.145862 kernel: scsi host2: ahci Apr 30 03:28:04.147601 kernel: scsi host3: ahci Apr 30 03:28:04.147871 kernel: scsi host4: ahci Apr 30 03:28:04.148048 kernel: scsi host5: ahci Apr 30 03:28:04.148222 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 30 03:28:04.148234 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 30 03:28:04.148245 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 30 03:28:04.148260 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 30 03:28:04.148271 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 30 03:28:04.148281 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 30 03:28:04.109076 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:04.114217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:04.114289 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:04.120194 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:28:04.128649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:28:04.129871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:04.129975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:04.136923 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:04.140301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:04.146286 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:04.156526 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 03:28:04.170814 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 03:28:04.199862 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 03:28:04.201179 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 03:28:04.203968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:04.212312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:28:04.223044 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:28:04.225020 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 03:28:04.248285 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:04.459307 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 03:28:04.459404 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 03:28:04.459419 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 03:28:04.460936 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 03:28:04.461052 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 03:28:04.461930 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 03:28:04.463219 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 03:28:04.463243 kernel: ata3.00: applying bridge limits Apr 30 03:28:04.463937 kernel: ata3.00: configured for UDMA/100 Apr 30 03:28:04.464925 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 03:28:04.489498 disk-uuid[556]: Primary Header is updated. Apr 30 03:28:04.489498 disk-uuid[556]: Secondary Entries is updated. Apr 30 03:28:04.489498 disk-uuid[556]: Secondary Header is updated. Apr 30 03:28:04.493919 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:04.498917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:04.513997 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 03:28:04.535461 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 03:28:04.535479 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 30 03:28:05.518930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 03:28:05.519323 disk-uuid[577]: The operation has completed successfully. Apr 30 03:28:05.551445 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:28:05.551579 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:28:05.585073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:28:05.595966 sh[593]: Success Apr 30 03:28:05.608919 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 03:28:05.641726 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:28:05.652495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:28:05.657369 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:28:05.674944 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:28:05.674977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:05.674990 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:28:05.676020 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:28:05.677400 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:28:05.682691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:28:05.701175 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:28:05.714058 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:28:05.728792 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 30 03:28:05.735538 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:05.735572 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:05.735584 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:28:05.738927 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:28:05.749714 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:28:05.766964 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:05.847510 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:05.872059 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:05.898503 systemd-networkd[771]: lo: Link UP Apr 30 03:28:05.898517 systemd-networkd[771]: lo: Gained carrier Apr 30 03:28:05.900405 systemd-networkd[771]: Enumeration completed Apr 30 03:28:05.900819 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:05.900824 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:05.905097 systemd-networkd[771]: eth0: Link UP Apr 30 03:28:05.905101 systemd-networkd[771]: eth0: Gained carrier Apr 30 03:28:05.905108 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:05.905238 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:05.916355 systemd[1]: Reached target network.target - Network. Apr 30 03:28:05.927943 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:28:05.986128 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:28:06.012075 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:28:06.224601 ignition[776]: Ignition 2.19.0 Apr 30 03:28:06.224614 ignition[776]: Stage: fetch-offline Apr 30 03:28:06.224689 ignition[776]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:06.224702 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:28:06.224850 ignition[776]: parsed url from cmdline: "" Apr 30 03:28:06.224854 ignition[776]: no config URL provided Apr 30 03:28:06.224860 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:28:06.224872 ignition[776]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:28:06.224921 ignition[776]: op(1): [started] loading QEMU firmware config module Apr 30 03:28:06.224926 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 03:28:06.234268 ignition[776]: op(1): [finished] loading QEMU firmware config module Apr 30 03:28:06.273422 ignition[776]: parsing config with SHA512: cee1a0f79737a855ca24a019d5365901d90bda1c6f791d3db22b5e0d6da8092bc1e355c72f5fcae1a9df69960b086e4c48ed165acd0df12ff47ead54b78391dc Apr 30 03:28:06.412657 unknown[776]: fetched base config from "system" Apr 30 03:28:06.412685 unknown[776]: fetched user config from "qemu" Apr 30 03:28:06.420748 ignition[776]: fetch-offline: fetch-offline passed Apr 30 03:28:06.421797 ignition[776]: Ignition finished successfully Apr 30 03:28:06.424736 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 30 03:28:06.427521 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 03:28:06.439047 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:28:06.461765 ignition[785]: Ignition 2.19.0 Apr 30 03:28:06.461777 ignition[785]: Stage: kargs Apr 30 03:28:06.462065 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:06.462082 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:28:06.466241 ignition[785]: kargs: kargs passed Apr 30 03:28:06.466294 ignition[785]: Ignition finished successfully Apr 30 03:28:06.470876 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:28:06.483090 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:28:06.543176 ignition[792]: Ignition 2.19.0 Apr 30 03:28:06.543189 ignition[792]: Stage: disks Apr 30 03:28:06.543396 ignition[792]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:06.543412 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:28:06.548453 ignition[792]: disks: disks passed Apr 30 03:28:06.548518 ignition[792]: Ignition finished successfully Apr 30 03:28:06.552129 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:28:06.553504 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:06.555506 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:28:06.557827 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:06.558269 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:06.558655 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:06.574065 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:28:06.589128 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:28:06.774608 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:28:06.793010 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:28:06.894926 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:28:06.895404 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:28:06.896171 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:06.908983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:06.911002 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:28:06.913536 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:28:06.918595 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Apr 30 03:28:06.913579 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 30 03:28:06.925541 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:06.925564 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:06.925575 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:28:06.925586 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:28:06.913602 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:06.922704 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:28:06.926721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:28:06.929807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:28:06.973477 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:28:06.979718 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:28:06.984945 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:28:06.990460 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:28:07.157397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:07.171038 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:28:07.172933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:28:07.179870 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:28:07.181138 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:07.285229 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:28:07.287988 ignition[925]: INFO : Ignition 2.19.0 Apr 30 03:28:07.287988 ignition[925]: INFO : Stage: mount Apr 30 03:28:07.287988 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:07.290622 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:28:07.292366 ignition[925]: INFO : mount: mount passed Apr 30 03:28:07.293172 ignition[925]: INFO : Ignition finished successfully Apr 30 03:28:07.296207 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:28:07.307998 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:28:07.620286 systemd-networkd[771]: eth0: Gained IPv6LL Apr 30 03:28:07.909124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:28:07.917579 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Apr 30 03:28:07.917633 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:28:07.917659 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:28:07.919172 kernel: BTRFS info (device vda6): using free space tree Apr 30 03:28:07.921931 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 03:28:07.923846 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:28:07.976099 ignition[956]: INFO : Ignition 2.19.0 Apr 30 03:28:07.976099 ignition[956]: INFO : Stage: files Apr 30 03:28:07.978006 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:07.978006 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:28:07.978006 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:28:07.981914 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:28:07.981914 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:28:07.981914 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:28:07.981914 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:28:07.987872 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:28:07.987872 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:07.987872 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 03:28:07.982135 unknown[956]: wrote ssh authorized keys file for user: core Apr 30 03:28:08.039867 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 03:28:08.293242 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:28:08.293242 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:08.297719 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:28:08.310530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:08.310530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:28:08.310530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:08.310530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:08.310530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:08.310530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:28:08.669252 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 03:28:09.065084 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:28:09.065084 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 30 03:28:09.069432 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 30 03:28:09.095351 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 03:28:09.101546 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 03:28:09.103301 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 30 03:28:09.103301 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:09.103301 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:28:09.103301 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:09.103301 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:28:09.103301 ignition[956]: INFO : files: files passed Apr 30 03:28:09.103301 ignition[956]: INFO : Ignition finished successfully Apr 30 03:28:09.114924 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:28:09.128043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:28:09.129022 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:28:09.136801 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 30 03:28:09.136978 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:28:09.142920 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Apr 30 03:28:09.147983 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:09.149786 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:09.151369 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:28:09.154122 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:09.157284 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:28:09.168091 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:28:09.199403 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:28:09.199556 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:28:09.202627 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 03:28:09.203254 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:28:09.203686 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:28:09.204646 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:28:09.226608 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:09.238226 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:28:09.250258 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:09.250461 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:09.253963 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:28:09.255094 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:28:09.255236 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:28:09.258938 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:28:09.260127 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:28:09.260438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:28:09.260781 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:28:09.261296 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:28:09.261645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:28:09.262157 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:28:09.262502 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:28:09.262848 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:28:09.263351 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:28:09.263682 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:28:09.263810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:28:09.279355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 03:28:09.283834 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:09.283948 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:28:09.284290 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:09.287472 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:28:09.287608 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:28:09.291974 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:28:09.292121 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:28:09.293180 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:28:09.296281 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:28:09.296450 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:09.297575 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:28:09.297946 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:28:09.298299 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:28:09.298406 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:28:09.304447 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:28:09.304576 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:28:09.306530 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:28:09.306664 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:28:09.308378 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:28:09.308484 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:28:09.341068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:28:09.342218 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 03:28:09.342344 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:09.345762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:28:09.347112 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:28:09.347235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:09.350404 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 03:28:09.350749 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:28:09.363352 ignition[1012]: INFO : Ignition 2.19.0 Apr 30 03:28:09.363352 ignition[1012]: INFO : Stage: umount Apr 30 03:28:09.363352 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:28:09.363352 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 03:28:09.363352 ignition[1012]: INFO : umount: umount passed Apr 30 03:28:09.363352 ignition[1012]: INFO : Ignition finished successfully Apr 30 03:28:09.361615 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:28:09.361742 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:28:09.370299 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:28:09.370479 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:28:09.372459 systemd[1]: Stopped target network.target - Network. 
Apr 30 03:28:09.374189 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:28:09.374259 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:28:09.376654 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:28:09.376719 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:28:09.378735 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:28:09.378785 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:28:09.380761 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:28:09.380814 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:28:09.382875 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:28:09.384920 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:28:09.387851 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:28:09.391945 systemd-networkd[771]: eth0: DHCPv6 lease lost Apr 30 03:28:09.394346 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:28:09.394505 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:28:09.396878 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:28:09.396948 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:09.407024 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:28:09.407107 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:28:09.407163 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:28:09.407596 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:09.408061 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:28:09.408184 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:28:09.413333 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:28:09.413387 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:09.415622 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:28:09.415675 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:09.418406 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:28:09.418496 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:09.422694 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:28:09.422847 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:28:09.427865 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:28:09.428076 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:09.429737 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:28:09.429789 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:09.431958 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:28:09.432003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:09.434222 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:28:09.434272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 30 03:28:09.436786 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:28:09.436836 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:28:09.438948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:28:09.438999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:28:09.456075 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:28:09.457413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:28:09.457490 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:09.459852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:28:09.459923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:09.464531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:28:09.464670 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:28:10.060644 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:28:10.060808 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 03:28:10.063132 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:28:10.064881 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:28:10.064977 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:28:10.072079 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:28:10.080538 systemd[1]: Switching root. Apr 30 03:28:10.111049 systemd-journald[193]: Journal stopped Apr 30 03:28:11.906398 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Apr 30 03:28:11.906488 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:28:11.906527 kernel: SELinux: policy capability open_perms=1 Apr 30 03:28:11.906546 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:28:11.906573 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:28:11.906595 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:28:11.906611 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:28:11.906625 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:28:11.906639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:28:11.906653 kernel: audit: type=1403 audit(1745983691.078:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:28:11.906668 systemd[1]: Successfully loaded SELinux policy in 39.433ms. Apr 30 03:28:11.906691 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.903ms. Apr 30 03:28:11.906711 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:28:11.906726 systemd[1]: Detected virtualization kvm. Apr 30 03:28:11.906741 systemd[1]: Detected architecture x86-64. Apr 30 03:28:11.906756 systemd[1]: Detected first boot. Apr 30 03:28:11.906771 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:28:11.906786 zram_generator::config[1057]: No configuration found. Apr 30 03:28:11.906804 systemd[1]: Populated /etc with preset unit settings. 
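The journal prefixes make it straightforward to measure how long each boot phase took, for example the gap between the last journald entry written in the initramfs ("Journal stopped" at 03:28:10.111049) and the next journald entry recorded after the switch root (03:28:11.906398). A small sketch, not part of the log, that parses these prefixes and prints the delta; the year is an assumption, since the syslog-style prefix omits it:

```python
from datetime import datetime

# The journal prefix ("Apr 30 03:28:10.111049") omits the year; 2025 is assumed
# from the build date elsewhere in this log.
FMT = "%Y %b %d %H:%M:%S.%f"

def ts(prefix: str, year: int = 2025) -> datetime:
    return datetime.strptime(f"{year} {prefix}", FMT)

stopped   = ts("Apr 30 03:28:10.111049")  # "systemd-journald[193]: Journal stopped"
resumed   = ts("Apr 30 03:28:11.906398")  # next journald entry recorded after switch root

print(f"journal gap across switch-root: {(resumed - stopped).total_seconds():.3f} s")
```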
Apr 30 03:28:11.906821 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 03:28:11.906837 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 03:28:11.906858 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:11.906877 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:28:11.906911 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:28:11.906940 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:28:11.906956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:28:11.906973 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 03:28:11.906990 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:28:11.907014 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:28:11.907033 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:28:11.907047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:28:11.907063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:28:11.907084 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:28:11.907098 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:28:11.907113 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:28:11.907129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:28:11.907145 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:28:11.907162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:28:11.907182 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 03:28:11.907198 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 03:28:11.907214 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 03:28:11.907231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:28:11.907247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:28:11.907263 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:28:11.907280 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:28:11.907304 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:28:11.907327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:28:11.907344 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:28:11.907361 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:28:11.907379 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:28:11.907396 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:28:11.907412 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:28:11.907428 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Apr 30 03:28:11.907450 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:28:11.907467 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:28:11.907488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:11.907514 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:28:11.907532 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:28:11.907548 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:28:11.907566 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:28:11.907582 systemd[1]: Reached target machines.target - Containers. Apr 30 03:28:11.907599 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:28:11.907616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:11.907638 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:28:11.907655 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:28:11.907672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:11.907691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:11.907707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:11.907723 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:28:11.907739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:11.907756 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:28:11.907773 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 03:28:11.907794 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 03:28:11.907811 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 03:28:11.907828 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 03:28:11.907843 kernel: fuse: init (API version 7.39) Apr 30 03:28:11.907860 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:28:11.907876 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:28:11.907922 kernel: loop: module loaded Apr 30 03:28:11.907942 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:28:11.907959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:28:11.908005 systemd-journald[1127]: Collecting audit messages is disabled. Apr 30 03:28:11.908035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:28:11.908052 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 03:28:11.908068 systemd-journald[1127]: Journal started Apr 30 03:28:11.908096 systemd-journald[1127]: Runtime Journal (/run/log/journal/262de10664a7420ea296e197a17ab692) is 6.0M, max 48.4M, 42.3M free. Apr 30 03:28:11.657347 systemd[1]: Queued start job for default target multi-user.target. 
Apr 30 03:28:11.672323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 03:28:11.672886 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 03:28:11.909337 systemd[1]: Stopped verity-setup.service. Apr 30 03:28:11.918589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:11.918733 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:28:11.921025 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:28:11.922663 kernel: ACPI: bus type drm_connector registered Apr 30 03:28:11.922591 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:28:11.924019 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:28:11.925201 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:28:11.926748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:28:11.928200 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:28:11.929627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:28:11.931414 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:28:11.931664 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:28:11.933298 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:28:11.934822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:11.935034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:11.936526 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:11.936701 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:11.938307 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:11.938484 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:11.940076 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:28:11.940251 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 03:28:11.941682 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:11.941865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:11.943303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:28:11.944769 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:28:11.946347 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:28:11.960523 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:28:11.975022 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:28:11.977717 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:28:11.979281 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:28:11.979328 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:28:11.982173 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:28:11.985250 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Apr 30 03:28:11.988056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:28:11.989670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:11.991918 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:28:11.999147 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:28:12.000705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:12.002536 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:28:12.004143 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:12.007479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:28:12.021860 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:28:12.023654 systemd-journald[1127]: Time spent on flushing to /var/log/journal/262de10664a7420ea296e197a17ab692 is 15.962ms for 949 entries. Apr 30 03:28:12.023654 systemd-journald[1127]: System Journal (/var/log/journal/262de10664a7420ea296e197a17ab692) is 8.0M, max 195.6M, 187.6M free. Apr 30 03:28:12.061162 systemd-journald[1127]: Received client request to flush runtime journal. Apr 30 03:28:12.061204 kernel: loop0: detected capacity change from 0 to 140768 Apr 30 03:28:12.030102 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:28:12.033260 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:28:12.034773 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:28:12.036644 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:28:12.038335 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:28:12.047069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:28:12.048833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:28:12.051161 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:28:12.061863 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:28:12.071241 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 03:28:12.073330 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:28:12.074674 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:28:12.082563 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:28:12.095182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:28:12.098697 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:28:12.112608 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:28:12.113639 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 30 03:28:12.115959 kernel: loop1: detected capacity change from 0 to 142488 Apr 30 03:28:12.128300 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Apr 30 03:28:12.128324 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Apr 30 03:28:12.135711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:28:12.148952 kernel: loop2: detected capacity change from 0 to 210664 Apr 30 03:28:12.182218 kernel: loop3: detected capacity change from 0 to 140768 Apr 30 03:28:12.196920 kernel: loop4: detected capacity change from 0 to 142488 Apr 30 03:28:12.207922 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 03:28:12.215573 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 03:28:12.216620 (sd-merge)[1196]: Merged extensions into '/usr'. Apr 30 03:28:12.221431 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:28:12.221460 systemd[1]: Reloading... Apr 30 03:28:12.274933 zram_generator::config[1222]: No configuration found. Apr 30 03:28:12.358583 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 03:28:12.409114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:12.459709 systemd[1]: Reloading finished in 237 ms. Apr 30 03:28:12.494112 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:28:12.495970 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:28:12.515206 systemd[1]: Starting ensure-sysext.service... Apr 30 03:28:12.520084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:28:12.523966 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:28:12.523982 systemd[1]: Reloading... Apr 30 03:28:12.544457 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:28:12.544842 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:28:12.545913 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:28:12.546233 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 30 03:28:12.546322 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Apr 30 03:28:12.549811 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:12.549825 systemd-tmpfiles[1260]: Skipping /boot Apr 30 03:28:12.564592 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:28:12.565535 systemd-tmpfiles[1260]: Skipping /boot Apr 30 03:28:12.582930 zram_generator::config[1293]: No configuration found. Apr 30 03:28:12.687810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:12.737856 systemd[1]: Reloading finished in 213 ms. Apr 30 03:28:12.758565 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
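The (sd-merge) lines above show systemd-sysext finding the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images and merging them into /usr, after which systemd reloads its units. As a rough illustration only, a sketch that lists the extension symlinks a host like this would carry under /etc/extensions, the directory the earlier Ignition files stage populated; the paths come from the log, while the helper itself is hypothetical:

```python
import os

# Illustrative only: enumerate sysext images linked under /etc/extensions,
# e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
# as written by the Ignition "files" stage earlier in this log.
EXT_DIR = "/etc/extensions"

def list_extensions(ext_dir: str = EXT_DIR) -> None:
    if not os.path.isdir(ext_dir):
        print(f"{ext_dir} does not exist on this machine")
        return
    for name in sorted(os.listdir(ext_dir)):
        path = os.path.join(ext_dir, name)
        target = os.readlink(path) if os.path.islink(path) else "(regular file)"
        print(f"{name} -> {target}")

if __name__ == "__main__":
    list_extensions()
```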
Apr 30 03:28:12.772683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:28:12.782615 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:12.785267 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:28:12.787853 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:28:12.792038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:28:12.797784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:28:12.803188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:28:12.807314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.807500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:12.810996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:12.813947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:12.820090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:12.822248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:28:12.825664 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:28:12.826861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.829030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:12.829234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:12.831531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:28:12.834512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:12.834797 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:12.835477 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Apr 30 03:28:12.839460 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:12.839678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:12.850099 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:28:12.855238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.855456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:28:12.861255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:28:12.863806 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:28:12.867166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:28:12.868134 augenrules[1357]: No rules Apr 30 03:28:12.874444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:28:12.876259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 30 03:28:12.878069 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:28:12.879638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:28:12.880754 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:28:12.885354 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:28:12.893194 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:12.895179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:28:12.895670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:28:12.898337 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:28:12.898564 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:28:12.902147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:28:12.902339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:28:12.904392 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:28:12.904590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:28:12.919949 systemd[1]: Finished ensure-sysext.service. Apr 30 03:28:12.921578 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:28:12.923553 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:28:12.943915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1392) Apr 30 03:28:12.948072 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:28:12.949271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:28:12.949344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:28:12.952203 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 03:28:12.953378 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:28:12.958771 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 03:28:12.982262 systemd-resolved[1330]: Positive Trust Anchors: Apr 30 03:28:12.982283 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:28:12.982317 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:28:12.986291 systemd-resolved[1330]: Defaulting to hostname 'linux'. Apr 30 03:28:12.988170 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Apr 30 03:28:12.989737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:28:13.008311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 03:28:13.017141 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:28:13.020025 systemd-networkd[1398]: lo: Link UP Apr 30 03:28:13.020033 systemd-networkd[1398]: lo: Gained carrier Apr 30 03:28:13.021767 systemd-networkd[1398]: Enumeration completed Apr 30 03:28:13.021854 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:28:13.023915 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:13.023920 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:28:13.024185 systemd[1]: Reached target network.target - Network. Apr 30 03:28:13.026457 systemd-networkd[1398]: eth0: Link UP Apr 30 03:28:13.026470 systemd-networkd[1398]: eth0: Gained carrier Apr 30 03:28:13.026491 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:28:13.032457 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:28:13.034103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:28:13.038964 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 03:28:13.054583 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 03:28:13.054970 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 03:28:13.055213 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 03:28:13.061619 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 30 03:28:13.061668 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 03:28:13.063314 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 03:28:13.065174 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 03:28:13.066675 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 03:28:13.066749 systemd-timesyncd[1400]: Initial clock synchronization to Wed 2025-04-30 03:28:13.156454 UTC. Apr 30 03:28:13.084050 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:28:13.098148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:28:13.173207 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:28:13.185112 kernel: kvm_amd: TSC scaling supported Apr 30 03:28:13.185153 kernel: kvm_amd: Nested Virtualization enabled Apr 30 03:28:13.185190 kernel: kvm_amd: Nested Paging enabled Apr 30 03:28:13.186470 kernel: kvm_amd: LBR virtualization supported Apr 30 03:28:13.186502 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Apr 30 03:28:13.187158 kernel: kvm_amd: Virtual GIF supported Apr 30 03:28:13.207915 kernel: EDAC MC: Ver: 3.0.0 Apr 30 03:28:13.242537 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:28:13.269207 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
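The DHCPv4 lease logged by systemd-networkd above (10.0.0.97/16 with gateway 10.0.0.1) also explains why systemd-timesyncd can reach its time server at 10.0.0.1:123 without routing through anything else: the gateway address sits inside the leased subnet. A quick check of that relationship, using only values from the log:

```python
import ipaddress

# Values taken from the DHCPv4 lease and timesyncd lines logged above.
iface = ipaddress.ip_interface("10.0.0.97/16")
gateway = ipaddress.ip_address("10.0.0.1")
ntp_server = ipaddress.ip_address("10.0.0.1")  # systemd-timesyncd's contacted server

print(f"leased network:     {iface.network}")                 # 10.0.0.0/16
print(f"gateway on-link:    {gateway in iface.network}")      # True
print(f"NTP server on-link: {ntp_server in iface.network}")   # True
```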
Apr 30 03:28:13.271009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:28:13.279087 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:13.313568 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:28:13.315323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:28:13.316535 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:28:13.317769 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:28:13.319074 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:28:13.320571 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:28:13.321818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:28:13.323099 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:28:13.324369 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:28:13.324396 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:28:13.325328 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:28:13.327070 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:28:13.329942 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:28:13.340865 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:28:13.343323 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:28:13.344989 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:28:13.346184 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:28:13.347210 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:28:13.348231 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:13.348260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:28:13.349654 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:28:13.351919 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:28:13.354065 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:28:13.357148 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:28:13.360137 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:28:13.361653 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:28:13.365952 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:28:13.370498 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:28:13.372924 jq[1431]: false Apr 30 03:28:13.376104 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:28:13.380140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 30 03:28:13.391628 dbus-daemon[1430]: [system] SELinux support is enabled Apr 30 03:28:13.392188 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:28:13.392541 extend-filesystems[1432]: Found loop3 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found loop4 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found loop5 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found sr0 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda1 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda2 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda3 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found usr Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda4 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda6 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda7 Apr 30 03:28:13.392541 extend-filesystems[1432]: Found vda9 Apr 30 03:28:13.392541 extend-filesystems[1432]: Checking size of /dev/vda9 Apr 30 03:28:13.419035 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 03:28:13.419189 extend-filesystems[1432]: Resized partition /dev/vda9 Apr 30 03:28:13.393942 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:28:13.422491 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:28:13.437011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Apr 30 03:28:13.396534 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:28:13.401061 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:28:13.437298 update_engine[1446]: I20250430 03:28:13.434036 1446 main.cc:92] Flatcar Update Engine starting Apr 30 03:28:13.407009 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:28:13.437631 jq[1451]: true Apr 30 03:28:13.410573 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:28:13.450002 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 03:28:13.468292 update_engine[1446]: I20250430 03:28:13.438072 1446 update_check_scheduler.cc:74] Next update check in 5m7s Apr 30 03:28:13.414838 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:28:13.468523 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 03:28:13.468523 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:28:13.468523 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 03:28:13.423622 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:28:13.475273 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Apr 30 03:28:13.423866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:28:13.424329 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:28:13.476687 jq[1457]: true Apr 30 03:28:13.424552 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:28:13.431843 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:28:13.432135 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
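The extend-filesystems output above grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks of 4k each. The corresponding sizes, worked out from the logged numbers as a small sketch (derived figures, not logged ones):

```python
# Block counts and block size as logged by resize2fs for /dev/vda9.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK

print(f"before: {old_bytes / 2**30:.2f} GiB")            # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")            # ~7.11 GiB
print(f"growth: {(new_bytes - old_bytes) / 2**30:.2f} GiB")
```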
Apr 30 03:28:13.459458 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:28:13.469578 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:28:13.469821 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:28:13.482947 systemd-logind[1443]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:28:13.482984 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:28:13.484845 systemd-logind[1443]: New seat seat0. Apr 30 03:28:13.494174 tar[1455]: linux-amd64/helm Apr 30 03:28:13.494023 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:28:13.496604 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:28:13.498202 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:28:13.498233 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:28:13.499726 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:28:13.499747 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:28:13.510431 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:28:13.541792 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:28:13.543694 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:28:13.545764 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:28:13.546779 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 03:28:13.647778 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:28:13.668947 containerd[1458]: time="2025-04-30T03:28:13.666071949Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:28:13.674644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:28:13.683165 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:28:13.692815 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:28:13.693103 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:28:13.698911 containerd[1458]: time="2025-04-30T03:28:13.697604335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.699368 containerd[1458]: time="2025-04-30T03:28:13.699314823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:13.699416 containerd[1458]: time="2025-04-30T03:28:13.699369806Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 30 03:28:13.699416 containerd[1458]: time="2025-04-30T03:28:13.699391918Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:28:13.699623 containerd[1458]: time="2025-04-30T03:28:13.699600228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:28:13.699652 containerd[1458]: time="2025-04-30T03:28:13.699623111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.699713 containerd[1458]: time="2025-04-30T03:28:13.699692832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:13.699713 containerd[1458]: time="2025-04-30T03:28:13.699710014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.699971 containerd[1458]: time="2025-04-30T03:28:13.699942400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700000 containerd[1458]: time="2025-04-30T03:28:13.699969932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700000 containerd[1458]: time="2025-04-30T03:28:13.699988086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700044 containerd[1458]: time="2025-04-30T03:28:13.699998736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700120 containerd[1458]: time="2025-04-30T03:28:13.700098814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700373 containerd[1458]: time="2025-04-30T03:28:13.700350816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700509 containerd[1458]: time="2025-04-30T03:28:13.700486912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:28:13.700509 containerd[1458]: time="2025-04-30T03:28:13.700505807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:28:13.700626 containerd[1458]: time="2025-04-30T03:28:13.700607518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:28:13.700686 containerd[1458]: time="2025-04-30T03:28:13.700669284Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:28:13.702240 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.706941529Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707001150Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707018994Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707034062Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707048559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707184174Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707391402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707508803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707529612Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707544480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707558666Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707571350Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707587100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.708759 containerd[1458]: time="2025-04-30T03:28:13.707600455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707615342Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707627335Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707639037Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707650769Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707670756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707683611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707698519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707710481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707722433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707734266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707745346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707757609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707770263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709050 containerd[1458]: time="2025-04-30T03:28:13.707786634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707798426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707809737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707821670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707845685Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707866724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707878496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707889016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707949499Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707966441Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707981539Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.707993121Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.708003621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.708021013Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:28:13.709289 containerd[1458]: time="2025-04-30T03:28:13.708037524Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:28:13.709546 containerd[1458]: time="2025-04-30T03:28:13.708048926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:28:13.709566 containerd[1458]: time="2025-04-30T03:28:13.708322820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:28:13.709566 containerd[1458]: time="2025-04-30T03:28:13.708379015Z" level=info msg="Connect containerd service" Apr 30 03:28:13.709566 containerd[1458]: time="2025-04-30T03:28:13.708420423Z" level=info msg="using 
legacy CRI server" Apr 30 03:28:13.709566 containerd[1458]: time="2025-04-30T03:28:13.708426975Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:28:13.709566 containerd[1458]: time="2025-04-30T03:28:13.708542512Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:28:13.712448 containerd[1458]: time="2025-04-30T03:28:13.712419444Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:28:13.712773 containerd[1458]: time="2025-04-30T03:28:13.712710339Z" level=info msg="Start subscribing containerd event" Apr 30 03:28:13.712857 containerd[1458]: time="2025-04-30T03:28:13.712840123Z" level=info msg="Start recovering state" Apr 30 03:28:13.713019 containerd[1458]: time="2025-04-30T03:28:13.712999151Z" level=info msg="Start event monitor" Apr 30 03:28:13.713108 containerd[1458]: time="2025-04-30T03:28:13.713088819Z" level=info msg="Start snapshots syncer" Apr 30 03:28:13.713171 containerd[1458]: time="2025-04-30T03:28:13.713156616Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:28:13.713244 containerd[1458]: time="2025-04-30T03:28:13.713227940Z" level=info msg="Start streaming server" Apr 30 03:28:13.713576 containerd[1458]: time="2025-04-30T03:28:13.713519507Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:28:13.713622 containerd[1458]: time="2025-04-30T03:28:13.713606681Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:28:13.713688 containerd[1458]: time="2025-04-30T03:28:13.713670761Z" level=info msg="containerd successfully booted in 0.048974s" Apr 30 03:28:13.713818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:28:13.715390 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:28:13.723211 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:28:13.725662 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:28:13.727033 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:28:13.878966 tar[1455]: linux-amd64/LICENSE Apr 30 03:28:13.879082 tar[1455]: linux-amd64/README.md Apr 30 03:28:13.895607 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:28:14.148887 systemd-networkd[1398]: eth0: Gained IPv6LL Apr 30 03:28:14.153205 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:28:14.155179 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:28:14.167254 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 03:28:14.170502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:14.173318 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:28:14.195465 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 03:28:14.195763 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 03:28:14.197514 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:28:14.200652 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
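containerd's only error above, "no network config found in /etc/cni/net.d", is expected at this point: no CNI plugin has been installed yet. A hypothetical sketch of dropping a minimal bridge conflist into that directory (the directory path comes from the log; the file name, plugin choice, and subnet are assumptions, and a real cluster would normally get this file from its network add-on rather than by hand):

# Hypothetical: write a minimal CNI conflist so the cri plugin's
# "cni plugin not initialized" warning clears. Illustrative only.
import json
import pathlib

conf = {
    "cniVersion": "0.4.0",
    "name": "minimal-bridge",                  # assumed name
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "subnet": "10.85.0.0/16",          # assumed subnet
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}
target = pathlib.Path("/etc/cni/net.d/10-minimal-bridge.conflist")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(conf, indent=2))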
Apr 30 03:28:14.840669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:14.842716 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:28:14.845705 systemd[1]: Startup finished in 1.120s (kernel) + 8.285s (initrd) + 3.804s (userspace) = 13.210s. Apr 30 03:28:14.872600 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:15.353086 kubelet[1542]: E0430 03:28:15.352891 1542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:15.358381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:15.358713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:15.359241 systemd[1]: kubelet.service: Consumed 1.005s CPU time. Apr 30 03:28:16.568721 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:28:16.570611 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110). Apr 30 03:28:16.625471 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:16.627620 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:16.637741 systemd-logind[1443]: New session 1 of user core. Apr 30 03:28:16.639200 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:28:16.661444 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:28:16.676388 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:28:16.686210 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:28:16.689488 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:28:16.809533 systemd[1560]: Queued start job for default target default.target. Apr 30 03:28:16.820042 systemd[1560]: Created slice app.slice - User Application Slice. Apr 30 03:28:16.820080 systemd[1560]: Reached target paths.target - Paths. Apr 30 03:28:16.820095 systemd[1560]: Reached target timers.target - Timers. Apr 30 03:28:16.822460 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:28:16.838092 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:28:16.838326 systemd[1560]: Reached target sockets.target - Sockets. Apr 30 03:28:16.838355 systemd[1560]: Reached target basic.target - Basic System. Apr 30 03:28:16.838464 systemd[1560]: Reached target default.target - Main User Target. Apr 30 03:28:16.838519 systemd[1560]: Startup finished in 141ms. Apr 30 03:28:16.839179 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:28:16.850273 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:28:16.914485 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:36120.service - OpenSSH per-connection server daemon (10.0.0.1:36120). 
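The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and the unit keeps restarting until that file exists. A hypothetical sketch of a minimal KubeletConfiguration at that path (the file path comes from the error message; the content is illustrative, not recovered from this host, with cgroupDriver: systemd chosen to match the SystemdCgroup:true runc option in the containerd config logged earlier):

# Hypothetical minimal KubeletConfiguration; in practice "kubeadm init"/"join"
# generates this file, so treat the content as an illustration only.
import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

target = pathlib.Path("/var/lib/kubelet/config.yaml")  # path from the error above
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(MINIMAL_KUBELET_CONFIG)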
Apr 30 03:28:16.959005 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 36120 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:16.962059 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:16.971601 systemd-logind[1443]: New session 2 of user core. Apr 30 03:28:16.985300 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:28:17.048323 sshd[1571]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:17.908704 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:36120.service: Deactivated successfully. Apr 30 03:28:17.911935 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:28:17.914548 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:28:17.929521 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:36132.service - OpenSSH per-connection server daemon (10.0.0.1:36132). Apr 30 03:28:17.931722 systemd-logind[1443]: Removed session 2. Apr 30 03:28:17.966443 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 36132 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:17.968711 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:17.973995 systemd-logind[1443]: New session 3 of user core. Apr 30 03:28:17.984133 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:28:18.038254 sshd[1578]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:18.049823 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:36132.service: Deactivated successfully. Apr 30 03:28:18.052615 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:28:18.054946 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:28:18.069464 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:36136.service - OpenSSH per-connection server daemon (10.0.0.1:36136). Apr 30 03:28:18.070596 systemd-logind[1443]: Removed session 3. Apr 30 03:28:18.104790 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 36136 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:18.107044 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:18.111568 systemd-logind[1443]: New session 4 of user core. Apr 30 03:28:18.122018 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:28:18.177740 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:18.203232 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:36136.service: Deactivated successfully. Apr 30 03:28:18.205276 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:28:18.207069 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:28:18.216392 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:36146.service - OpenSSH per-connection server daemon (10.0.0.1:36146). Apr 30 03:28:18.217993 systemd-logind[1443]: Removed session 4. Apr 30 03:28:18.254589 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 36146 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:18.256185 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:18.261106 systemd-logind[1443]: New session 5 of user core. Apr 30 03:28:18.277091 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 03:28:18.339399 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:28:18.339867 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:18.362200 sudo[1595]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:18.364451 sshd[1592]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:18.377547 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:36146.service: Deactivated successfully. Apr 30 03:28:18.379746 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:28:18.381559 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:28:18.383220 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:36150.service - OpenSSH per-connection server daemon (10.0.0.1:36150). Apr 30 03:28:18.384096 systemd-logind[1443]: Removed session 5. Apr 30 03:28:18.424357 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 36150 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:18.427121 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:18.434794 systemd-logind[1443]: New session 6 of user core. Apr 30 03:28:18.449047 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:28:18.506521 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:28:18.506994 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:18.510875 sudo[1604]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:18.517425 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:28:18.517775 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:18.537326 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:18.539518 auditctl[1607]: No rules Apr 30 03:28:18.539999 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:28:18.540273 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:18.543209 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:28:18.577004 augenrules[1625]: No rules Apr 30 03:28:18.578999 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:28:18.580315 sudo[1603]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:18.582522 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:18.599172 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:36150.service: Deactivated successfully. Apr 30 03:28:18.601068 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:28:18.602515 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:28:18.603994 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:36152.service - OpenSSH per-connection server daemon (10.0.0.1:36152). Apr 30 03:28:18.604855 systemd-logind[1443]: Removed session 6. Apr 30 03:28:18.644316 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 36152 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:28:18.645999 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:28:18.650094 systemd-logind[1443]: New session 7 of user core. Apr 30 03:28:18.660066 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 30 03:28:18.713738 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:28:18.714200 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:28:19.479315 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:28:19.479502 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:28:20.131926 dockerd[1654]: time="2025-04-30T03:28:20.131811112Z" level=info msg="Starting up" Apr 30 03:28:20.676285 dockerd[1654]: time="2025-04-30T03:28:20.676216417Z" level=info msg="Loading containers: start." Apr 30 03:28:20.819974 kernel: Initializing XFRM netlink socket Apr 30 03:28:20.915339 systemd-networkd[1398]: docker0: Link UP Apr 30 03:28:20.941265 dockerd[1654]: time="2025-04-30T03:28:20.941089550Z" level=info msg="Loading containers: done." Apr 30 03:28:20.966196 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3728714510-merged.mount: Deactivated successfully. Apr 30 03:28:20.969197 dockerd[1654]: time="2025-04-30T03:28:20.969136696Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:28:20.969325 dockerd[1654]: time="2025-04-30T03:28:20.969297303Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:28:20.969509 dockerd[1654]: time="2025-04-30T03:28:20.969481495Z" level=info msg="Daemon has completed initialization" Apr 30 03:28:21.129249 dockerd[1654]: time="2025-04-30T03:28:21.129075649Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:28:21.129984 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:28:22.120145 containerd[1458]: time="2025-04-30T03:28:22.120087350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:28:22.989856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336241020.mount: Deactivated successfully. 
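Once dockerd reports "API listen on /run/docker.sock", the daemon can be inspected over that socket. A small sketch, assuming only the socket path and the overlay2 driver named in the log, that asks the Engine API for its storage driver and version:

# Query the Docker Engine API over the Unix socket announced above and print
# the storage driver (expected: overlay2) and server version (expected: 26.1.0).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a local Unix socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/info")
info = json.loads(conn.getresponse().read())
print(info["Driver"], info["ServerVersion"])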
Apr 30 03:28:25.146965 containerd[1458]: time="2025-04-30T03:28:25.146853585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:25.147709 containerd[1458]: time="2025-04-30T03:28:25.147630805Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:28:25.149075 containerd[1458]: time="2025-04-30T03:28:25.149029575Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:25.152234 containerd[1458]: time="2025-04-30T03:28:25.152168116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:25.153563 containerd[1458]: time="2025-04-30T03:28:25.153490674Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 3.033360853s" Apr 30 03:28:25.153563 containerd[1458]: time="2025-04-30T03:28:25.153539674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:28:25.179590 containerd[1458]: time="2025-04-30T03:28:25.179534846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:28:25.608827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:28:25.647014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:25.832388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:25.840773 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:25.953722 kubelet[1880]: E0430 03:28:25.953560 1880 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:25.961545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:25.961803 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
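From the pull record above, the effective transfer rate for the kube-apiserver image follows directly from the logged size and duration. A quick sketch of that arithmetic (both numbers copied from the containerd "Pulled image" line, nothing assumed):

# Effective pull rate for registry.k8s.io/kube-apiserver:v1.30.12.
size_bytes = 32_671_673      # 'size "32671673"' from the Pulled image line
duration_s = 3.033360853     # 'in 3.033360853s' from the same line
print(f"~{size_bytes / duration_s / 2**20:.1f} MiB/s")  # ~10.3 MiB/s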
Apr 30 03:28:28.315462 containerd[1458]: time="2025-04-30T03:28:28.315363957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:28.316675 containerd[1458]: time="2025-04-30T03:28:28.316587787Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 03:28:28.319171 containerd[1458]: time="2025-04-30T03:28:28.319128988Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:28.324388 containerd[1458]: time="2025-04-30T03:28:28.324311321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:28.325576 containerd[1458]: time="2025-04-30T03:28:28.325504732Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 3.145916214s" Apr 30 03:28:28.325576 containerd[1458]: time="2025-04-30T03:28:28.325563685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:28:28.403551 containerd[1458]: time="2025-04-30T03:28:28.403475533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:28:30.511688 containerd[1458]: time="2025-04-30T03:28:30.511599786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.512476 containerd[1458]: time="2025-04-30T03:28:30.512428764Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 03:28:30.513938 containerd[1458]: time="2025-04-30T03:28:30.513861315Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.520228 containerd[1458]: time="2025-04-30T03:28:30.520189881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:30.521864 containerd[1458]: time="2025-04-30T03:28:30.521803585Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.118275034s" Apr 30 03:28:30.521864 containerd[1458]: time="2025-04-30T03:28:30.521845600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:28:30.551572 
containerd[1458]: time="2025-04-30T03:28:30.551513898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:28:31.757787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194662362.mount: Deactivated successfully. Apr 30 03:28:32.638767 containerd[1458]: time="2025-04-30T03:28:32.638678628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.640031 containerd[1458]: time="2025-04-30T03:28:32.639984922Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 03:28:32.641340 containerd[1458]: time="2025-04-30T03:28:32.641307211Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.643749 containerd[1458]: time="2025-04-30T03:28:32.643711452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:32.644460 containerd[1458]: time="2025-04-30T03:28:32.644405378Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.092838742s" Apr 30 03:28:32.644500 containerd[1458]: time="2025-04-30T03:28:32.644458682Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:28:32.671519 containerd[1458]: time="2025-04-30T03:28:32.671457945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:28:33.339990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883547594.mount: Deactivated successfully. 
Apr 30 03:28:34.159071 containerd[1458]: time="2025-04-30T03:28:34.159010214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:34.159886 containerd[1458]: time="2025-04-30T03:28:34.159811818Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:28:34.161049 containerd[1458]: time="2025-04-30T03:28:34.161013077Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:34.164816 containerd[1458]: time="2025-04-30T03:28:34.164784731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:34.166014 containerd[1458]: time="2025-04-30T03:28:34.165964312Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.494458291s" Apr 30 03:28:34.166054 containerd[1458]: time="2025-04-30T03:28:34.166017083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:28:34.190024 containerd[1458]: time="2025-04-30T03:28:34.189982104Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:28:35.295818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942643837.mount: Deactivated successfully. 
Apr 30 03:28:35.301890 containerd[1458]: time="2025-04-30T03:28:35.301831897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:35.302667 containerd[1458]: time="2025-04-30T03:28:35.302575946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 03:28:35.303871 containerd[1458]: time="2025-04-30T03:28:35.303821561Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:35.306155 containerd[1458]: time="2025-04-30T03:28:35.306107253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:35.307005 containerd[1458]: time="2025-04-30T03:28:35.306956832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.116933659s" Apr 30 03:28:35.307005 containerd[1458]: time="2025-04-30T03:28:35.306991200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:28:35.392859 containerd[1458]: time="2025-04-30T03:28:35.392806453Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:28:36.060632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:28:36.074221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:36.242489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:36.248213 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:28:36.497816 kubelet[1991]: E0430 03:28:36.497316 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:28:36.502960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:28:36.503202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:28:36.576284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920283398.mount: Deactivated successfully. 
Apr 30 03:28:40.085859 containerd[1458]: time="2025-04-30T03:28:40.085754048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:40.091683 containerd[1458]: time="2025-04-30T03:28:40.091632903Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 03:28:40.093598 containerd[1458]: time="2025-04-30T03:28:40.093563948Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:40.097364 containerd[1458]: time="2025-04-30T03:28:40.097290655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:28:40.098655 containerd[1458]: time="2025-04-30T03:28:40.098616131Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.705773849s" Apr 30 03:28:40.098719 containerd[1458]: time="2025-04-30T03:28:40.098658305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:28:42.869320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:42.883513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:42.907107 systemd[1]: Reloading requested from client PID 2125 ('systemctl') (unit session-7.scope)... Apr 30 03:28:42.907147 systemd[1]: Reloading... Apr 30 03:28:43.005933 zram_generator::config[2165]: No configuration found. Apr 30 03:28:43.516855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:43.601712 systemd[1]: Reloading finished in 693 ms. Apr 30 03:28:43.653496 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:28:43.653594 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:28:43.653876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:43.656661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:43.812225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:43.818098 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:43.900492 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:43.900492 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 03:28:43.900492 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:43.905188 kubelet[2213]: I0430 03:28:43.905125 2213 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:44.349350 kubelet[2213]: I0430 03:28:44.349277 2213 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:44.349350 kubelet[2213]: I0430 03:28:44.349326 2213 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:44.349611 kubelet[2213]: I0430 03:28:44.349583 2213 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:44.368747 kubelet[2213]: I0430 03:28:44.368655 2213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:44.369698 kubelet[2213]: E0430 03:28:44.369662 2213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.386753 kubelet[2213]: I0430 03:28:44.386708 2213 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:44.388491 kubelet[2213]: I0430 03:28:44.388432 2213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:44.388671 kubelet[2213]: I0430 03:28:44.388480 2213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:44.389275 kubelet[2213]: I0430 03:28:44.389245 2213 topology_manager.go:138] "Creating topology manager with 
none policy" Apr 30 03:28:44.389275 kubelet[2213]: I0430 03:28:44.389266 2213 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:44.389452 kubelet[2213]: I0430 03:28:44.389427 2213 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:44.390349 kubelet[2213]: I0430 03:28:44.390331 2213 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:44.390349 kubelet[2213]: I0430 03:28:44.390351 2213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:44.390439 kubelet[2213]: I0430 03:28:44.390386 2213 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:44.390439 kubelet[2213]: I0430 03:28:44.390406 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:44.391728 kubelet[2213]: W0430 03:28:44.391636 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.391728 kubelet[2213]: E0430 03:28:44.391699 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.395709 kubelet[2213]: W0430 03:28:44.395620 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.395709 kubelet[2213]: E0430 03:28:44.395695 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.397300 kubelet[2213]: I0430 03:28:44.397269 2213 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:44.399509 kubelet[2213]: I0430 03:28:44.399341 2213 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:44.399509 kubelet[2213]: W0430 03:28:44.399511 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:28:44.400349 kubelet[2213]: I0430 03:28:44.400330 2213 server.go:1264] "Started kubelet" Apr 30 03:28:44.402232 kubelet[2213]: I0430 03:28:44.401570 2213 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:44.402575 kubelet[2213]: I0430 03:28:44.402532 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:44.404501 kubelet[2213]: I0430 03:28:44.402873 2213 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:44.404501 kubelet[2213]: I0430 03:28:44.404269 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:44.405525 kubelet[2213]: I0430 03:28:44.405019 2213 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:44.406983 kubelet[2213]: E0430 03:28:44.406950 2213 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:44.407318 kubelet[2213]: I0430 03:28:44.407293 2213 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:44.408015 kubelet[2213]: I0430 03:28:44.407986 2213 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:44.408379 kubelet[2213]: I0430 03:28:44.408216 2213 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:44.408379 kubelet[2213]: E0430 03:28:44.408316 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Apr 30 03:28:44.408471 kubelet[2213]: W0430 03:28:44.408410 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.408471 kubelet[2213]: E0430 03:28:44.408467 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.411098 kubelet[2213]: E0430 03:28:44.410491 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183afaf25c7e5e13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 03:28:44.400303635 +0000 UTC m=+0.577849568,LastTimestamp:2025-04-30 03:28:44.400303635 +0000 UTC m=+0.577849568,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 03:28:44.411231 kubelet[2213]: I0430 03:28:44.411155 2213 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:44.411231 kubelet[2213]: I0430 03:28:44.411169 2213 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:44.411303 kubelet[2213]: I0430 03:28:44.411239 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:44.421850 kubelet[2213]: I0430 03:28:44.421782 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:44.423504 kubelet[2213]: I0430 03:28:44.423457 2213 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:28:44.423504 kubelet[2213]: I0430 03:28:44.423499 2213 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:44.423665 kubelet[2213]: I0430 03:28:44.423533 2213 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:44.423665 kubelet[2213]: E0430 03:28:44.423581 2213 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:44.426301 kubelet[2213]: W0430 03:28:44.426194 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.426301 kubelet[2213]: E0430 03:28:44.426288 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:44.434054 kubelet[2213]: I0430 03:28:44.434014 2213 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:44.434054 kubelet[2213]: I0430 03:28:44.434031 2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:44.434054 kubelet[2213]: I0430 03:28:44.434047 2213 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:44.444634 kubelet[2213]: I0430 03:28:44.444581 2213 policy_none.go:49] "None policy: Start" Apr 30 03:28:44.445492 kubelet[2213]: I0430 03:28:44.445455 2213 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:44.445492 kubelet[2213]: I0430 03:28:44.445498 2213 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:44.454646 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:28:44.478810 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:28:44.482758 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 03:28:44.493484 kubelet[2213]: I0430 03:28:44.493388 2213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:44.493839 kubelet[2213]: I0430 03:28:44.493791 2213 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:44.494105 kubelet[2213]: I0430 03:28:44.494037 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:44.495932 kubelet[2213]: E0430 03:28:44.495860 2213 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 03:28:44.509848 kubelet[2213]: I0430 03:28:44.509798 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:44.510301 kubelet[2213]: E0430 03:28:44.510236 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Apr 30 03:28:44.524736 kubelet[2213]: I0430 03:28:44.524643 2213 topology_manager.go:215] "Topology Admit Handler" podUID="61558669f0feab7b6e3b8ffb6556fcc2" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 30 03:28:44.526398 kubelet[2213]: I0430 03:28:44.526354 2213 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 30 03:28:44.527450 kubelet[2213]: I0430 03:28:44.527414 2213 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 30 03:28:44.534997 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. Apr 30 03:28:44.559469 systemd[1]: Created slice kubepods-burstable-pod61558669f0feab7b6e3b8ffb6556fcc2.slice - libcontainer container kubepods-burstable-pod61558669f0feab7b6e3b8ffb6556fcc2.slice. Apr 30 03:28:44.564201 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
Apr 30 03:28:44.609242 kubelet[2213]: E0430 03:28:44.609075 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Apr 30 03:28:44.708655 kubelet[2213]: I0430 03:28:44.708535 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:44.708655 kubelet[2213]: I0430 03:28:44.708620 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:44.708655 kubelet[2213]: I0430 03:28:44.708642 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:44.708963 kubelet[2213]: I0430 03:28:44.708664 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:44.708963 kubelet[2213]: I0430 03:28:44.708730 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" Apr 30 03:28:44.708963 kubelet[2213]: I0430 03:28:44.708756 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61558669f0feab7b6e3b8ffb6556fcc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61558669f0feab7b6e3b8ffb6556fcc2\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:28:44.708963 kubelet[2213]: I0430 03:28:44.708774 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61558669f0feab7b6e3b8ffb6556fcc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61558669f0feab7b6e3b8ffb6556fcc2\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:28:44.708963 kubelet[2213]: I0430 03:28:44.708829 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61558669f0feab7b6e3b8ffb6556fcc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61558669f0feab7b6e3b8ffb6556fcc2\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:28:44.709163 kubelet[2213]: I0430 
03:28:44.709130 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:44.714499 kubelet[2213]: I0430 03:28:44.714458 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:44.715007 kubelet[2213]: E0430 03:28:44.714975 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Apr 30 03:28:44.855938 kubelet[2213]: E0430 03:28:44.855842 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:44.856796 containerd[1458]: time="2025-04-30T03:28:44.856738275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:44.863169 kubelet[2213]: E0430 03:28:44.863060 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:44.863691 containerd[1458]: time="2025-04-30T03:28:44.863630502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61558669f0feab7b6e3b8ffb6556fcc2,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:44.866981 kubelet[2213]: E0430 03:28:44.866944 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:44.867396 containerd[1458]: time="2025-04-30T03:28:44.867348316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" Apr 30 03:28:45.010090 kubelet[2213]: E0430 03:28:45.010023 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Apr 30 03:28:45.117155 kubelet[2213]: I0430 03:28:45.117024 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:45.117464 kubelet[2213]: E0430 03:28:45.117427 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Apr 30 03:28:45.466529 kubelet[2213]: W0430 03:28:45.466370 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.466529 kubelet[2213]: E0430 03:28:45.466428 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.584935 kubelet[2213]: W0430 03:28:45.584833 
2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.584935 kubelet[2213]: E0430 03:28:45.584921 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.811033 kubelet[2213]: E0430 03:28:45.810972 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Apr 30 03:28:45.914052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182971411.mount: Deactivated successfully. Apr 30 03:28:45.914491 kubelet[2213]: W0430 03:28:45.914393 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.914491 kubelet[2213]: E0430 03:28:45.914428 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.918673 kubelet[2213]: I0430 03:28:45.918653 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:45.918929 kubelet[2213]: E0430 03:28:45.918908 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Apr 30 03:28:45.925729 containerd[1458]: time="2025-04-30T03:28:45.925683627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:45.926744 containerd[1458]: time="2025-04-30T03:28:45.926652600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:45.927540 containerd[1458]: time="2025-04-30T03:28:45.927491623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:45.928557 containerd[1458]: time="2025-04-30T03:28:45.928506881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:45.929465 containerd[1458]: time="2025-04-30T03:28:45.929429327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:28:45.930280 containerd[1458]: time="2025-04-30T03:28:45.930237756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:28:45.931264 containerd[1458]: time="2025-04-30T03:28:45.931189443Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:45.937392 containerd[1458]: time="2025-04-30T03:28:45.937339857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:28:45.938353 containerd[1458]: time="2025-04-30T03:28:45.938312397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.070879605s" Apr 30 03:28:45.942010 containerd[1458]: time="2025-04-30T03:28:45.941953911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.085127061s" Apr 30 03:28:45.942952 containerd[1458]: time="2025-04-30T03:28:45.942880987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.079148322s" Apr 30 03:28:45.953672 kubelet[2213]: W0430 03:28:45.953604 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:45.953672 kubelet[2213]: E0430 03:28:45.953660 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:46.220118 containerd[1458]: time="2025-04-30T03:28:46.219407423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:46.220118 containerd[1458]: time="2025-04-30T03:28:46.219491837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:46.220118 containerd[1458]: time="2025-04-30T03:28:46.219518491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:46.220118 containerd[1458]: time="2025-04-30T03:28:46.219633728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:46.220118 containerd[1458]: time="2025-04-30T03:28:46.219623377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:46.220118 containerd[1458]: time="2025-04-30T03:28:46.219730658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:46.223419 containerd[1458]: time="2025-04-30T03:28:46.222660438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:46.223419 containerd[1458]: time="2025-04-30T03:28:46.222812812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:46.226223 containerd[1458]: time="2025-04-30T03:28:46.226141253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:28:46.226223 containerd[1458]: time="2025-04-30T03:28:46.226193290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:28:46.226223 containerd[1458]: time="2025-04-30T03:28:46.226206837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:46.226340 containerd[1458]: time="2025-04-30T03:28:46.226283525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:28:46.286122 systemd[1]: Started cri-containerd-fb17aabe530f43162263b9a451330eea52853d56cebef8df3806d3cdb2a8fd55.scope - libcontainer container fb17aabe530f43162263b9a451330eea52853d56cebef8df3806d3cdb2a8fd55. Apr 30 03:28:46.291459 systemd[1]: Started cri-containerd-11ef4cd9bbe7496125ef18651bdc5b17e1ce20647469bf5aa6e1d3451b8a719a.scope - libcontainer container 11ef4cd9bbe7496125ef18651bdc5b17e1ce20647469bf5aa6e1d3451b8a719a. Apr 30 03:28:46.293614 systemd[1]: Started cri-containerd-89c0b73d1c8d52c08b80cae756abb15bc895cf1901f37126b4d8d1f282571c0c.scope - libcontainer container 89c0b73d1c8d52c08b80cae756abb15bc895cf1901f37126b4d8d1f282571c0c. 
Apr 30 03:28:46.367963 containerd[1458]: time="2025-04-30T03:28:46.347038673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61558669f0feab7b6e3b8ffb6556fcc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"11ef4cd9bbe7496125ef18651bdc5b17e1ce20647469bf5aa6e1d3451b8a719a\"" Apr 30 03:28:46.367963 containerd[1458]: time="2025-04-30T03:28:46.366770706Z" level=info msg="CreateContainer within sandbox \"11ef4cd9bbe7496125ef18651bdc5b17e1ce20647469bf5aa6e1d3451b8a719a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:28:46.368317 kubelet[2213]: E0430 03:28:46.357484 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:46.390228 containerd[1458]: time="2025-04-30T03:28:46.390141489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb17aabe530f43162263b9a451330eea52853d56cebef8df3806d3cdb2a8fd55\"" Apr 30 03:28:46.392302 kubelet[2213]: E0430 03:28:46.391925 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:46.392473 containerd[1458]: time="2025-04-30T03:28:46.392008176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c0b73d1c8d52c08b80cae756abb15bc895cf1901f37126b4d8d1f282571c0c\"" Apr 30 03:28:46.393303 kubelet[2213]: E0430 03:28:46.393270 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:46.398277 containerd[1458]: time="2025-04-30T03:28:46.398098423Z" level=info msg="CreateContainer within sandbox \"89c0b73d1c8d52c08b80cae756abb15bc895cf1901f37126b4d8d1f282571c0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:28:46.398277 containerd[1458]: time="2025-04-30T03:28:46.398157924Z" level=info msg="CreateContainer within sandbox \"fb17aabe530f43162263b9a451330eea52853d56cebef8df3806d3cdb2a8fd55\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:28:46.501843 kubelet[2213]: E0430 03:28:46.501675 2213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:47.213249 kubelet[2213]: W0430 03:28:47.213152 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:47.213249 kubelet[2213]: E0430 03:28:47.213229 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Apr 30 03:28:47.389369 containerd[1458]: time="2025-04-30T03:28:47.389281866Z" level=info 
msg="CreateContainer within sandbox \"11ef4cd9bbe7496125ef18651bdc5b17e1ce20647469bf5aa6e1d3451b8a719a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6db1282f0c9fbc0bafb8b1d95f356656c7413810fe20304d7162fedcc7a115c\"" Apr 30 03:28:47.390346 containerd[1458]: time="2025-04-30T03:28:47.390310210Z" level=info msg="StartContainer for \"c6db1282f0c9fbc0bafb8b1d95f356656c7413810fe20304d7162fedcc7a115c\"" Apr 30 03:28:47.412096 kubelet[2213]: E0430 03:28:47.412035 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="3.2s" Apr 30 03:28:47.430187 systemd[1]: Started cri-containerd-c6db1282f0c9fbc0bafb8b1d95f356656c7413810fe20304d7162fedcc7a115c.scope - libcontainer container c6db1282f0c9fbc0bafb8b1d95f356656c7413810fe20304d7162fedcc7a115c. Apr 30 03:28:47.581524 kubelet[2213]: I0430 03:28:47.520778 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:47.581524 kubelet[2213]: E0430 03:28:47.521218 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Apr 30 03:28:47.911198 containerd[1458]: time="2025-04-30T03:28:47.911041014Z" level=info msg="StartContainer for \"c6db1282f0c9fbc0bafb8b1d95f356656c7413810fe20304d7162fedcc7a115c\" returns successfully" Apr 30 03:28:47.972247 containerd[1458]: time="2025-04-30T03:28:47.972177734Z" level=info msg="CreateContainer within sandbox \"fb17aabe530f43162263b9a451330eea52853d56cebef8df3806d3cdb2a8fd55\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fad0f8655eddf8ce62ccaa79db09279289103272bae74942f0bd1a8f8e82f7cd\"" Apr 30 03:28:47.973159 containerd[1458]: time="2025-04-30T03:28:47.973075800Z" level=info msg="StartContainer for \"fad0f8655eddf8ce62ccaa79db09279289103272bae74942f0bd1a8f8e82f7cd\"" Apr 30 03:28:47.991251 containerd[1458]: time="2025-04-30T03:28:47.991198857Z" level=info msg="CreateContainer within sandbox \"89c0b73d1c8d52c08b80cae756abb15bc895cf1901f37126b4d8d1f282571c0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"932b57bd4b0de2e131f2a958a6babe59b95573347220155ef00effa959300621\"" Apr 30 03:28:47.991987 containerd[1458]: time="2025-04-30T03:28:47.991941616Z" level=info msg="StartContainer for \"932b57bd4b0de2e131f2a958a6babe59b95573347220155ef00effa959300621\"" Apr 30 03:28:48.106207 systemd[1]: Started cri-containerd-fad0f8655eddf8ce62ccaa79db09279289103272bae74942f0bd1a8f8e82f7cd.scope - libcontainer container fad0f8655eddf8ce62ccaa79db09279289103272bae74942f0bd1a8f8e82f7cd. Apr 30 03:28:48.142268 systemd[1]: Started cri-containerd-932b57bd4b0de2e131f2a958a6babe59b95573347220155ef00effa959300621.scope - libcontainer container 932b57bd4b0de2e131f2a958a6babe59b95573347220155ef00effa959300621. 
Apr 30 03:28:48.175997 containerd[1458]: time="2025-04-30T03:28:48.175327795Z" level=info msg="StartContainer for \"fad0f8655eddf8ce62ccaa79db09279289103272bae74942f0bd1a8f8e82f7cd\" returns successfully" Apr 30 03:28:48.232885 containerd[1458]: time="2025-04-30T03:28:48.232812228Z" level=info msg="StartContainer for \"932b57bd4b0de2e131f2a958a6babe59b95573347220155ef00effa959300621\" returns successfully" Apr 30 03:28:48.491124 kubelet[2213]: E0430 03:28:48.449951 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:48.491124 kubelet[2213]: E0430 03:28:48.450957 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:48.492793 kubelet[2213]: E0430 03:28:48.492657 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:48.910649 systemd[1]: run-containerd-runc-k8s.io-932b57bd4b0de2e131f2a958a6babe59b95573347220155ef00effa959300621-runc.4pARZJ.mount: Deactivated successfully. Apr 30 03:28:49.393780 kubelet[2213]: I0430 03:28:49.393732 2213 apiserver.go:52] "Watching apiserver" Apr 30 03:28:49.408082 kubelet[2213]: I0430 03:28:49.407990 2213 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:49.456338 kubelet[2213]: E0430 03:28:49.456276 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:49.456529 kubelet[2213]: E0430 03:28:49.456468 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:49.457196 kubelet[2213]: E0430 03:28:49.457073 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:49.520700 kubelet[2213]: E0430 03:28:49.520642 2213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 03:28:49.887945 kubelet[2213]: E0430 03:28:49.887873 2213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 03:28:50.427973 kubelet[2213]: E0430 03:28:50.427934 2213 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 03:28:50.456388 kubelet[2213]: E0430 03:28:50.456337 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:50.616232 kubelet[2213]: E0430 03:28:50.616174 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 03:28:50.723022 kubelet[2213]: I0430 03:28:50.722865 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:50.729388 
kubelet[2213]: I0430 03:28:50.729133 2213 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 03:28:51.019784 kubelet[2213]: E0430 03:28:51.019618 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:51.459828 kubelet[2213]: E0430 03:28:51.457428 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:51.825646 systemd[1]: Reloading requested from client PID 2496 ('systemctl') (unit session-7.scope)... Apr 30 03:28:51.825666 systemd[1]: Reloading... Apr 30 03:28:51.907927 zram_generator::config[2535]: No configuration found. Apr 30 03:28:52.037909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:28:52.135569 systemd[1]: Reloading finished in 309 ms. Apr 30 03:28:52.179580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:52.193903 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:28:52.194196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:52.194258 systemd[1]: kubelet.service: Consumed 1.148s CPU time, 121.4M memory peak, 0B memory swap peak. Apr 30 03:28:52.201213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:28:52.373581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:28:52.380410 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:28:52.423739 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:52.423739 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:28:52.423739 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:28:52.423739 kubelet[2580]: I0430 03:28:52.423690 2580 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:28:52.429624 kubelet[2580]: I0430 03:28:52.429572 2580 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:28:52.429624 kubelet[2580]: I0430 03:28:52.429610 2580 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:28:52.429854 kubelet[2580]: I0430 03:28:52.429834 2580 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:28:52.432316 kubelet[2580]: I0430 03:28:52.431967 2580 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 03:28:52.433939 kubelet[2580]: I0430 03:28:52.433863 2580 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:28:52.441793 kubelet[2580]: I0430 03:28:52.441753 2580 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:28:52.442242 kubelet[2580]: I0430 03:28:52.442033 2580 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:28:52.442242 kubelet[2580]: I0430 03:28:52.442068 2580 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:28:52.442395 kubelet[2580]: I0430 03:28:52.442268 2580 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:28:52.442395 kubelet[2580]: I0430 03:28:52.442277 2580 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:28:52.442395 kubelet[2580]: I0430 03:28:52.442327 2580 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:52.442507 kubelet[2580]: I0430 03:28:52.442462 2580 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:28:52.442507 kubelet[2580]: I0430 03:28:52.442472 2580 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:28:52.442507 kubelet[2580]: I0430 03:28:52.442496 2580 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:28:52.442600 kubelet[2580]: I0430 03:28:52.442514 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:28:52.444301 kubelet[2580]: I0430 03:28:52.444270 2580 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:28:52.444530 kubelet[2580]: I0430 03:28:52.444441 2580 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:28:52.445664 kubelet[2580]: I0430 03:28:52.445639 2580 server.go:1264] "Started kubelet" Apr 30 03:28:52.446765 kubelet[2580]: I0430 
03:28:52.446718 2580 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:28:52.447547 kubelet[2580]: I0430 03:28:52.447526 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:28:52.447624 kubelet[2580]: I0430 03:28:52.447611 2580 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:28:52.448448 kubelet[2580]: I0430 03:28:52.447756 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:28:52.448448 kubelet[2580]: I0430 03:28:52.448125 2580 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:28:52.458953 kubelet[2580]: I0430 03:28:52.456585 2580 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:28:52.458953 kubelet[2580]: I0430 03:28:52.456771 2580 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:28:52.458953 kubelet[2580]: I0430 03:28:52.457007 2580 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:28:52.458953 kubelet[2580]: E0430 03:28:52.457838 2580 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:28:52.458953 kubelet[2580]: I0430 03:28:52.458072 2580 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:28:52.458953 kubelet[2580]: I0430 03:28:52.458204 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:28:52.460185 kubelet[2580]: I0430 03:28:52.459808 2580 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:28:52.464500 kubelet[2580]: I0430 03:28:52.464424 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:28:52.466036 kubelet[2580]: I0430 03:28:52.465994 2580 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:28:52.466036 kubelet[2580]: I0430 03:28:52.466035 2580 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:28:52.466134 kubelet[2580]: I0430 03:28:52.466054 2580 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:28:52.466134 kubelet[2580]: E0430 03:28:52.466098 2580 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:28:52.496510 kubelet[2580]: I0430 03:28:52.496472 2580 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:28:52.496510 kubelet[2580]: I0430 03:28:52.496490 2580 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:28:52.496510 kubelet[2580]: I0430 03:28:52.496510 2580 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:28:52.496732 kubelet[2580]: I0430 03:28:52.496681 2580 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:28:52.496732 kubelet[2580]: I0430 03:28:52.496692 2580 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:28:52.496732 kubelet[2580]: I0430 03:28:52.496710 2580 policy_none.go:49] "None policy: Start" Apr 30 03:28:52.497390 kubelet[2580]: I0430 03:28:52.497352 2580 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:28:52.497390 kubelet[2580]: I0430 03:28:52.497380 2580 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:28:52.497519 kubelet[2580]: I0430 03:28:52.497503 2580 state_mem.go:75] "Updated machine memory state" Apr 30 03:28:52.502139 kubelet[2580]: I0430 03:28:52.502105 2580 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:28:52.502586 kubelet[2580]: I0430 03:28:52.502368 2580 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:28:52.502586 kubelet[2580]: I0430 03:28:52.502501 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:28:52.562539 kubelet[2580]: I0430 03:28:52.562497 2580 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 03:28:52.566758 kubelet[2580]: I0430 03:28:52.566703 2580 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 30 03:28:52.566818 kubelet[2580]: I0430 03:28:52.566800 2580 topology_manager.go:215] "Topology Admit Handler" podUID="61558669f0feab7b6e3b8ffb6556fcc2" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 30 03:28:52.566892 kubelet[2580]: I0430 03:28:52.566869 2580 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 30 03:28:52.705155 kubelet[2580]: E0430 03:28:52.704981 2580 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:52.758428 kubelet[2580]: I0430 03:28:52.758345 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:52.758428 
kubelet[2580]: I0430 03:28:52.758407 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61558669f0feab7b6e3b8ffb6556fcc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61558669f0feab7b6e3b8ffb6556fcc2\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:28:52.758428 kubelet[2580]: I0430 03:28:52.758426 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61558669f0feab7b6e3b8ffb6556fcc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61558669f0feab7b6e3b8ffb6556fcc2\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:28:52.758428 kubelet[2580]: I0430 03:28:52.758441 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:52.758714 kubelet[2580]: I0430 03:28:52.758457 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:52.758714 kubelet[2580]: I0430 03:28:52.758475 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:52.758714 kubelet[2580]: I0430 03:28:52.758492 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" Apr 30 03:28:52.758714 kubelet[2580]: I0430 03:28:52.758573 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61558669f0feab7b6e3b8ffb6556fcc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61558669f0feab7b6e3b8ffb6556fcc2\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:28:52.758714 kubelet[2580]: I0430 03:28:52.758632 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:28:52.772840 kubelet[2580]: I0430 03:28:52.772777 2580 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Apr 30 03:28:52.773027 kubelet[2580]: I0430 03:28:52.772928 2580 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 03:28:52.984035 kubelet[2580]: E0430 03:28:52.983795 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:52.984238 kubelet[2580]: E0430 03:28:52.984124 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:53.006444 kubelet[2580]: E0430 03:28:53.006394 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:53.443477 kubelet[2580]: I0430 03:28:53.443424 2580 apiserver.go:52] "Watching apiserver" Apr 30 03:28:53.458011 kubelet[2580]: I0430 03:28:53.457949 2580 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:28:53.477136 kubelet[2580]: E0430 03:28:53.477064 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:53.477136 kubelet[2580]: E0430 03:28:53.477123 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:53.603765 kubelet[2580]: E0430 03:28:53.603626 2580 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 30 03:28:53.604472 kubelet[2580]: E0430 03:28:53.603952 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:53.604472 kubelet[2580]: I0430 03:28:53.604159 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.604141337 podStartE2EDuration="1.604141337s" podCreationTimestamp="2025-04-30 03:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:53.604050987 +0000 UTC m=+1.217458631" watchObservedRunningTime="2025-04-30 03:28:53.604141337 +0000 UTC m=+1.217548951" Apr 30 03:28:53.762768 kubelet[2580]: I0430 03:28:53.762579 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.762560356 podStartE2EDuration="3.762560356s" podCreationTimestamp="2025-04-30 03:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:53.753594879 +0000 UTC m=+1.367002503" watchObservedRunningTime="2025-04-30 03:28:53.762560356 +0000 UTC m=+1.375967970" Apr 30 03:28:53.774626 kubelet[2580]: I0430 03:28:53.774543 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.774519251 podStartE2EDuration="1.774519251s" podCreationTimestamp="2025-04-30 03:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:28:53.762681318 +0000 UTC m=+1.376088942" watchObservedRunningTime="2025-04-30 03:28:53.774519251 +0000 UTC m=+1.387926865" Apr 30 03:28:54.478523 kubelet[2580]: E0430 03:28:54.478469 2580 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:54.479086 kubelet[2580]: E0430 03:28:54.478784 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:56.893940 kubelet[2580]: E0430 03:28:56.893843 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:58.640930 update_engine[1446]: I20250430 03:28:58.640560 1446 update_attempter.cc:509] Updating boot flags... Apr 30 03:28:58.689928 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2661) Apr 30 03:28:58.733940 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2660) Apr 30 03:28:58.788300 sudo[1636]: pam_unix(sudo:session): session closed for user root Apr 30 03:28:58.790643 sshd[1633]: pam_unix(sshd:session): session closed for user core Apr 30 03:28:58.795212 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:36152.service: Deactivated successfully. Apr 30 03:28:58.797694 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:28:58.797910 systemd[1]: session-7.scope: Consumed 5.813s CPU time, 192.9M memory peak, 0B memory swap peak. Apr 30 03:28:58.798471 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:28:58.799685 systemd-logind[1443]: Removed session 7. Apr 30 03:28:59.181063 kubelet[2580]: E0430 03:28:59.181011 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:28:59.487407 kubelet[2580]: E0430 03:28:59.487250 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:01.814279 kubelet[2580]: E0430 03:29:01.813825 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:02.492926 kubelet[2580]: E0430 03:29:02.492844 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:05.568556 kubelet[2580]: I0430 03:29:05.568397 2580 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:29:05.569431 kubelet[2580]: I0430 03:29:05.569145 2580 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:29:05.569528 containerd[1458]: time="2025-04-30T03:29:05.568978052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:29:06.374283 kubelet[2580]: I0430 03:29:06.373438 2580 topology_manager.go:215] "Topology Admit Handler" podUID="4dea8339-4f89-450a-ad85-96088eccfe38" podNamespace="kube-system" podName="kube-proxy-sngdb" Apr 30 03:29:06.384242 systemd[1]: Created slice kubepods-besteffort-pod4dea8339_4f89_450a_ad85_96088eccfe38.slice - libcontainer container kubepods-besteffort-pod4dea8339_4f89_450a_ad85_96088eccfe38.slice. 
Apr 30 03:29:06.444036 kubelet[2580]: I0430 03:29:06.443964 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2snj\" (UniqueName: \"kubernetes.io/projected/4dea8339-4f89-450a-ad85-96088eccfe38-kube-api-access-x2snj\") pod \"kube-proxy-sngdb\" (UID: \"4dea8339-4f89-450a-ad85-96088eccfe38\") " pod="kube-system/kube-proxy-sngdb" Apr 30 03:29:06.444036 kubelet[2580]: I0430 03:29:06.444014 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4dea8339-4f89-450a-ad85-96088eccfe38-kube-proxy\") pod \"kube-proxy-sngdb\" (UID: \"4dea8339-4f89-450a-ad85-96088eccfe38\") " pod="kube-system/kube-proxy-sngdb" Apr 30 03:29:06.444036 kubelet[2580]: I0430 03:29:06.444035 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dea8339-4f89-450a-ad85-96088eccfe38-xtables-lock\") pod \"kube-proxy-sngdb\" (UID: \"4dea8339-4f89-450a-ad85-96088eccfe38\") " pod="kube-system/kube-proxy-sngdb" Apr 30 03:29:06.444036 kubelet[2580]: I0430 03:29:06.444051 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dea8339-4f89-450a-ad85-96088eccfe38-lib-modules\") pod \"kube-proxy-sngdb\" (UID: \"4dea8339-4f89-450a-ad85-96088eccfe38\") " pod="kube-system/kube-proxy-sngdb" Apr 30 03:29:06.694797 kubelet[2580]: E0430 03:29:06.694648 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:06.695670 containerd[1458]: time="2025-04-30T03:29:06.695614279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sngdb,Uid:4dea8339-4f89-450a-ad85-96088eccfe38,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:06.808581 kubelet[2580]: I0430 03:29:06.808522 2580 topology_manager.go:215] "Topology Admit Handler" podUID="c6875490-1f5d-4d71-a34f-51a87faad9a4" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-47qt6" Apr 30 03:29:06.816622 systemd[1]: Created slice kubepods-besteffort-podc6875490_1f5d_4d71_a34f_51a87faad9a4.slice - libcontainer container kubepods-besteffort-podc6875490_1f5d_4d71_a34f_51a87faad9a4.slice. 
Apr 30 03:29:06.944233 kubelet[2580]: E0430 03:29:06.944146 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:06.947593 kubelet[2580]: I0430 03:29:06.947421 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6875490-1f5d-4d71-a34f-51a87faad9a4-var-lib-calico\") pod \"tigera-operator-797db67f8-47qt6\" (UID: \"c6875490-1f5d-4d71-a34f-51a87faad9a4\") " pod="tigera-operator/tigera-operator-797db67f8-47qt6" Apr 30 03:29:06.947593 kubelet[2580]: I0430 03:29:06.947465 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2dm\" (UniqueName: \"kubernetes.io/projected/c6875490-1f5d-4d71-a34f-51a87faad9a4-kube-api-access-zk2dm\") pod \"tigera-operator-797db67f8-47qt6\" (UID: \"c6875490-1f5d-4d71-a34f-51a87faad9a4\") " pod="tigera-operator/tigera-operator-797db67f8-47qt6" Apr 30 03:29:07.112298 containerd[1458]: time="2025-04-30T03:29:07.112181789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:07.112298 containerd[1458]: time="2025-04-30T03:29:07.112285230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:07.112520 containerd[1458]: time="2025-04-30T03:29:07.112308165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:07.112520 containerd[1458]: time="2025-04-30T03:29:07.112415512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:07.120182 containerd[1458]: time="2025-04-30T03:29:07.119733956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-47qt6,Uid:c6875490-1f5d-4d71-a34f-51a87faad9a4,Namespace:tigera-operator,Attempt:0,}" Apr 30 03:29:07.137057 systemd[1]: Started cri-containerd-6fb2b88d5890a227b183e372f87f43b0644b394e040fe9c0f44b2403f7ea2c15.scope - libcontainer container 6fb2b88d5890a227b183e372f87f43b0644b394e040fe9c0f44b2403f7ea2c15. 
Apr 30 03:29:07.161016 containerd[1458]: time="2025-04-30T03:29:07.160972799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sngdb,Uid:4dea8339-4f89-450a-ad85-96088eccfe38,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fb2b88d5890a227b183e372f87f43b0644b394e040fe9c0f44b2403f7ea2c15\"" Apr 30 03:29:07.161686 kubelet[2580]: E0430 03:29:07.161641 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:07.163662 containerd[1458]: time="2025-04-30T03:29:07.163618764Z" level=info msg="CreateContainer within sandbox \"6fb2b88d5890a227b183e372f87f43b0644b394e040fe9c0f44b2403f7ea2c15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:29:07.332707 containerd[1458]: time="2025-04-30T03:29:07.332656798Z" level=info msg="CreateContainer within sandbox \"6fb2b88d5890a227b183e372f87f43b0644b394e040fe9c0f44b2403f7ea2c15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f3291e4f21ca83ab618945db51ef2e81f8ab89fc6304f54de8356cb14c5d45d4\"" Apr 30 03:29:07.333324 containerd[1458]: time="2025-04-30T03:29:07.333296547Z" level=info msg="StartContainer for \"f3291e4f21ca83ab618945db51ef2e81f8ab89fc6304f54de8356cb14c5d45d4\"" Apr 30 03:29:07.344087 containerd[1458]: time="2025-04-30T03:29:07.343854105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:07.344087 containerd[1458]: time="2025-04-30T03:29:07.343940332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:07.344087 containerd[1458]: time="2025-04-30T03:29:07.343955651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:07.344579 containerd[1458]: time="2025-04-30T03:29:07.344528411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:07.362095 systemd[1]: Started cri-containerd-f3291e4f21ca83ab618945db51ef2e81f8ab89fc6304f54de8356cb14c5d45d4.scope - libcontainer container f3291e4f21ca83ab618945db51ef2e81f8ab89fc6304f54de8356cb14c5d45d4. Apr 30 03:29:07.366083 systemd[1]: Started cri-containerd-868c22f74f46db835247b46f84442e677e4a26776894f02db7f2fcdf953ad64f.scope - libcontainer container 868c22f74f46db835247b46f84442e677e4a26776894f02db7f2fcdf953ad64f. 
Apr 30 03:29:07.407273 containerd[1458]: time="2025-04-30T03:29:07.407117274Z" level=info msg="StartContainer for \"f3291e4f21ca83ab618945db51ef2e81f8ab89fc6304f54de8356cb14c5d45d4\" returns successfully" Apr 30 03:29:07.414082 containerd[1458]: time="2025-04-30T03:29:07.413955298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-47qt6,Uid:c6875490-1f5d-4d71-a34f-51a87faad9a4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"868c22f74f46db835247b46f84442e677e4a26776894f02db7f2fcdf953ad64f\"" Apr 30 03:29:07.416581 containerd[1458]: time="2025-04-30T03:29:07.416532700Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 03:29:07.503222 kubelet[2580]: E0430 03:29:07.503154 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:07.505499 kubelet[2580]: E0430 03:29:07.505389 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:07.515088 kubelet[2580]: I0430 03:29:07.515010 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sngdb" podStartSLOduration=1.514988531 podStartE2EDuration="1.514988531s" podCreationTimestamp="2025-04-30 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:07.514391263 +0000 UTC m=+15.127798877" watchObservedRunningTime="2025-04-30 03:29:07.514988531 +0000 UTC m=+15.128396145" Apr 30 03:29:13.210031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968404806.mount: Deactivated successfully. 
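Note: the pod_startup_latency_tracker figures can be reproduced from the timestamps in the entries themselves: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the observed image-pull window. For kube-proxy above the pull timestamps are the zero value (no pull was observed), so the two durations coincide; for the tigera-operator pod further down, subtracting its ~6.88s pull window reproduces the 5.57s SLO figure. A small Go check of that arithmetic, using values copied from those entries:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // format of the timestamps in these entries

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// kube-proxy-sngdb (entry above): no image pull observed, so the SLO and
	// end-to-end durations coincide.
	created := mustParse("2025-04-30 03:29:06 +0000 UTC")
	running := mustParse("2025-04-30 03:29:07.514988531 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1.514988531s, the reported podStartE2EDuration

	// tigera-operator (entry further down): subtracting the image-pull window
	// from the end-to-end time reproduces the reported 5.57s SLO figure.
	created = mustParse("2025-04-30 03:29:06 +0000 UTC")
	running = mustParse("2025-04-30 03:29:18.454483817 +0000 UTC")
	firstPull := mustParse("2025-04-30 03:29:07.415332004 +0000 UTC")
	lastPull := mustParse("2025-04-30 03:29:14.297437991 +0000 UTC")
	fmt.Println(running.Sub(created) - lastPull.Sub(firstPull)) // 5.57237783s
}
```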
Apr 30 03:29:14.157007 containerd[1458]: time="2025-04-30T03:29:14.156887573Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.228593 containerd[1458]: time="2025-04-30T03:29:14.228473011Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 03:29:14.271786 containerd[1458]: time="2025-04-30T03:29:14.271680248Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.292654 containerd[1458]: time="2025-04-30T03:29:14.292549767Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:14.293439 containerd[1458]: time="2025-04-30T03:29:14.293382937Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 6.876799049s" Apr 30 03:29:14.293439 containerd[1458]: time="2025-04-30T03:29:14.293424317Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 03:29:14.299304 containerd[1458]: time="2025-04-30T03:29:14.299258981Z" level=info msg="CreateContainer within sandbox \"868c22f74f46db835247b46f84442e677e4a26776894f02db7f2fcdf953ad64f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 03:29:14.524391 containerd[1458]: time="2025-04-30T03:29:14.524185026Z" level=info msg="CreateContainer within sandbox \"868c22f74f46db835247b46f84442e677e4a26776894f02db7f2fcdf953ad64f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6c3e6c3b686d31cc760c1a3dedf959a73aee79cbe40b64b239ec61949d18247e\"" Apr 30 03:29:14.525031 containerd[1458]: time="2025-04-30T03:29:14.524984673Z" level=info msg="StartContainer for \"6c3e6c3b686d31cc760c1a3dedf959a73aee79cbe40b64b239ec61949d18247e\"" Apr 30 03:29:14.563644 systemd[1]: Started cri-containerd-6c3e6c3b686d31cc760c1a3dedf959a73aee79cbe40b64b239ec61949d18247e.scope - libcontainer container 6c3e6c3b686d31cc760c1a3dedf959a73aee79cbe40b64b239ec61949d18247e. 
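Note: the "in 6.876799049s" figure in the Pulled line above is, to within a fraction of a millisecond, the gap between the PullImage and Pulled containerd entries for quay.io/tigera/operator:v1.36.7 (containerd measures the pull internally, so the log-entry gap is a slight over-count). A quick Go check using the two logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the PullImage / Pulled containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:29:07.416532700Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-04-30T03:29:14.293382937Z")
	fmt.Println(done.Sub(start)) // ≈ 6.8769s, close to the 6.876799049s containerd reports
}
```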
Apr 30 03:29:14.598353 containerd[1458]: time="2025-04-30T03:29:14.598288592Z" level=info msg="StartContainer for \"6c3e6c3b686d31cc760c1a3dedf959a73aee79cbe40b64b239ec61949d18247e\" returns successfully" Apr 30 03:29:18.454577 kubelet[2580]: I0430 03:29:18.454505 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-47qt6" podStartSLOduration=5.57237783 podStartE2EDuration="12.454483817s" podCreationTimestamp="2025-04-30 03:29:06 +0000 UTC" firstStartedPulling="2025-04-30 03:29:07.415332004 +0000 UTC m=+15.028739618" lastFinishedPulling="2025-04-30 03:29:14.297437991 +0000 UTC m=+21.910845605" observedRunningTime="2025-04-30 03:29:15.556219477 +0000 UTC m=+23.169627101" watchObservedRunningTime="2025-04-30 03:29:18.454483817 +0000 UTC m=+26.067891431" Apr 30 03:29:18.456021 kubelet[2580]: I0430 03:29:18.455989 2580 topology_manager.go:215] "Topology Admit Handler" podUID="51ab8b92-3020-452f-bc43-ee041d543252" podNamespace="calico-system" podName="calico-typha-744b8b5d9f-4wf7q" Apr 30 03:29:18.468929 systemd[1]: Created slice kubepods-besteffort-pod51ab8b92_3020_452f_bc43_ee041d543252.slice - libcontainer container kubepods-besteffort-pod51ab8b92_3020_452f_bc43_ee041d543252.slice. Apr 30 03:29:18.524841 kubelet[2580]: I0430 03:29:18.524753 2580 topology_manager.go:215] "Topology Admit Handler" podUID="756f6cdf-82d2-421d-a1ee-f6a80ca2608d" podNamespace="calico-system" podName="calico-node-n8z9s" Apr 30 03:29:18.536819 systemd[1]: Created slice kubepods-besteffort-pod756f6cdf_82d2_421d_a1ee_f6a80ca2608d.slice - libcontainer container kubepods-besteffort-pod756f6cdf_82d2_421d_a1ee_f6a80ca2608d.slice. Apr 30 03:29:18.616238 kubelet[2580]: I0430 03:29:18.616177 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51ab8b92-3020-452f-bc43-ee041d543252-tigera-ca-bundle\") pod \"calico-typha-744b8b5d9f-4wf7q\" (UID: \"51ab8b92-3020-452f-bc43-ee041d543252\") " pod="calico-system/calico-typha-744b8b5d9f-4wf7q" Apr 30 03:29:18.616238 kubelet[2580]: I0430 03:29:18.616230 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rmq\" (UniqueName: \"kubernetes.io/projected/51ab8b92-3020-452f-bc43-ee041d543252-kube-api-access-99rmq\") pod \"calico-typha-744b8b5d9f-4wf7q\" (UID: \"51ab8b92-3020-452f-bc43-ee041d543252\") " pod="calico-system/calico-typha-744b8b5d9f-4wf7q" Apr 30 03:29:18.616440 kubelet[2580]: I0430 03:29:18.616290 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/51ab8b92-3020-452f-bc43-ee041d543252-typha-certs\") pod \"calico-typha-744b8b5d9f-4wf7q\" (UID: \"51ab8b92-3020-452f-bc43-ee041d543252\") " pod="calico-system/calico-typha-744b8b5d9f-4wf7q" Apr 30 03:29:18.634114 kubelet[2580]: I0430 03:29:18.634058 2580 topology_manager.go:215] "Topology Admit Handler" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" podNamespace="calico-system" podName="csi-node-driver-nwhp2" Apr 30 03:29:18.634370 kubelet[2580]: E0430 03:29:18.634349 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 
03:29:18.717907 kubelet[2580]: I0430 03:29:18.717456 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-tigera-ca-bundle\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.717907 kubelet[2580]: I0430 03:29:18.717515 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-node-certs\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.717907 kubelet[2580]: I0430 03:29:18.717533 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-xtables-lock\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.717907 kubelet[2580]: I0430 03:29:18.717549 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-flexvol-driver-host\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.717907 kubelet[2580]: I0430 03:29:18.717580 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-cni-bin-dir\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718132 kubelet[2580]: I0430 03:29:18.717596 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-policysync\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718132 kubelet[2580]: I0430 03:29:18.717612 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-var-run-calico\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718132 kubelet[2580]: I0430 03:29:18.717628 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-cni-log-dir\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718132 kubelet[2580]: I0430 03:29:18.717643 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-cni-net-dir\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718132 kubelet[2580]: I0430 03:29:18.717662 2580 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-var-lib-calico\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718276 kubelet[2580]: I0430 03:29:18.717678 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbtr\" (UniqueName: \"kubernetes.io/projected/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-kube-api-access-9hbtr\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.718276 kubelet[2580]: I0430 03:29:18.717703 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/756f6cdf-82d2-421d-a1ee-f6a80ca2608d-lib-modules\") pod \"calico-node-n8z9s\" (UID: \"756f6cdf-82d2-421d-a1ee-f6a80ca2608d\") " pod="calico-system/calico-node-n8z9s" Apr 30 03:29:18.783477 kubelet[2580]: E0430 03:29:18.783436 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:18.784526 containerd[1458]: time="2025-04-30T03:29:18.784473052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744b8b5d9f-4wf7q,Uid:51ab8b92-3020-452f-bc43-ee041d543252,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:18.814081 containerd[1458]: time="2025-04-30T03:29:18.813759221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:18.814081 containerd[1458]: time="2025-04-30T03:29:18.813838143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:18.814081 containerd[1458]: time="2025-04-30T03:29:18.813853732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.814081 containerd[1458]: time="2025-04-30T03:29:18.813999281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.819113 kubelet[2580]: I0430 03:29:18.819059 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2df212b-6d14-4f4c-afa3-02f09ab15590-registration-dir\") pod \"csi-node-driver-nwhp2\" (UID: \"e2df212b-6d14-4f4c-afa3-02f09ab15590\") " pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:18.819338 kubelet[2580]: I0430 03:29:18.819301 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e2df212b-6d14-4f4c-afa3-02f09ab15590-varrun\") pod \"csi-node-driver-nwhp2\" (UID: \"e2df212b-6d14-4f4c-afa3-02f09ab15590\") " pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:18.819580 kubelet[2580]: I0430 03:29:18.819535 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2df212b-6d14-4f4c-afa3-02f09ab15590-socket-dir\") pod \"csi-node-driver-nwhp2\" (UID: \"e2df212b-6d14-4f4c-afa3-02f09ab15590\") " pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:18.819742 kubelet[2580]: I0430 03:29:18.819636 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2n7t\" (UniqueName: \"kubernetes.io/projected/e2df212b-6d14-4f4c-afa3-02f09ab15590-kube-api-access-x2n7t\") pod \"csi-node-driver-nwhp2\" (UID: \"e2df212b-6d14-4f4c-afa3-02f09ab15590\") " pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:18.819742 kubelet[2580]: I0430 03:29:18.819694 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2df212b-6d14-4f4c-afa3-02f09ab15590-kubelet-dir\") pod \"csi-node-driver-nwhp2\" (UID: \"e2df212b-6d14-4f4c-afa3-02f09ab15590\") " pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:18.831657 kubelet[2580]: E0430 03:29:18.831092 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.831774 kubelet[2580]: W0430 03:29:18.831694 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.831774 kubelet[2580]: E0430 03:29:18.831762 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.832178 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.833975 kubelet[2580]: W0430 03:29:18.832195 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.832296 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.832562 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.833975 kubelet[2580]: W0430 03:29:18.832574 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.832686 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.833072 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.833975 kubelet[2580]: W0430 03:29:18.833084 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.833103 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.833975 kubelet[2580]: E0430 03:29:18.833362 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.834257 kubelet[2580]: W0430 03:29:18.833375 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.834257 kubelet[2580]: E0430 03:29:18.833391 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.834257 kubelet[2580]: E0430 03:29:18.833641 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.834257 kubelet[2580]: W0430 03:29:18.833652 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.834257 kubelet[2580]: E0430 03:29:18.833667 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.834257 kubelet[2580]: E0430 03:29:18.834190 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.834257 kubelet[2580]: W0430 03:29:18.834201 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.834257 kubelet[2580]: E0430 03:29:18.834223 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.835103 kubelet[2580]: E0430 03:29:18.835081 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.835103 kubelet[2580]: W0430 03:29:18.835102 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.835188 kubelet[2580]: E0430 03:29:18.835114 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.836258 kubelet[2580]: E0430 03:29:18.835435 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.836258 kubelet[2580]: W0430 03:29:18.835450 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.836258 kubelet[2580]: E0430 03:29:18.835462 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.837730 kubelet[2580]: E0430 03:29:18.837715 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.837839 kubelet[2580]: W0430 03:29:18.837795 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.837839 kubelet[2580]: E0430 03:29:18.837812 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.839305 kubelet[2580]: E0430 03:29:18.839268 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.839305 kubelet[2580]: W0430 03:29:18.839299 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.839372 kubelet[2580]: E0430 03:29:18.839329 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.842636 kubelet[2580]: E0430 03:29:18.842345 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:18.842992 containerd[1458]: time="2025-04-30T03:29:18.842958815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n8z9s,Uid:756f6cdf-82d2-421d-a1ee-f6a80ca2608d,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:18.853183 systemd[1]: Started cri-containerd-3767de539e269376f2e21791b495c85afed07dcdd9d836549f2c72635e431106.scope - libcontainer container 3767de539e269376f2e21791b495c85afed07dcdd9d836549f2c72635e431106. 
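Note: the long runs of driver-call.go / plugins.go errors are the kubelet re-probing its FlexVolume plugin directory. It tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument; the binary is not installed yet (calico-node's flexvol driver is what would later populate that host path, mounted above as "flexvol-driver-host"), so the call fails with "executable file not found in $PATH", and the resulting empty output cannot be decoded, which is the repeated "unexpected end of JSON input". A rough Go sketch of the shape of that failing sequence (not the kubelet's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// The kubelet resolves the driver executable inside the plugin directory;
	// here a bare lookup alone reproduces the "not found in $PATH" failure.
	driver := "uds" // normally /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds

	// Step 1: run "<driver> init"; while the binary is missing this fails and
	// produces no output.
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// Step 2: the empty output cannot be decoded as a driver status, which is
	// the "unexpected end of JSON input" error repeated in the log.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("unmarshal failed:", err)
	}
}
```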
Apr 30 03:29:18.877196 containerd[1458]: time="2025-04-30T03:29:18.877102009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:18.877416 containerd[1458]: time="2025-04-30T03:29:18.877241417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:18.877416 containerd[1458]: time="2025-04-30T03:29:18.877299277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.879383 containerd[1458]: time="2025-04-30T03:29:18.878378916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:18.901117 systemd[1]: Started cri-containerd-8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff.scope - libcontainer container 8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff. Apr 30 03:29:18.902500 containerd[1458]: time="2025-04-30T03:29:18.902142795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744b8b5d9f-4wf7q,Uid:51ab8b92-3020-452f-bc43-ee041d543252,Namespace:calico-system,Attempt:0,} returns sandbox id \"3767de539e269376f2e21791b495c85afed07dcdd9d836549f2c72635e431106\"" Apr 30 03:29:18.904470 kubelet[2580]: E0430 03:29:18.903157 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:18.907095 containerd[1458]: time="2025-04-30T03:29:18.907052612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 03:29:18.920357 kubelet[2580]: E0430 03:29:18.920307 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.920357 kubelet[2580]: W0430 03:29:18.920331 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.920357 kubelet[2580]: E0430 03:29:18.920350 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.920648 kubelet[2580]: E0430 03:29:18.920597 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.920648 kubelet[2580]: W0430 03:29:18.920607 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.920648 kubelet[2580]: E0430 03:29:18.920619 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.921316 kubelet[2580]: E0430 03:29:18.920987 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.921316 kubelet[2580]: W0430 03:29:18.921021 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.921316 kubelet[2580]: E0430 03:29:18.921068 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.921466 kubelet[2580]: E0430 03:29:18.921446 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.921466 kubelet[2580]: W0430 03:29:18.921459 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.921584 kubelet[2580]: E0430 03:29:18.921478 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.921732 kubelet[2580]: E0430 03:29:18.921706 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.921732 kubelet[2580]: W0430 03:29:18.921725 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.921868 kubelet[2580]: E0430 03:29:18.921744 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.922680 kubelet[2580]: E0430 03:29:18.922518 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.922680 kubelet[2580]: W0430 03:29:18.922539 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.922680 kubelet[2580]: E0430 03:29:18.922556 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.922877 kubelet[2580]: E0430 03:29:18.922851 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.922877 kubelet[2580]: W0430 03:29:18.922866 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.923186 kubelet[2580]: E0430 03:29:18.922994 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.923186 kubelet[2580]: E0430 03:29:18.923133 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.923186 kubelet[2580]: W0430 03:29:18.923141 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.923312 kubelet[2580]: E0430 03:29:18.923232 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.923411 kubelet[2580]: E0430 03:29:18.923390 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.923411 kubelet[2580]: W0430 03:29:18.923403 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.923565 kubelet[2580]: E0430 03:29:18.923513 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.923668 kubelet[2580]: E0430 03:29:18.923639 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.923668 kubelet[2580]: W0430 03:29:18.923648 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.923949 kubelet[2580]: E0430 03:29:18.923919 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.925875 kubelet[2580]: E0430 03:29:18.925744 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.925875 kubelet[2580]: W0430 03:29:18.925759 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.926092 kubelet[2580]: E0430 03:29:18.926075 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.926243 kubelet[2580]: W0430 03:29:18.926113 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.926243 kubelet[2580]: E0430 03:29:18.926111 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.926243 kubelet[2580]: E0430 03:29:18.926155 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.926992 kubelet[2580]: E0430 03:29:18.926620 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.926992 kubelet[2580]: W0430 03:29:18.926634 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.926992 kubelet[2580]: E0430 03:29:18.926672 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.928121 kubelet[2580]: E0430 03:29:18.928099 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.928121 kubelet[2580]: W0430 03:29:18.928114 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.928277 kubelet[2580]: E0430 03:29:18.928241 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.928432 kubelet[2580]: E0430 03:29:18.928413 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.928432 kubelet[2580]: W0430 03:29:18.928426 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.928580 kubelet[2580]: E0430 03:29:18.928555 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.929138 kubelet[2580]: E0430 03:29:18.929115 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.929138 kubelet[2580]: W0430 03:29:18.929131 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.929265 kubelet[2580]: E0430 03:29:18.929188 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.929971 kubelet[2580]: E0430 03:29:18.929944 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.929971 kubelet[2580]: W0430 03:29:18.929963 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.930236 kubelet[2580]: E0430 03:29:18.930137 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.930657 kubelet[2580]: E0430 03:29:18.930633 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.930657 kubelet[2580]: W0430 03:29:18.930650 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.930817 kubelet[2580]: E0430 03:29:18.930765 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.931054 kubelet[2580]: E0430 03:29:18.931012 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.931054 kubelet[2580]: W0430 03:29:18.931027 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.931184 kubelet[2580]: E0430 03:29:18.931165 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.931377 kubelet[2580]: E0430 03:29:18.931361 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.931542 kubelet[2580]: W0430 03:29:18.931438 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.931756 kubelet[2580]: E0430 03:29:18.931654 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.933237 kubelet[2580]: E0430 03:29:18.933190 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.933336 containerd[1458]: time="2025-04-30T03:29:18.933224174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n8z9s,Uid:756f6cdf-82d2-421d-a1ee-f6a80ca2608d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\"" Apr 30 03:29:18.933586 kubelet[2580]: W0430 03:29:18.933435 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.933586 kubelet[2580]: E0430 03:29:18.933536 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:18.933884 kubelet[2580]: E0430 03:29:18.933861 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:18.934161 kubelet[2580]: E0430 03:29:18.933978 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.934161 kubelet[2580]: W0430 03:29:18.933990 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.934309 kubelet[2580]: E0430 03:29:18.934292 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.934462 kubelet[2580]: E0430 03:29:18.934447 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.934540 kubelet[2580]: W0430 03:29:18.934525 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.934688 kubelet[2580]: E0430 03:29:18.934672 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.935149 kubelet[2580]: E0430 03:29:18.935091 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.935149 kubelet[2580]: W0430 03:29:18.935106 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.935149 kubelet[2580]: E0430 03:29:18.935120 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:18.936785 kubelet[2580]: E0430 03:29:18.936752 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:18.936785 kubelet[2580]: W0430 03:29:18.936778 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:18.937024 kubelet[2580]: E0430 03:29:18.936804 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:19.012319 kubelet[2580]: E0430 03:29:19.012186 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:19.012319 kubelet[2580]: W0430 03:29:19.012236 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:19.012319 kubelet[2580]: E0430 03:29:19.012262 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:20.467174 kubelet[2580]: E0430 03:29:20.467104 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:20.758817 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:44174.service - OpenSSH per-connection server daemon (10.0.0.1:44174). Apr 30 03:29:20.805048 sshd[3122]: Accepted publickey for core from 10.0.0.1 port 44174 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:20.807470 sshd[3122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:20.812856 systemd-logind[1443]: New session 8 of user core. Apr 30 03:29:20.824043 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:29:20.957024 sshd[3122]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:20.962156 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:44174.service: Deactivated successfully. Apr 30 03:29:20.964430 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:29:20.965059 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:29:20.966088 systemd-logind[1443]: Removed session 8. 
Apr 30 03:29:21.414511 containerd[1458]: time="2025-04-30T03:29:21.414402996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.415464 containerd[1458]: time="2025-04-30T03:29:21.415353504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 03:29:21.417471 containerd[1458]: time="2025-04-30T03:29:21.417349070Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.421814 containerd[1458]: time="2025-04-30T03:29:21.421749225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:21.422494 containerd[1458]: time="2025-04-30T03:29:21.422443993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.515345454s" Apr 30 03:29:21.422494 containerd[1458]: time="2025-04-30T03:29:21.422489360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 03:29:21.424354 containerd[1458]: time="2025-04-30T03:29:21.424279273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 03:29:21.439557 containerd[1458]: time="2025-04-30T03:29:21.439451878Z" level=info msg="CreateContainer within sandbox \"3767de539e269376f2e21791b495c85afed07dcdd9d836549f2c72635e431106\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 03:29:21.462642 containerd[1458]: time="2025-04-30T03:29:21.462574426Z" level=info msg="CreateContainer within sandbox \"3767de539e269376f2e21791b495c85afed07dcdd9d836549f2c72635e431106\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"23967406d94954e95593a7af02a55673ebed5d25de218385a39f82cf32515f9b\"" Apr 30 03:29:21.463400 containerd[1458]: time="2025-04-30T03:29:21.463350380Z" level=info msg="StartContainer for \"23967406d94954e95593a7af02a55673ebed5d25de218385a39f82cf32515f9b\"" Apr 30 03:29:21.506091 systemd[1]: Started cri-containerd-23967406d94954e95593a7af02a55673ebed5d25de218385a39f82cf32515f9b.scope - libcontainer container 23967406d94954e95593a7af02a55673ebed5d25de218385a39f82cf32515f9b. 
Apr 30 03:29:21.571549 containerd[1458]: time="2025-04-30T03:29:21.571491292Z" level=info msg="StartContainer for \"23967406d94954e95593a7af02a55673ebed5d25de218385a39f82cf32515f9b\" returns successfully" Apr 30 03:29:22.520367 kubelet[2580]: E0430 03:29:22.520294 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:22.546122 kubelet[2580]: E0430 03:29:22.546071 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:22.632933 kubelet[2580]: I0430 03:29:22.632773 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-744b8b5d9f-4wf7q" podStartSLOduration=2.115499257 podStartE2EDuration="4.632753993s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="2025-04-30 03:29:18.906655421 +0000 UTC m=+26.520063035" lastFinishedPulling="2025-04-30 03:29:21.423910157 +0000 UTC m=+29.037317771" observedRunningTime="2025-04-30 03:29:22.6324286 +0000 UTC m=+30.245836214" watchObservedRunningTime="2025-04-30 03:29:22.632753993 +0000 UTC m=+30.246161627" Apr 30 03:29:22.645571 kubelet[2580]: E0430 03:29:22.645520 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.645571 kubelet[2580]: W0430 03:29:22.645550 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.645571 kubelet[2580]: E0430 03:29:22.645572 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.645827 kubelet[2580]: E0430 03:29:22.645817 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.645864 kubelet[2580]: W0430 03:29:22.645828 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.645864 kubelet[2580]: E0430 03:29:22.645839 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.646155 kubelet[2580]: E0430 03:29:22.646132 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.646155 kubelet[2580]: W0430 03:29:22.646146 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.646229 kubelet[2580]: E0430 03:29:22.646156 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:22.646428 kubelet[2580]: E0430 03:29:22.646404 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.646428 kubelet[2580]: W0430 03:29:22.646420 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.646497 kubelet[2580]: E0430 03:29:22.646431 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.646680 kubelet[2580]: E0430 03:29:22.646655 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.646680 kubelet[2580]: W0430 03:29:22.646669 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.646680 kubelet[2580]: E0430 03:29:22.646679 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.646921 kubelet[2580]: E0430 03:29:22.646890 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.646921 kubelet[2580]: W0430 03:29:22.646919 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.646997 kubelet[2580]: E0430 03:29:22.646930 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.647157 kubelet[2580]: E0430 03:29:22.647140 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.647157 kubelet[2580]: W0430 03:29:22.647152 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.647238 kubelet[2580]: E0430 03:29:22.647162 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.647400 kubelet[2580]: E0430 03:29:22.647383 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.647400 kubelet[2580]: W0430 03:29:22.647395 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.647472 kubelet[2580]: E0430 03:29:22.647405 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:22.647647 kubelet[2580]: E0430 03:29:22.647630 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.647647 kubelet[2580]: W0430 03:29:22.647643 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.647790 kubelet[2580]: E0430 03:29:22.647653 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.647863 kubelet[2580]: E0430 03:29:22.647848 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.647863 kubelet[2580]: W0430 03:29:22.647860 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.647957 kubelet[2580]: E0430 03:29:22.647870 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.648127 kubelet[2580]: E0430 03:29:22.648107 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.648127 kubelet[2580]: W0430 03:29:22.648123 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.648226 kubelet[2580]: E0430 03:29:22.648136 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.648404 kubelet[2580]: E0430 03:29:22.648386 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.648404 kubelet[2580]: W0430 03:29:22.648400 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.648483 kubelet[2580]: E0430 03:29:22.648411 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.648656 kubelet[2580]: E0430 03:29:22.648641 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.648656 kubelet[2580]: W0430 03:29:22.648653 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.648756 kubelet[2580]: E0430 03:29:22.648664 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:22.648947 kubelet[2580]: E0430 03:29:22.648930 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.648995 kubelet[2580]: W0430 03:29:22.648950 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.648995 kubelet[2580]: E0430 03:29:22.648961 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.649235 kubelet[2580]: E0430 03:29:22.649213 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.649235 kubelet[2580]: W0430 03:29:22.649228 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.649345 kubelet[2580]: E0430 03:29:22.649239 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.746341 kubelet[2580]: E0430 03:29:22.746301 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.746341 kubelet[2580]: W0430 03:29:22.746330 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.746341 kubelet[2580]: E0430 03:29:22.746351 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.746673 kubelet[2580]: E0430 03:29:22.746654 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.746673 kubelet[2580]: W0430 03:29:22.746668 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.746786 kubelet[2580]: E0430 03:29:22.746684 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.747066 kubelet[2580]: E0430 03:29:22.747027 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.747066 kubelet[2580]: W0430 03:29:22.747056 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.747258 kubelet[2580]: E0430 03:29:22.747092 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:22.747488 kubelet[2580]: E0430 03:29:22.747472 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.747488 kubelet[2580]: W0430 03:29:22.747486 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.747576 kubelet[2580]: E0430 03:29:22.747501 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.747838 kubelet[2580]: E0430 03:29:22.747817 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.747838 kubelet[2580]: W0430 03:29:22.747836 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.747951 kubelet[2580]: E0430 03:29:22.747851 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.748236 kubelet[2580]: E0430 03:29:22.748208 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.748236 kubelet[2580]: W0430 03:29:22.748235 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.748332 kubelet[2580]: E0430 03:29:22.748257 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.748728 kubelet[2580]: E0430 03:29:22.748584 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.748728 kubelet[2580]: W0430 03:29:22.748616 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.748728 kubelet[2580]: E0430 03:29:22.748632 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.748846 kubelet[2580]: E0430 03:29:22.748830 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.748991 kubelet[2580]: W0430 03:29:22.748845 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.748991 kubelet[2580]: E0430 03:29:22.748861 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:22.749132 kubelet[2580]: E0430 03:29:22.749100 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.749132 kubelet[2580]: W0430 03:29:22.749127 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.749246 kubelet[2580]: E0430 03:29:22.749155 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.749442 kubelet[2580]: E0430 03:29:22.749423 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.749442 kubelet[2580]: W0430 03:29:22.749436 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.749531 kubelet[2580]: E0430 03:29:22.749452 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.749733 kubelet[2580]: E0430 03:29:22.749701 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.749733 kubelet[2580]: W0430 03:29:22.749726 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.749807 kubelet[2580]: E0430 03:29:22.749762 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.750016 kubelet[2580]: E0430 03:29:22.749956 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.750016 kubelet[2580]: W0430 03:29:22.749971 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.750016 kubelet[2580]: E0430 03:29:22.749987 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.750191 kubelet[2580]: E0430 03:29:22.750178 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.750191 kubelet[2580]: W0430 03:29:22.750190 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.750237 kubelet[2580]: E0430 03:29:22.750205 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:22.750454 kubelet[2580]: E0430 03:29:22.750439 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.750454 kubelet[2580]: W0430 03:29:22.750450 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.750521 kubelet[2580]: E0430 03:29:22.750463 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.750792 kubelet[2580]: E0430 03:29:22.750772 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.750792 kubelet[2580]: W0430 03:29:22.750790 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.750945 kubelet[2580]: E0430 03:29:22.750929 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.751393 kubelet[2580]: E0430 03:29:22.751143 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.751393 kubelet[2580]: W0430 03:29:22.751159 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.751393 kubelet[2580]: E0430 03:29:22.751170 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.751575 kubelet[2580]: E0430 03:29:22.751547 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.751575 kubelet[2580]: W0430 03:29:22.751561 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.751575 kubelet[2580]: E0430 03:29:22.751581 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:22.751858 kubelet[2580]: E0430 03:29:22.751834 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:22.751858 kubelet[2580]: W0430 03:29:22.751853 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:22.751988 kubelet[2580]: E0430 03:29:22.751865 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.547351 kubelet[2580]: I0430 03:29:23.547315 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:23.547970 kubelet[2580]: E0430 03:29:23.547915 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:23.553145 kubelet[2580]: E0430 03:29:23.553115 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.553145 kubelet[2580]: W0430 03:29:23.553140 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.553319 kubelet[2580]: E0430 03:29:23.553170 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.553514 kubelet[2580]: E0430 03:29:23.553483 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.553514 kubelet[2580]: W0430 03:29:23.553499 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.553514 kubelet[2580]: E0430 03:29:23.553512 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.553788 kubelet[2580]: E0430 03:29:23.553771 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.553788 kubelet[2580]: W0430 03:29:23.553786 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.553847 kubelet[2580]: E0430 03:29:23.553797 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.554074 kubelet[2580]: E0430 03:29:23.554056 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.554074 kubelet[2580]: W0430 03:29:23.554070 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.554151 kubelet[2580]: E0430 03:29:23.554082 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.554384 kubelet[2580]: E0430 03:29:23.554367 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.554384 kubelet[2580]: W0430 03:29:23.554381 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.554463 kubelet[2580]: E0430 03:29:23.554392 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.554690 kubelet[2580]: E0430 03:29:23.554662 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.554690 kubelet[2580]: W0430 03:29:23.554683 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.554818 kubelet[2580]: E0430 03:29:23.554709 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.555011 kubelet[2580]: E0430 03:29:23.554991 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.555011 kubelet[2580]: W0430 03:29:23.555004 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.555011 kubelet[2580]: E0430 03:29:23.555013 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.555249 kubelet[2580]: E0430 03:29:23.555233 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.555249 kubelet[2580]: W0430 03:29:23.555244 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.555249 kubelet[2580]: E0430 03:29:23.555253 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.555536 kubelet[2580]: E0430 03:29:23.555512 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.555536 kubelet[2580]: W0430 03:29:23.555523 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.555536 kubelet[2580]: E0430 03:29:23.555531 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.555784 kubelet[2580]: E0430 03:29:23.555768 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.555784 kubelet[2580]: W0430 03:29:23.555779 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.555784 kubelet[2580]: E0430 03:29:23.555788 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.556022 kubelet[2580]: E0430 03:29:23.556005 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.556022 kubelet[2580]: W0430 03:29:23.556016 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.556022 kubelet[2580]: E0430 03:29:23.556024 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.556260 kubelet[2580]: E0430 03:29:23.556243 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.556260 kubelet[2580]: W0430 03:29:23.556253 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.556260 kubelet[2580]: E0430 03:29:23.556261 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.556479 kubelet[2580]: E0430 03:29:23.556461 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.556479 kubelet[2580]: W0430 03:29:23.556474 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.556479 kubelet[2580]: E0430 03:29:23.556482 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.556700 kubelet[2580]: E0430 03:29:23.556686 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.556700 kubelet[2580]: W0430 03:29:23.556697 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.556747 kubelet[2580]: E0430 03:29:23.556704 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.556938 kubelet[2580]: E0430 03:29:23.556924 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.556938 kubelet[2580]: W0430 03:29:23.556935 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.556993 kubelet[2580]: E0430 03:29:23.556943 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.653711 kubelet[2580]: E0430 03:29:23.653663 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.653711 kubelet[2580]: W0430 03:29:23.653693 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.653711 kubelet[2580]: E0430 03:29:23.653715 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.654034 kubelet[2580]: E0430 03:29:23.654017 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.654034 kubelet[2580]: W0430 03:29:23.654030 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.654115 kubelet[2580]: E0430 03:29:23.654051 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.654368 kubelet[2580]: E0430 03:29:23.654338 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.654368 kubelet[2580]: W0430 03:29:23.654354 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.654453 kubelet[2580]: E0430 03:29:23.654373 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.654718 kubelet[2580]: E0430 03:29:23.654688 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.654766 kubelet[2580]: W0430 03:29:23.654719 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.654766 kubelet[2580]: E0430 03:29:23.654753 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.655248 kubelet[2580]: E0430 03:29:23.655216 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.655248 kubelet[2580]: W0430 03:29:23.655232 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.655367 kubelet[2580]: E0430 03:29:23.655252 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.655583 kubelet[2580]: E0430 03:29:23.655560 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.655583 kubelet[2580]: W0430 03:29:23.655573 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.655659 kubelet[2580]: E0430 03:29:23.655615 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.655815 kubelet[2580]: E0430 03:29:23.655795 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.655815 kubelet[2580]: W0430 03:29:23.655807 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.655887 kubelet[2580]: E0430 03:29:23.655846 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.656067 kubelet[2580]: E0430 03:29:23.656046 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.656067 kubelet[2580]: W0430 03:29:23.656057 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.656135 kubelet[2580]: E0430 03:29:23.656099 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.656323 kubelet[2580]: E0430 03:29:23.656290 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.656323 kubelet[2580]: W0430 03:29:23.656304 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.656323 kubelet[2580]: E0430 03:29:23.656320 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.656608 kubelet[2580]: E0430 03:29:23.656579 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.656608 kubelet[2580]: W0430 03:29:23.656595 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.656688 kubelet[2580]: E0430 03:29:23.656612 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.656857 kubelet[2580]: E0430 03:29:23.656839 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.656916 kubelet[2580]: W0430 03:29:23.656856 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.656916 kubelet[2580]: E0430 03:29:23.656873 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.657145 kubelet[2580]: E0430 03:29:23.657129 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.657145 kubelet[2580]: W0430 03:29:23.657143 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.657211 kubelet[2580]: E0430 03:29:23.657159 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.657402 kubelet[2580]: E0430 03:29:23.657387 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.657402 kubelet[2580]: W0430 03:29:23.657400 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.657468 kubelet[2580]: E0430 03:29:23.657415 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.657676 kubelet[2580]: E0430 03:29:23.657659 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.657676 kubelet[2580]: W0430 03:29:23.657674 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.657754 kubelet[2580]: E0430 03:29:23.657690 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.657952 kubelet[2580]: E0430 03:29:23.657939 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.657952 kubelet[2580]: W0430 03:29:23.657950 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.658021 kubelet[2580]: E0430 03:29:23.657964 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.658230 kubelet[2580]: E0430 03:29:23.658214 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.658230 kubelet[2580]: W0430 03:29:23.658228 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.658395 kubelet[2580]: E0430 03:29:23.658243 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.658523 kubelet[2580]: E0430 03:29:23.658508 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.658554 kubelet[2580]: W0430 03:29:23.658522 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.658554 kubelet[2580]: E0430 03:29:23.658537 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 03:29:23.658774 kubelet[2580]: E0430 03:29:23.658760 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 03:29:23.658774 kubelet[2580]: W0430 03:29:23.658772 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 03:29:23.658848 kubelet[2580]: E0430 03:29:23.658781 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 03:29:23.739238 containerd[1458]: time="2025-04-30T03:29:23.738221373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:23.739944 containerd[1458]: time="2025-04-30T03:29:23.739864001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 03:29:23.740820 containerd[1458]: time="2025-04-30T03:29:23.740776604Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:23.743830 containerd[1458]: time="2025-04-30T03:29:23.743776575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:23.744713 containerd[1458]: time="2025-04-30T03:29:23.744538290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.320200415s" Apr 30 03:29:23.744713 containerd[1458]: time="2025-04-30T03:29:23.744590831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 03:29:23.747742 containerd[1458]: time="2025-04-30T03:29:23.747672207Z" level=info msg="CreateContainer within sandbox \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 03:29:23.771975 containerd[1458]: time="2025-04-30T03:29:23.771926330Z" level=info msg="CreateContainer within sandbox \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db\"" Apr 30 03:29:23.772530 containerd[1458]: time="2025-04-30T03:29:23.772500367Z" level=info msg="StartContainer for \"628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db\"" Apr 30 03:29:23.804454 systemd[1]: run-containerd-runc-k8s.io-628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db-runc.ngpaV7.mount: Deactivated successfully. Apr 30 03:29:23.816254 systemd[1]: Started cri-containerd-628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db.scope - libcontainer container 628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db. Apr 30 03:29:23.865666 systemd[1]: cri-containerd-628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db.scope: Deactivated successfully. 
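The repeated driver-call.go/plugins.go entries above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: the nodeagent~uds directory exists, but its uds binary is not yet installed, so the "init" call produces empty output, and unmarshalling an empty string with Go's encoding/json yields exactly "unexpected end of JSON input". The flexvol-driver container started above (image ghcr.io/flatcar/calico/pod2daemon-flexvol) is what eventually puts that binary in place. A minimal sketch of the failure mode follows; DriverStatus is a simplified stand-in type, not the kubelet's actual driver-call code:

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is a simplified stand-in for a FlexVolume "init" response.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver executable was not found, so its captured output is empty.
	output := ""

	var st DriverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		// Prints: unmarshal failed: unexpected end of JSON input
		fmt.Println("unmarshal failed:", err)
	}
}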
Apr 30 03:29:23.923566 containerd[1458]: time="2025-04-30T03:29:23.923488171Z" level=info msg="StartContainer for \"628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db\" returns successfully" Apr 30 03:29:24.347868 containerd[1458]: time="2025-04-30T03:29:24.344854779Z" level=info msg="shim disconnected" id=628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db namespace=k8s.io Apr 30 03:29:24.347868 containerd[1458]: time="2025-04-30T03:29:24.347851270Z" level=warning msg="cleaning up after shim disconnected" id=628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db namespace=k8s.io Apr 30 03:29:24.347868 containerd[1458]: time="2025-04-30T03:29:24.347867562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:24.466854 kubelet[2580]: E0430 03:29:24.466735 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:24.551162 kubelet[2580]: E0430 03:29:24.551116 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:24.553473 containerd[1458]: time="2025-04-30T03:29:24.553428860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 03:29:24.766919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-628987fc62428346218bcce1438635546e2d458bde324193e607887ab26a13db-rootfs.mount: Deactivated successfully. Apr 30 03:29:25.978374 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:44176.service - OpenSSH per-connection server daemon (10.0.0.1:44176). Apr 30 03:29:26.023750 sshd[3350]: Accepted publickey for core from 10.0.0.1 port 44176 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:26.026016 sshd[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:26.039677 systemd-logind[1443]: New session 9 of user core. Apr 30 03:29:26.053159 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:29:26.185293 sshd[3350]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:26.190982 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:44176.service: Deactivated successfully. Apr 30 03:29:26.193566 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:29:26.194333 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:29:26.195645 systemd-logind[1443]: Removed session 9. 
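The recurring "network is not ready ... cni plugin not initialized" entries for csi-node-driver-nwhp2 persist until Calico's install-cni step (the ghcr.io/flatcar/calico/cni image being pulled above) has written a CNI network config into the node's CNI config directory; until then the runtime keeps reporting NetworkReady=false. A rough manual check of that same precondition, assuming the default /etc/cni/net.d location and standard file extensions (this is an illustrative sketch, not containerd's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// containerd's CRI plugin loads *.conf, *.conflist and *.json from here by default.
	confDir := "/etc/cni/net.d"

	var matches []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		matches = append(matches, m...)
	}

	if len(matches) == 0 {
		fmt.Println("no CNI network config found; NetworkReady would stay false")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", matches)
}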
Apr 30 03:29:26.467326 kubelet[2580]: E0430 03:29:26.467246 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:28.469199 kubelet[2580]: E0430 03:29:28.469115 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:30.316881 containerd[1458]: time="2025-04-30T03:29:30.316821362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:30.317838 containerd[1458]: time="2025-04-30T03:29:30.317754309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 03:29:30.321380 containerd[1458]: time="2025-04-30T03:29:30.321343356Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:30.324196 containerd[1458]: time="2025-04-30T03:29:30.324169329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:30.324874 containerd[1458]: time="2025-04-30T03:29:30.324828765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.771351091s" Apr 30 03:29:30.324954 containerd[1458]: time="2025-04-30T03:29:30.324876797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 03:29:30.327463 containerd[1458]: time="2025-04-30T03:29:30.327413469Z" level=info msg="CreateContainer within sandbox \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 03:29:30.460577 containerd[1458]: time="2025-04-30T03:29:30.460508274Z" level=info msg="CreateContainer within sandbox \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac\"" Apr 30 03:29:30.461240 containerd[1458]: time="2025-04-30T03:29:30.461199761Z" level=info msg="StartContainer for \"e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac\"" Apr 30 03:29:30.466656 kubelet[2580]: E0430 03:29:30.466596 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 
30 03:29:30.503093 systemd[1]: Started cri-containerd-e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac.scope - libcontainer container e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac. Apr 30 03:29:30.535308 containerd[1458]: time="2025-04-30T03:29:30.535252049Z" level=info msg="StartContainer for \"e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac\" returns successfully" Apr 30 03:29:30.564457 kubelet[2580]: E0430 03:29:30.564406 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:31.198428 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:39882.service - OpenSSH per-connection server daemon (10.0.0.1:39882). Apr 30 03:29:31.426628 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 39882 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:31.428184 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:31.434105 systemd-logind[1443]: New session 10 of user core. Apr 30 03:29:31.440122 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:29:31.567675 kubelet[2580]: E0430 03:29:31.567214 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:31.576125 sshd[3410]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:31.582812 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:39882.service: Deactivated successfully. Apr 30 03:29:31.583110 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:29:31.585642 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:29:31.587037 systemd-logind[1443]: Removed session 10. Apr 30 03:29:32.546573 kubelet[2580]: E0430 03:29:32.546017 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:33.036022 systemd[1]: cri-containerd-e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac.scope: Deactivated successfully. Apr 30 03:29:33.064677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac-rootfs.mount: Deactivated successfully. Apr 30 03:29:33.070608 kubelet[2580]: I0430 03:29:33.070560 2580 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:29:33.138938 kubelet[2580]: I0430 03:29:33.138801 2580 topology_manager.go:215] "Topology Admit Handler" podUID="6404c18c-30e9-4c84-a61e-d9e404ad3990" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x4ngb" Apr 30 03:29:33.146617 systemd[1]: Created slice kubepods-burstable-pod6404c18c_30e9_4c84_a61e_d9e404ad3990.slice - libcontainer container kubepods-burstable-pod6404c18c_30e9_4c84_a61e_d9e404ad3990.slice. 
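The dns.go:153 "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the kubelet will pass through: it keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and omits the rest. A small illustration of that truncation rule, using a hypothetical fourth entry (9.9.9.9) for the omitted server; this is not the kubelet's dns.go itself:

package main

import "fmt"

const maxNameservers = 3 // resolver/kubelet limit on nameserver entries

func main() {
	// Hypothetical resolv.conf contents with one nameserver too many.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}

	if len(nameservers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
			nameservers[:maxNameservers], nameservers[maxNameservers:])
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", nameservers)
}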
Apr 30 03:29:33.150424 kubelet[2580]: I0430 03:29:33.149502 2580 topology_manager.go:215] "Topology Admit Handler" podUID="3151b42b-2e50-48c5-ab72-09ea525d3e59" podNamespace="calico-apiserver" podName="calico-apiserver-86f865c57f-s7xvb" Apr 30 03:29:33.151682 kubelet[2580]: I0430 03:29:33.151661 2580 topology_manager.go:215] "Topology Admit Handler" podUID="7124ff7f-f649-4e24-b218-0ed2909fc6b0" podNamespace="calico-system" podName="calico-kube-controllers-dd49c77ff-998x6" Apr 30 03:29:33.153097 kubelet[2580]: I0430 03:29:33.152332 2580 topology_manager.go:215] "Topology Admit Handler" podUID="6ebed35a-bf55-4abf-96db-dbfd8e36485d" podNamespace="calico-apiserver" podName="calico-apiserver-86f865c57f-769hr" Apr 30 03:29:33.153097 kubelet[2580]: I0430 03:29:33.152433 2580 topology_manager.go:215] "Topology Admit Handler" podUID="87791251-4897-454d-aa64-599ddb0cfbb3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w9lq9" Apr 30 03:29:33.161960 systemd[1]: Created slice kubepods-burstable-pod87791251_4897_454d_aa64_599ddb0cfbb3.slice - libcontainer container kubepods-burstable-pod87791251_4897_454d_aa64_599ddb0cfbb3.slice. Apr 30 03:29:33.168241 systemd[1]: Created slice kubepods-besteffort-pod7124ff7f_f649_4e24_b218_0ed2909fc6b0.slice - libcontainer container kubepods-besteffort-pod7124ff7f_f649_4e24_b218_0ed2909fc6b0.slice. Apr 30 03:29:33.174292 systemd[1]: Created slice kubepods-besteffort-pod6ebed35a_bf55_4abf_96db_dbfd8e36485d.slice - libcontainer container kubepods-besteffort-pod6ebed35a_bf55_4abf_96db_dbfd8e36485d.slice. Apr 30 03:29:33.178927 systemd[1]: Created slice kubepods-besteffort-pod3151b42b_2e50_48c5_ab72_09ea525d3e59.slice - libcontainer container kubepods-besteffort-pod3151b42b_2e50_48c5_ab72_09ea525d3e59.slice. Apr 30 03:29:33.259930 kubelet[2580]: I0430 03:29:33.259829 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj75f\" (UniqueName: \"kubernetes.io/projected/6404c18c-30e9-4c84-a61e-d9e404ad3990-kube-api-access-pj75f\") pod \"coredns-7db6d8ff4d-x4ngb\" (UID: \"6404c18c-30e9-4c84-a61e-d9e404ad3990\") " pod="kube-system/coredns-7db6d8ff4d-x4ngb" Apr 30 03:29:33.259930 kubelet[2580]: I0430 03:29:33.259922 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6404c18c-30e9-4c84-a61e-d9e404ad3990-config-volume\") pod \"coredns-7db6d8ff4d-x4ngb\" (UID: \"6404c18c-30e9-4c84-a61e-d9e404ad3990\") " pod="kube-system/coredns-7db6d8ff4d-x4ngb" Apr 30 03:29:33.361141 kubelet[2580]: I0430 03:29:33.361067 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fb5m\" (UniqueName: \"kubernetes.io/projected/6ebed35a-bf55-4abf-96db-dbfd8e36485d-kube-api-access-7fb5m\") pod \"calico-apiserver-86f865c57f-769hr\" (UID: \"6ebed35a-bf55-4abf-96db-dbfd8e36485d\") " pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" Apr 30 03:29:33.361141 kubelet[2580]: I0430 03:29:33.361128 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7124ff7f-f649-4e24-b218-0ed2909fc6b0-tigera-ca-bundle\") pod \"calico-kube-controllers-dd49c77ff-998x6\" (UID: \"7124ff7f-f649-4e24-b218-0ed2909fc6b0\") " pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" Apr 30 03:29:33.361141 kubelet[2580]: I0430 03:29:33.361146 2580 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxt5l\" (UniqueName: \"kubernetes.io/projected/3151b42b-2e50-48c5-ab72-09ea525d3e59-kube-api-access-fxt5l\") pod \"calico-apiserver-86f865c57f-s7xvb\" (UID: \"3151b42b-2e50-48c5-ab72-09ea525d3e59\") " pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" Apr 30 03:29:33.361141 kubelet[2580]: I0430 03:29:33.361165 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4c6\" (UniqueName: \"kubernetes.io/projected/87791251-4897-454d-aa64-599ddb0cfbb3-kube-api-access-kv4c6\") pod \"coredns-7db6d8ff4d-w9lq9\" (UID: \"87791251-4897-454d-aa64-599ddb0cfbb3\") " pod="kube-system/coredns-7db6d8ff4d-w9lq9" Apr 30 03:29:33.361441 kubelet[2580]: I0430 03:29:33.361327 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6ebed35a-bf55-4abf-96db-dbfd8e36485d-calico-apiserver-certs\") pod \"calico-apiserver-86f865c57f-769hr\" (UID: \"6ebed35a-bf55-4abf-96db-dbfd8e36485d\") " pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" Apr 30 03:29:33.361441 kubelet[2580]: I0430 03:29:33.361377 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f69d\" (UniqueName: \"kubernetes.io/projected/7124ff7f-f649-4e24-b218-0ed2909fc6b0-kube-api-access-5f69d\") pod \"calico-kube-controllers-dd49c77ff-998x6\" (UID: \"7124ff7f-f649-4e24-b218-0ed2909fc6b0\") " pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" Apr 30 03:29:33.361441 kubelet[2580]: I0430 03:29:33.361412 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87791251-4897-454d-aa64-599ddb0cfbb3-config-volume\") pod \"coredns-7db6d8ff4d-w9lq9\" (UID: \"87791251-4897-454d-aa64-599ddb0cfbb3\") " pod="kube-system/coredns-7db6d8ff4d-w9lq9" Apr 30 03:29:33.361441 kubelet[2580]: I0430 03:29:33.361434 2580 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3151b42b-2e50-48c5-ab72-09ea525d3e59-calico-apiserver-certs\") pod \"calico-apiserver-86f865c57f-s7xvb\" (UID: \"3151b42b-2e50-48c5-ab72-09ea525d3e59\") " pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" Apr 30 03:29:33.452444 kubelet[2580]: E0430 03:29:33.452390 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:33.453093 containerd[1458]: time="2025-04-30T03:29:33.453040681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x4ngb,Uid:6404c18c-30e9-4c84-a61e-d9e404ad3990,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:33.532317 containerd[1458]: time="2025-04-30T03:29:33.532226699Z" level=info msg="shim disconnected" id=e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac namespace=k8s.io Apr 30 03:29:33.532317 containerd[1458]: time="2025-04-30T03:29:33.532303245Z" level=warning msg="cleaning up after shim disconnected" id=e347d49e169f960528049fa1394ce7a9d4d0a0afeb1961049377ca6bba2259ac namespace=k8s.io Apr 30 03:29:33.532317 containerd[1458]: time="2025-04-30T03:29:33.532314106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:29:33.572831 kubelet[2580]: E0430 
03:29:33.572396 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:33.573653 containerd[1458]: time="2025-04-30T03:29:33.573617711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 03:29:33.629870 containerd[1458]: time="2025-04-30T03:29:33.629737947Z" level=error msg="Failed to destroy network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:33.630231 containerd[1458]: time="2025-04-30T03:29:33.630198504Z" level=error msg="encountered an error cleaning up failed sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:33.630305 containerd[1458]: time="2025-04-30T03:29:33.630278356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x4ngb,Uid:6404c18c-30e9-4c84-a61e-d9e404ad3990,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:33.630654 kubelet[2580]: E0430 03:29:33.630590 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:33.630734 kubelet[2580]: E0430 03:29:33.630689 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x4ngb" Apr 30 03:29:33.630734 kubelet[2580]: E0430 03:29:33.630716 2580 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x4ngb" Apr 30 03:29:33.630807 kubelet[2580]: E0430 03:29:33.630769 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x4ngb_kube-system(6404c18c-30e9-4c84-a61e-d9e404ad3990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x4ngb_kube-system(6404c18c-30e9-4c84-a61e-d9e404ad3990)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x4ngb" podUID="6404c18c-30e9-4c84-a61e-d9e404ad3990" Apr 30 03:29:33.766009 kubelet[2580]: E0430 03:29:33.765944 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:33.766655 containerd[1458]: time="2025-04-30T03:29:33.766611124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w9lq9,Uid:87791251-4897-454d-aa64-599ddb0cfbb3,Namespace:kube-system,Attempt:0,}" Apr 30 03:29:33.771386 containerd[1458]: time="2025-04-30T03:29:33.771335957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd49c77ff-998x6,Uid:7124ff7f-f649-4e24-b218-0ed2909fc6b0,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:33.777156 containerd[1458]: time="2025-04-30T03:29:33.777100881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-769hr,Uid:6ebed35a-bf55-4abf-96db-dbfd8e36485d,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:33.781595 containerd[1458]: time="2025-04-30T03:29:33.781567602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-s7xvb,Uid:3151b42b-2e50-48c5-ab72-09ea525d3e59,Namespace:calico-apiserver,Attempt:0,}" Apr 30 03:29:34.077098 containerd[1458]: time="2025-04-30T03:29:34.077039481Z" level=error msg="Failed to destroy network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.077857 containerd[1458]: time="2025-04-30T03:29:34.077830165Z" level=error msg="encountered an error cleaning up failed sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.078167 containerd[1458]: time="2025-04-30T03:29:34.078143822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-769hr,Uid:6ebed35a-bf55-4abf-96db-dbfd8e36485d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.078955 kubelet[2580]: E0430 03:29:34.078880 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.079309 kubelet[2580]: E0430 03:29:34.078985 2580 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" Apr 30 03:29:34.079309 kubelet[2580]: E0430 03:29:34.079008 2580 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" Apr 30 03:29:34.079309 kubelet[2580]: E0430 03:29:34.079048 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f865c57f-769hr_calico-apiserver(6ebed35a-bf55-4abf-96db-dbfd8e36485d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f865c57f-769hr_calico-apiserver(6ebed35a-bf55-4abf-96db-dbfd8e36485d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" podUID="6ebed35a-bf55-4abf-96db-dbfd8e36485d" Apr 30 03:29:34.082280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306-shm.mount: Deactivated successfully. Apr 30 03:29:34.093761 containerd[1458]: time="2025-04-30T03:29:34.093668413Z" level=error msg="Failed to destroy network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.095559 containerd[1458]: time="2025-04-30T03:29:34.095423814Z" level=error msg="encountered an error cleaning up failed sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.095559 containerd[1458]: time="2025-04-30T03:29:34.095508064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w9lq9,Uid:87791251-4897-454d-aa64-599ddb0cfbb3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.098074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8-shm.mount: Deactivated successfully. 
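[editor's note] Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin refuses both ADD and DEL operations until calico/node has written /var/lib/calico/nodename, and that file only appears once the calico-node pod is actually running (its image pull was only requested at 03:29:33.573 above). Below is a minimal Go sketch of the readiness gate the error message describes; the file path is taken from the log, the polling loop is illustrative and is not Calico's actual implementation.

```go
// nodename_check.go — sketch of the gate behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
package main

import (
	"fmt"
	"os"
	"time"
)

const nodenameFile = "/var/lib/calico/nodename"

// calicoReady reports whether calico/node has written its node name,
// the precondition the CNI plugin checks before any ADD or DEL.
func calicoReady() (string, bool) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", false // typically os.ErrNotExist until calico-node starts
	}
	return string(data), true
}

func main() {
	for {
		if name, ok := calicoReady(); ok {
			fmt.Printf("calico/node is up on %q; pod networking can proceed\n", name)
			return
		}
		fmt.Println("waiting: /var/lib/calico/nodename not present yet")
		time.Sleep(2 * time.Second)
	}
}
```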
Apr 30 03:29:34.098378 kubelet[2580]: E0430 03:29:34.098053 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.098378 kubelet[2580]: E0430 03:29:34.098133 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w9lq9" Apr 30 03:29:34.098378 kubelet[2580]: E0430 03:29:34.098156 2580 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w9lq9" Apr 30 03:29:34.098482 kubelet[2580]: E0430 03:29:34.098200 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w9lq9_kube-system(87791251-4897-454d-aa64-599ddb0cfbb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w9lq9_kube-system(87791251-4897-454d-aa64-599ddb0cfbb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w9lq9" podUID="87791251-4897-454d-aa64-599ddb0cfbb3" Apr 30 03:29:34.102054 containerd[1458]: time="2025-04-30T03:29:34.101997072Z" level=error msg="Failed to destroy network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.102560 containerd[1458]: time="2025-04-30T03:29:34.102521430Z" level=error msg="encountered an error cleaning up failed sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.102612 containerd[1458]: time="2025-04-30T03:29:34.102577387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd49c77ff-998x6,Uid:7124ff7f-f649-4e24-b218-0ed2909fc6b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.102879 kubelet[2580]: E0430 03:29:34.102815 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.102977 kubelet[2580]: E0430 03:29:34.102952 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" Apr 30 03:29:34.103018 kubelet[2580]: E0430 03:29:34.102981 2580 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" Apr 30 03:29:34.103113 kubelet[2580]: E0430 03:29:34.103034 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dd49c77ff-998x6_calico-system(7124ff7f-f649-4e24-b218-0ed2909fc6b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dd49c77ff-998x6_calico-system(7124ff7f-f649-4e24-b218-0ed2909fc6b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" podUID="7124ff7f-f649-4e24-b218-0ed2909fc6b0" Apr 30 03:29:34.104383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857-shm.mount: Deactivated successfully. 
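[editor's note] Each failed sandbox still leaves a per-pod /dev/shm tmpfs mounted under containerd's CRI state directory; the "run-containerd-io.containerd.grpc.v1.cri-sandboxes-…-shm.mount: Deactivated successfully" entries are systemd unmounting those (the unit name is just the mount path with "/" escaped to "-"). The short Go sketch below lists any such mounts still present; the path prefix is inferred from the unit names in the log and is an assumption, not taken from containerd documentation.

```go
// shm_mounts.go — list leftover per-sandbox shm mounts of the kind the
// systemd "…-shm.mount: Deactivated successfully" entries refer to.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const sandboxShmPrefix = "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/"

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts format: <source> <mountpoint> <fstype> <options> 0 0
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && strings.HasPrefix(fields[1], sandboxShmPrefix) {
			fmt.Println("sandbox shm still mounted:", fields[1])
		}
	}
}
```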
Apr 30 03:29:34.114994 containerd[1458]: time="2025-04-30T03:29:34.114930623Z" level=error msg="Failed to destroy network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.115453 containerd[1458]: time="2025-04-30T03:29:34.115418932Z" level=error msg="encountered an error cleaning up failed sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.115510 containerd[1458]: time="2025-04-30T03:29:34.115470830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-s7xvb,Uid:3151b42b-2e50-48c5-ab72-09ea525d3e59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.115751 kubelet[2580]: E0430 03:29:34.115698 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.115833 kubelet[2580]: E0430 03:29:34.115773 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" Apr 30 03:29:34.115833 kubelet[2580]: E0430 03:29:34.115798 2580 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" Apr 30 03:29:34.115916 kubelet[2580]: E0430 03:29:34.115849 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f865c57f-s7xvb_calico-apiserver(3151b42b-2e50-48c5-ab72-09ea525d3e59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f865c57f-s7xvb_calico-apiserver(3151b42b-2e50-48c5-ab72-09ea525d3e59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" podUID="3151b42b-2e50-48c5-ab72-09ea525d3e59" Apr 30 03:29:34.117995 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6-shm.mount: Deactivated successfully. Apr 30 03:29:34.476045 systemd[1]: Created slice kubepods-besteffort-pode2df212b_6d14_4f4c_afa3_02f09ab15590.slice - libcontainer container kubepods-besteffort-pode2df212b_6d14_4f4c_afa3_02f09ab15590.slice. Apr 30 03:29:34.478738 containerd[1458]: time="2025-04-30T03:29:34.478695018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwhp2,Uid:e2df212b-6d14-4f4c-afa3-02f09ab15590,Namespace:calico-system,Attempt:0,}" Apr 30 03:29:34.538106 containerd[1458]: time="2025-04-30T03:29:34.538020687Z" level=error msg="Failed to destroy network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.538507 containerd[1458]: time="2025-04-30T03:29:34.538467979Z" level=error msg="encountered an error cleaning up failed sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.538566 containerd[1458]: time="2025-04-30T03:29:34.538536889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwhp2,Uid:e2df212b-6d14-4f4c-afa3-02f09ab15590,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.538879 kubelet[2580]: E0430 03:29:34.538822 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.538979 kubelet[2580]: E0430 03:29:34.538913 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:34.538979 kubelet[2580]: E0430 03:29:34.538937 2580 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-nwhp2" Apr 30 03:29:34.539051 kubelet[2580]: E0430 03:29:34.538986 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nwhp2_calico-system(e2df212b-6d14-4f4c-afa3-02f09ab15590)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nwhp2_calico-system(e2df212b-6d14-4f4c-afa3-02f09ab15590)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:34.575068 kubelet[2580]: I0430 03:29:34.574506 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:34.575333 containerd[1458]: time="2025-04-30T03:29:34.575299623Z" level=info msg="StopPodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\"" Apr 30 03:29:34.575508 containerd[1458]: time="2025-04-30T03:29:34.575486340Z" level=info msg="Ensure that sandbox 5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b in task-service has been cleanup successfully" Apr 30 03:29:34.576860 kubelet[2580]: I0430 03:29:34.575874 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:34.576941 containerd[1458]: time="2025-04-30T03:29:34.576261464Z" level=info msg="StopPodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\"" Apr 30 03:29:34.576941 containerd[1458]: time="2025-04-30T03:29:34.576400238Z" level=info msg="Ensure that sandbox 201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6 in task-service has been cleanup successfully" Apr 30 03:29:34.578111 kubelet[2580]: I0430 03:29:34.578079 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:34.579454 containerd[1458]: time="2025-04-30T03:29:34.578598952Z" level=info msg="StopPodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\"" Apr 30 03:29:34.579454 containerd[1458]: time="2025-04-30T03:29:34.578841884Z" level=info msg="Ensure that sandbox 9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306 in task-service has been cleanup successfully" Apr 30 03:29:34.579576 kubelet[2580]: I0430 03:29:34.579552 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:34.580187 containerd[1458]: time="2025-04-30T03:29:34.580151596Z" level=info msg="StopPodSandbox for \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\"" Apr 30 03:29:34.580335 containerd[1458]: time="2025-04-30T03:29:34.580314717Z" level=info msg="Ensure that sandbox 5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857 in task-service has been cleanup successfully" Apr 30 03:29:34.599745 kubelet[2580]: I0430 03:29:34.599387 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" 
Apr 30 03:29:34.601122 containerd[1458]: time="2025-04-30T03:29:34.601077125Z" level=info msg="StopPodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\"" Apr 30 03:29:34.601334 containerd[1458]: time="2025-04-30T03:29:34.601299709Z" level=info msg="Ensure that sandbox 1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8 in task-service has been cleanup successfully" Apr 30 03:29:34.604688 kubelet[2580]: I0430 03:29:34.604646 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:34.607214 containerd[1458]: time="2025-04-30T03:29:34.607166562Z" level=info msg="StopPodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\"" Apr 30 03:29:34.607786 containerd[1458]: time="2025-04-30T03:29:34.607743039Z" level=info msg="Ensure that sandbox 6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f in task-service has been cleanup successfully" Apr 30 03:29:34.637941 containerd[1458]: time="2025-04-30T03:29:34.637849468Z" level=error msg="StopPodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" failed" error="failed to destroy network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.638400 kubelet[2580]: E0430 03:29:34.638245 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:34.638400 kubelet[2580]: E0430 03:29:34.638318 2580 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b"} Apr 30 03:29:34.638487 kubelet[2580]: E0430 03:29:34.638406 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2df212b-6d14-4f4c-afa3-02f09ab15590\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:34.638487 kubelet[2580]: E0430 03:29:34.638438 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2df212b-6d14-4f4c-afa3-02f09ab15590\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nwhp2" podUID="e2df212b-6d14-4f4c-afa3-02f09ab15590" Apr 30 03:29:34.651978 containerd[1458]: 
time="2025-04-30T03:29:34.651912889Z" level=error msg="StopPodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" failed" error="failed to destroy network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.652291 kubelet[2580]: E0430 03:29:34.652182 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:34.652291 kubelet[2580]: E0430 03:29:34.652254 2580 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306"} Apr 30 03:29:34.652388 kubelet[2580]: E0430 03:29:34.652306 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ebed35a-bf55-4abf-96db-dbfd8e36485d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:34.652388 kubelet[2580]: E0430 03:29:34.652340 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6ebed35a-bf55-4abf-96db-dbfd8e36485d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" podUID="6ebed35a-bf55-4abf-96db-dbfd8e36485d" Apr 30 03:29:34.655780 containerd[1458]: time="2025-04-30T03:29:34.655700126Z" level=error msg="StopPodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" failed" error="failed to destroy network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.656145 kubelet[2580]: E0430 03:29:34.656087 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:34.656201 kubelet[2580]: E0430 03:29:34.656164 2580 kuberuntime_manager.go:1375] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6"} Apr 30 03:29:34.656266 kubelet[2580]: E0430 03:29:34.656224 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3151b42b-2e50-48c5-ab72-09ea525d3e59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:34.656354 kubelet[2580]: E0430 03:29:34.656264 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3151b42b-2e50-48c5-ab72-09ea525d3e59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" podUID="3151b42b-2e50-48c5-ab72-09ea525d3e59" Apr 30 03:29:34.659166 containerd[1458]: time="2025-04-30T03:29:34.659117428Z" level=error msg="StopPodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" failed" error="failed to destroy network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.659424 kubelet[2580]: E0430 03:29:34.659374 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:34.659497 kubelet[2580]: E0430 03:29:34.659429 2580 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8"} Apr 30 03:29:34.659675 kubelet[2580]: E0430 03:29:34.659648 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87791251-4897-454d-aa64-599ddb0cfbb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:34.659753 kubelet[2580]: E0430 03:29:34.659691 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87791251-4897-454d-aa64-599ddb0cfbb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w9lq9" podUID="87791251-4897-454d-aa64-599ddb0cfbb3" Apr 30 03:29:34.664489 containerd[1458]: time="2025-04-30T03:29:34.664431009Z" level=error msg="StopPodSandbox for \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\" failed" error="failed to destroy network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.664800 kubelet[2580]: E0430 03:29:34.664740 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:34.664865 kubelet[2580]: E0430 03:29:34.664807 2580 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857"} Apr 30 03:29:34.664865 kubelet[2580]: E0430 03:29:34.664849 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7124ff7f-f649-4e24-b218-0ed2909fc6b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:34.665000 kubelet[2580]: E0430 03:29:34.664875 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7124ff7f-f649-4e24-b218-0ed2909fc6b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" podUID="7124ff7f-f649-4e24-b218-0ed2909fc6b0" Apr 30 03:29:34.672171 containerd[1458]: time="2025-04-30T03:29:34.672102717Z" level=error msg="StopPodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" failed" error="failed to destroy network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:34.672397 kubelet[2580]: E0430 03:29:34.672335 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:34.672455 kubelet[2580]: E0430 03:29:34.672409 2580 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f"} Apr 30 03:29:34.672483 kubelet[2580]: E0430 03:29:34.672461 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6404c18c-30e9-4c84-a61e-d9e404ad3990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:34.672551 kubelet[2580]: E0430 03:29:34.672494 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6404c18c-30e9-4c84-a61e-d9e404ad3990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x4ngb" podUID="6404c18c-30e9-4c84-a61e-d9e404ad3990" Apr 30 03:29:36.594530 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:54986.service - OpenSSH per-connection server daemon (10.0.0.1:54986). Apr 30 03:29:36.987010 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 54986 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:36.988960 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:36.993735 systemd-logind[1443]: New session 11 of user core. Apr 30 03:29:37.003088 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:29:37.160161 sshd[3813]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:37.166580 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:54986.service: Deactivated successfully. Apr 30 03:29:37.170517 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:29:37.171537 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:29:37.173280 systemd-logind[1443]: Removed session 11. Apr 30 03:29:39.079712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530622995.mount: Deactivated successfully. 
Apr 30 03:29:41.200406 containerd[1458]: time="2025-04-30T03:29:41.200294868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.259486 containerd[1458]: time="2025-04-30T03:29:41.259402235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 03:29:41.300128 containerd[1458]: time="2025-04-30T03:29:41.300037092Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.454083 containerd[1458]: time="2025-04-30T03:29:41.453917243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:41.454654 containerd[1458]: time="2025-04-30T03:29:41.454616983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.880767992s" Apr 30 03:29:41.454714 containerd[1458]: time="2025-04-30T03:29:41.454656057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 03:29:41.462613 containerd[1458]: time="2025-04-30T03:29:41.462569063Z" level=info msg="CreateContainer within sandbox \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 03:29:42.172122 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:55002.service - OpenSSH per-connection server daemon (10.0.0.1:55002). Apr 30 03:29:42.556565 sshd[3839]: Accepted publickey for core from 10.0.0.1 port 55002 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:42.560345 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:42.564910 systemd-logind[1443]: New session 12 of user core. Apr 30 03:29:42.573063 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:29:42.802461 sshd[3839]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:42.808082 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:55002.service: Deactivated successfully. Apr 30 03:29:42.810623 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:29:42.811299 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:29:42.812395 systemd-logind[1443]: Removed session 12. 
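[editor's note] The PullImage requested at 03:29:33 completes here after roughly 7.88 s, and kubelet immediately asks containerd to create the calico-node container inside its existing sandbox. The sketch below performs the same pull through the containerd Go client; the socket path and the "k8s.io" namespace are the containerd defaults for CRI-managed images on a node like this and are stated as assumptions.

```go
// pull_image.go — sketch of the pull recorded above
// ("Pulled image … in 7.880767992s"), done via the containerd client.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```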
Apr 30 03:29:42.831494 containerd[1458]: time="2025-04-30T03:29:42.831411826Z" level=info msg="CreateContainer within sandbox \"8ac37e8f5ae103ff86811a33b053703c0985538db2b1e8c2d851ef66320dfbff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5334e713520b6bca183d5a594d7158639ec5f531906286ae92345e8063ddac42\"" Apr 30 03:29:42.832158 containerd[1458]: time="2025-04-30T03:29:42.832006065Z" level=info msg="StartContainer for \"5334e713520b6bca183d5a594d7158639ec5f531906286ae92345e8063ddac42\"" Apr 30 03:29:42.917158 systemd[1]: Started cri-containerd-5334e713520b6bca183d5a594d7158639ec5f531906286ae92345e8063ddac42.scope - libcontainer container 5334e713520b6bca183d5a594d7158639ec5f531906286ae92345e8063ddac42. Apr 30 03:29:43.092313 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 03:29:43.093106 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 03:29:43.097351 containerd[1458]: time="2025-04-30T03:29:43.097224865Z" level=info msg="StartContainer for \"5334e713520b6bca183d5a594d7158639ec5f531906286ae92345e8063ddac42\" returns successfully" Apr 30 03:29:43.425262 kubelet[2580]: I0430 03:29:43.425069 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:43.426235 kubelet[2580]: E0430 03:29:43.426188 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:43.623781 kubelet[2580]: E0430 03:29:43.623744 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:43.624021 kubelet[2580]: E0430 03:29:43.624000 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:44.626350 kubelet[2580]: E0430 03:29:44.626311 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:44.676411 kubelet[2580]: I0430 03:29:44.676303 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n8z9s" podStartSLOduration=4.157467132 podStartE2EDuration="26.676281569s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="2025-04-30 03:29:18.936592196 +0000 UTC m=+26.549999810" lastFinishedPulling="2025-04-30 03:29:41.455406633 +0000 UTC m=+49.068814247" observedRunningTime="2025-04-30 03:29:44.674242675 +0000 UTC m=+52.287650289" watchObservedRunningTime="2025-04-30 03:29:44.676281569 +0000 UTC m=+52.289689183" Apr 30 03:29:45.467307 containerd[1458]: time="2025-04-30T03:29:45.467225830Z" level=info msg="StopPodSandbox for \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\"" Apr 30 03:29:45.541605 containerd[1458]: time="2025-04-30T03:29:45.541523095Z" level=error msg="StopPodSandbox for \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\" failed" error="failed to destroy network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 03:29:45.541857 
kubelet[2580]: E0430 03:29:45.541801 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:45.541857 kubelet[2580]: E0430 03:29:45.541850 2580 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857"} Apr 30 03:29:45.541960 kubelet[2580]: E0430 03:29:45.541885 2580 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7124ff7f-f649-4e24-b218-0ed2909fc6b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 30 03:29:45.541960 kubelet[2580]: E0430 03:29:45.541921 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7124ff7f-f649-4e24-b218-0ed2909fc6b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" podUID="7124ff7f-f649-4e24-b218-0ed2909fc6b0" Apr 30 03:29:46.467776 containerd[1458]: time="2025-04-30T03:29:46.467696094Z" level=info msg="StopPodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\"" Apr 30 03:29:46.468430 containerd[1458]: time="2025-04-30T03:29:46.467785564Z" level=info msg="StopPodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\"" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.563 [INFO][4025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.564 [INFO][4025] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" iface="eth0" netns="/var/run/netns/cni-31cacea3-f62a-5ab8-56fa-ab3d44912352" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.564 [INFO][4025] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" iface="eth0" netns="/var/run/netns/cni-31cacea3-f62a-5ab8-56fa-ab3d44912352" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.566 [INFO][4025] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" iface="eth0" netns="/var/run/netns/cni-31cacea3-f62a-5ab8-56fa-ab3d44912352" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.566 [INFO][4025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.567 [INFO][4025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.619 [INFO][4043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.619 [INFO][4043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.619 [INFO][4043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.628 [WARNING][4043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.628 [INFO][4043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.630 [INFO][4043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:46.635791 containerd[1458]: 2025-04-30 03:29:46.633 [INFO][4025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:46.636754 containerd[1458]: time="2025-04-30T03:29:46.636712974Z" level=info msg="TearDown network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" successfully" Apr 30 03:29:46.636754 containerd[1458]: time="2025-04-30T03:29:46.636752970Z" level=info msg="StopPodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" returns successfully" Apr 30 03:29:46.637634 kubelet[2580]: E0430 03:29:46.637596 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:46.638560 containerd[1458]: time="2025-04-30T03:29:46.638283908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w9lq9,Uid:87791251-4897-454d-aa64-599ddb0cfbb3,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:46.639281 systemd[1]: run-netns-cni\x2d31cacea3\x2df62a\x2d5ab8\x2d56fa\x2dab3d44912352.mount: Deactivated successfully. 
Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.561 [INFO][4024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.561 [INFO][4024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" iface="eth0" netns="/var/run/netns/cni-5134b911-6193-d19e-f562-209e312d1e9b" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.563 [INFO][4024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" iface="eth0" netns="/var/run/netns/cni-5134b911-6193-d19e-f562-209e312d1e9b" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.566 [INFO][4024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" iface="eth0" netns="/var/run/netns/cni-5134b911-6193-d19e-f562-209e312d1e9b" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.566 [INFO][4024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.566 [INFO][4024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.619 [INFO][4044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.620 [INFO][4044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.630 [INFO][4044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.639 [WARNING][4044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.639 [INFO][4044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.640 [INFO][4044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:46.646372 containerd[1458]: 2025-04-30 03:29:46.643 [INFO][4024] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:46.647232 containerd[1458]: time="2025-04-30T03:29:46.646731189Z" level=info msg="TearDown network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" successfully" Apr 30 03:29:46.647232 containerd[1458]: time="2025-04-30T03:29:46.646752459Z" level=info msg="StopPodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" returns successfully" Apr 30 03:29:46.647456 containerd[1458]: time="2025-04-30T03:29:46.647314888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-769hr,Uid:6ebed35a-bf55-4abf-96db-dbfd8e36485d,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:46.649001 systemd[1]: run-netns-cni\x2d5134b911\x2d6193\x2dd19e\x2df562\x2d209e312d1e9b.mount: Deactivated successfully. Apr 30 03:29:47.317471 kernel: bpftool[4224]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 03:29:47.398799 systemd-networkd[1398]: cali5ffce72975a: Link UP Apr 30 03:29:47.399083 systemd-networkd[1398]: cali5ffce72975a: Gained carrier Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.197 [INFO][4148] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.257 [INFO][4148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0 coredns-7db6d8ff4d- kube-system 87791251-4897-454d-aa64-599ddb0cfbb3 931 0 2025-04-30 03:29:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-w9lq9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ffce72975a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.257 [INFO][4148] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.336 [INFO][4211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" HandleID="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.353 [INFO][4211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" HandleID="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050f20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-w9lq9", "timestamp":"2025-04-30 03:29:47.335327272 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.354 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.354 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.354 [INFO][4211] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.356 [INFO][4211] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.365 [INFO][4211] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.369 [INFO][4211] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.370 [INFO][4211] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.373 [INFO][4211] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.373 [INFO][4211] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.374 [INFO][4211] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007 Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.380 [INFO][4211] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.385 [INFO][4211] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.386 [INFO][4211] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" host="localhost" Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.386 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
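The IPAM records above show the shape of Calico's allocation on this node: the pool is carved into /26 blocks, the block 192.168.88.128/26 is affine to "localhost", and under the host-wide lock the allocator picks a free address from that block (here 192.168.88.129). A toy model of that last step, assuming a simple "first unused address in the block" policy rather than Calico's real ipam.go logic:

```go
// Illustrative only: pick the next free address from a host-affine /26 block.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		// Assumption for the sketch: .128 is already taken on the node,
		// which would explain why the first pod in the log received .129.
		netip.MustParseAddr("192.168.88.128"): true,
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("would assign:", a) // 192.168.88.129
	}
}
```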
Apr 30 03:29:47.426812 containerd[1458]: 2025-04-30 03:29:47.386 [INFO][4211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" HandleID="k8s-pod-network.b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.429183 containerd[1458]: 2025-04-30 03:29:47.389 [INFO][4148] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"87791251-4897-454d-aa64-599ddb0cfbb3", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-w9lq9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ffce72975a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.429183 containerd[1458]: 2025-04-30 03:29:47.390 [INFO][4148] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.429183 containerd[1458]: 2025-04-30 03:29:47.390 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ffce72975a ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.429183 containerd[1458]: 2025-04-30 03:29:47.400 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.429183 containerd[1458]: 2025-04-30 03:29:47.401 
[INFO][4148] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"87791251-4897-454d-aa64-599ddb0cfbb3", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007", Pod:"coredns-7db6d8ff4d-w9lq9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ffce72975a", MAC:"7e:d2:d6:ab:95:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.429183 containerd[1458]: 2025-04-30 03:29:47.421 [INFO][4148] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w9lq9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:47.457126 systemd-networkd[1398]: cali7883731be0d: Link UP Apr 30 03:29:47.457715 systemd-networkd[1398]: cali7883731be0d: Gained carrier Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.217 [INFO][4163] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.257 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0 calico-apiserver-86f865c57f- calico-apiserver 6ebed35a-bf55-4abf-96db-dbfd8e36485d 930 0 2025-04-30 03:29:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f865c57f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86f865c57f-769hr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7883731be0d [] []}} 
ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.257 [INFO][4163] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.355 [INFO][4209] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" HandleID="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.364 [INFO][4209] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" HandleID="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042db40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86f865c57f-769hr", "timestamp":"2025-04-30 03:29:47.35557078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.364 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.386 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.386 [INFO][4209] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.388 [INFO][4209] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.395 [INFO][4209] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.404 [INFO][4209] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.415 [INFO][4209] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.422 [INFO][4209] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.422 [INFO][4209] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.424 [INFO][4209] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7 Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.430 [INFO][4209] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.448 [INFO][4209] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.448 [INFO][4209] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" host="localhost" Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.448 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:47.544477 containerd[1458]: 2025-04-30 03:29:47.448 [INFO][4209] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" HandleID="k8s-pod-network.ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.545561 containerd[1458]: 2025-04-30 03:29:47.452 [INFO][4163] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ebed35a-bf55-4abf-96db-dbfd8e36485d", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86f865c57f-769hr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7883731be0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.545561 containerd[1458]: 2025-04-30 03:29:47.452 [INFO][4163] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.545561 containerd[1458]: 2025-04-30 03:29:47.452 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7883731be0d ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.545561 containerd[1458]: 2025-04-30 03:29:47.457 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.545561 containerd[1458]: 2025-04-30 03:29:47.458 [INFO][4163] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ebed35a-bf55-4abf-96db-dbfd8e36485d", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7", Pod:"calico-apiserver-86f865c57f-769hr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7883731be0d", MAC:"32:76:72:44:df:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:47.545561 containerd[1458]: 2025-04-30 03:29:47.540 [INFO][4163] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-769hr" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:47.681620 systemd-networkd[1398]: vxlan.calico: Link UP Apr 30 03:29:47.681633 systemd-networkd[1398]: vxlan.calico: Gained carrier Apr 30 03:29:47.703785 containerd[1458]: time="2025-04-30T03:29:47.702658572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:47.703785 containerd[1458]: time="2025-04-30T03:29:47.702716782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:47.703785 containerd[1458]: time="2025-04-30T03:29:47.702733494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:47.703785 containerd[1458]: time="2025-04-30T03:29:47.702819377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:47.726338 systemd[1]: Started cri-containerd-b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007.scope - libcontainer container b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007. Apr 30 03:29:47.743792 containerd[1458]: time="2025-04-30T03:29:47.740256832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:47.743792 containerd[1458]: time="2025-04-30T03:29:47.741646040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:47.743792 containerd[1458]: time="2025-04-30T03:29:47.741942823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:47.744182 containerd[1458]: time="2025-04-30T03:29:47.743996073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:47.749323 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:29:47.771080 systemd[1]: Started cri-containerd-ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7.scope - libcontainer container ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7. Apr 30 03:29:47.788947 containerd[1458]: time="2025-04-30T03:29:47.788560560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w9lq9,Uid:87791251-4897-454d-aa64-599ddb0cfbb3,Namespace:kube-system,Attempt:1,} returns sandbox id \"b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007\"" Apr 30 03:29:47.789578 kubelet[2580]: E0430 03:29:47.789534 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:47.792777 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:29:47.794315 containerd[1458]: time="2025-04-30T03:29:47.794163459Z" level=info msg="CreateContainer within sandbox \"b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:47.817844 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142). Apr 30 03:29:47.837285 containerd[1458]: time="2025-04-30T03:29:47.837199905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-769hr,Uid:6ebed35a-bf55-4abf-96db-dbfd8e36485d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7\"" Apr 30 03:29:47.839761 containerd[1458]: time="2025-04-30T03:29:47.839719129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:47.877407 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:47.880145 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:47.886337 systemd-logind[1443]: New session 13 of user core. Apr 30 03:29:47.902243 systemd[1]: Started session-13.scope - Session 13 of User core. 
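The repeated kubelet dns.go warning ("Nameserver limits were exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") is benign: the libc resolver only honours three nameservers, so kubelet trims the host's resolv.conf to the first three entries and logs the rest as omitted. A small sketch of that trimming, assuming a host resolv.conf with a fourth server (8.8.4.4 is an assumption for illustration, not taken from this log):

```go
// Sketch of the 3-nameserver cap behind the kubelet dns.go warning above.
package main

import "fmt"

const maxNameservers = 3 // resolv.conf limit that kubelet enforces per pod

func applyNameserverLimit(ns []string) (kept, dropped []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // assumed host list
	kept, dropped := applyNameserverLimit(host)
	fmt.Println("applied nameserver line:", kept)
	if len(dropped) > 0 {
		fmt.Println("omitted (over the limit):", dropped)
	}
}
```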
Apr 30 03:29:47.967871 containerd[1458]: time="2025-04-30T03:29:47.967733660Z" level=info msg="CreateContainer within sandbox \"b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f26790a35d3ab2be8b7a798e7d6724c98515ef2d3b5e8685dc0744051b61a815\"" Apr 30 03:29:47.969181 containerd[1458]: time="2025-04-30T03:29:47.969141042Z" level=info msg="StartContainer for \"f26790a35d3ab2be8b7a798e7d6724c98515ef2d3b5e8685dc0744051b61a815\"" Apr 30 03:29:48.022489 systemd[1]: Started cri-containerd-f26790a35d3ab2be8b7a798e7d6724c98515ef2d3b5e8685dc0744051b61a815.scope - libcontainer container f26790a35d3ab2be8b7a798e7d6724c98515ef2d3b5e8685dc0744051b61a815. Apr 30 03:29:48.080131 sshd[4377]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:48.090991 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:53142.service: Deactivated successfully. Apr 30 03:29:48.093624 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:29:48.094457 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:29:48.107356 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:53152.service - OpenSSH per-connection server daemon (10.0.0.1:53152). Apr 30 03:29:48.108628 systemd-logind[1443]: Removed session 13. Apr 30 03:29:48.144885 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 53152 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:48.146965 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:48.152240 systemd-logind[1443]: New session 14 of user core. Apr 30 03:29:48.163268 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:29:48.387657 containerd[1458]: time="2025-04-30T03:29:48.387605858Z" level=info msg="StartContainer for \"f26790a35d3ab2be8b7a798e7d6724c98515ef2d3b5e8685dc0744051b61a815\" returns successfully" Apr 30 03:29:48.467414 containerd[1458]: time="2025-04-30T03:29:48.467276239Z" level=info msg="StopPodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\"" Apr 30 03:29:48.644949 kubelet[2580]: E0430 03:29:48.643846 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:48.677106 systemd-networkd[1398]: cali5ffce72975a: Gained IPv6LL Apr 30 03:29:48.950417 sshd[4463]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:48.958267 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:53152.service: Deactivated successfully. Apr 30 03:29:48.960573 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:29:48.962170 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:29:48.968299 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:53156.service - OpenSSH per-connection server daemon (10.0.0.1:53156). Apr 30 03:29:48.969781 systemd-logind[1443]: Removed session 14. Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.621 [INFO][4491] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.621 [INFO][4491] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" iface="eth0" netns="/var/run/netns/cni-8b9b0356-ee74-964a-67c6-2461952e78f4" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.621 [INFO][4491] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" iface="eth0" netns="/var/run/netns/cni-8b9b0356-ee74-964a-67c6-2461952e78f4" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.621 [INFO][4491] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" iface="eth0" netns="/var/run/netns/cni-8b9b0356-ee74-964a-67c6-2461952e78f4" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.621 [INFO][4491] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.622 [INFO][4491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.650 [INFO][4500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.650 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.650 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.945 [WARNING][4500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.945 [INFO][4500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.963 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:48.969863 containerd[1458]: 2025-04-30 03:29:48.966 [INFO][4491] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:48.971057 containerd[1458]: time="2025-04-30T03:29:48.970090723Z" level=info msg="TearDown network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" successfully" Apr 30 03:29:48.971057 containerd[1458]: time="2025-04-30T03:29:48.970118596Z" level=info msg="StopPodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" returns successfully" Apr 30 03:29:48.971057 containerd[1458]: time="2025-04-30T03:29:48.970731650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwhp2,Uid:e2df212b-6d14-4f4c-afa3-02f09ab15590,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:48.973272 systemd[1]: run-netns-cni\x2d8b9b0356\x2dee74\x2d964a\x2d67c6\x2d2461952e78f4.mount: Deactivated successfully. Apr 30 03:29:49.001412 kubelet[2580]: I0430 03:29:49.001320 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w9lq9" podStartSLOduration=43.001300928 podStartE2EDuration="43.001300928s" podCreationTimestamp="2025-04-30 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:49.00113358 +0000 UTC m=+56.614541194" watchObservedRunningTime="2025-04-30 03:29:49.001300928 +0000 UTC m=+56.614708542" Apr 30 03:29:49.027774 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 53156 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:49.029662 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:49.034105 systemd-logind[1443]: New session 15 of user core. Apr 30 03:29:49.043054 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:29:49.444204 systemd-networkd[1398]: cali7883731be0d: Gained IPv6LL Apr 30 03:29:49.467947 containerd[1458]: time="2025-04-30T03:29:49.467873589Z" level=info msg="StopPodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\"" Apr 30 03:29:49.529371 sshd[4511]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:49.533445 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:53156.service: Deactivated successfully. Apr 30 03:29:49.535884 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:29:49.536606 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:29:49.537422 systemd-logind[1443]: Removed session 15. Apr 30 03:29:49.646125 kubelet[2580]: E0430 03:29:49.645886 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:49.700097 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:49.978 [INFO][4548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:49.978 [INFO][4548] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" iface="eth0" netns="/var/run/netns/cni-a1e325d8-98c0-68eb-627b-ede15104dfc9" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:49.978 [INFO][4548] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" iface="eth0" netns="/var/run/netns/cni-a1e325d8-98c0-68eb-627b-ede15104dfc9" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:49.979 [INFO][4548] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" iface="eth0" netns="/var/run/netns/cni-a1e325d8-98c0-68eb-627b-ede15104dfc9" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:49.979 [INFO][4548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:49.979 [INFO][4548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.021 [INFO][4575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.022 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.022 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.029 [WARNING][4575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.029 [INFO][4575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.031 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:50.036292 containerd[1458]: 2025-04-30 03:29:50.033 [INFO][4548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:50.041597 containerd[1458]: time="2025-04-30T03:29:50.041493944Z" level=info msg="TearDown network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" successfully" Apr 30 03:29:50.042831 systemd[1]: run-netns-cni\x2da1e325d8\x2d98c0\x2d68eb\x2d627b\x2dede15104dfc9.mount: Deactivated successfully. 
Apr 30 03:29:50.044816 kubelet[2580]: E0430 03:29:50.043595 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:50.045408 containerd[1458]: time="2025-04-30T03:29:50.042988723Z" level=info msg="StopPodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" returns successfully" Apr 30 03:29:50.045408 containerd[1458]: time="2025-04-30T03:29:50.044181369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x4ngb,Uid:6404c18c-30e9-4c84-a61e-d9e404ad3990,Namespace:kube-system,Attempt:1,}" Apr 30 03:29:50.379603 systemd-networkd[1398]: cali5321a1f6a6d: Link UP Apr 30 03:29:50.380255 systemd-networkd[1398]: cali5321a1f6a6d: Gained carrier Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:49.916 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nwhp2-eth0 csi-node-driver- calico-system e2df212b-6d14-4f4c-afa3-02f09ab15590 950 0 2025-04-30 03:29:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nwhp2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5321a1f6a6d [] []}} ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:49.916 [INFO][4559] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.022 [INFO][4581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" HandleID="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.031 [INFO][4581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" HandleID="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000503d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nwhp2", "timestamp":"2025-04-30 03:29:50.022021904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.031 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.032 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.032 [INFO][4581] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.034 [INFO][4581] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.039 [INFO][4581] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.045 [INFO][4581] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.047 [INFO][4581] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.049 [INFO][4581] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.049 [INFO][4581] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.051 [INFO][4581] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3 Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.075 [INFO][4581] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.372 [INFO][4581] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.372 [INFO][4581] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" host="localhost" Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.372 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
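The "Trying affinity for 192.168.88.128/26" and "Affinity is confirmed and block has been loaded" lines reflect Calico's block affinity: this node owns that /26, so every pod scheduled here should receive an address inside it, which is why the three sandboxes in this log get .129, .130 and .131 in turn. A tiny containment check over those three addresses (illustrative, not Calico code):

```go
// Verify that the addresses assigned in this log all fall inside the node's
// affine block, as block affinity implies.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // node's affine block
	for _, s := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.131"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```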
Apr 30 03:29:50.400199 containerd[1458]: 2025-04-30 03:29:50.372 [INFO][4581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" HandleID="k8s-pod-network.6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.402994 containerd[1458]: 2025-04-30 03:29:50.376 [INFO][4559] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nwhp2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2df212b-6d14-4f4c-afa3-02f09ab15590", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nwhp2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5321a1f6a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:50.402994 containerd[1458]: 2025-04-30 03:29:50.376 [INFO][4559] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.402994 containerd[1458]: 2025-04-30 03:29:50.376 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5321a1f6a6d ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.402994 containerd[1458]: 2025-04-30 03:29:50.380 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.402994 containerd[1458]: 2025-04-30 03:29:50.380 [INFO][4559] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nwhp2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2df212b-6d14-4f4c-afa3-02f09ab15590", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3", Pod:"csi-node-driver-nwhp2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5321a1f6a6d", MAC:"be:25:94:21:9c:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:50.402994 containerd[1458]: 2025-04-30 03:29:50.395 [INFO][4559] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3" Namespace="calico-system" Pod="csi-node-driver-nwhp2" WorkloadEndpoint="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:50.468282 containerd[1458]: time="2025-04-30T03:29:50.467738656Z" level=info msg="StopPodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\"" Apr 30 03:29:50.647959 kubelet[2580]: E0430 03:29:50.647791 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:50.727451 containerd[1458]: time="2025-04-30T03:29:50.727350099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:50.756928 containerd[1458]: time="2025-04-30T03:29:50.728330612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:50.756928 containerd[1458]: time="2025-04-30T03:29:50.728377311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:50.756928 containerd[1458]: time="2025-04-30T03:29:50.728606015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.678 [INFO][4624] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.679 [INFO][4624] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" iface="eth0" netns="/var/run/netns/cni-b37f3fac-9921-6e49-0fbc-c9e9aee2d0e7" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.680 [INFO][4624] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" iface="eth0" netns="/var/run/netns/cni-b37f3fac-9921-6e49-0fbc-c9e9aee2d0e7" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.680 [INFO][4624] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" iface="eth0" netns="/var/run/netns/cni-b37f3fac-9921-6e49-0fbc-c9e9aee2d0e7" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.680 [INFO][4624] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.680 [INFO][4624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.713 [INFO][4632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.756 [INFO][4632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.760 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.770 [WARNING][4632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.770 [INFO][4632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.773 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:50.785064 containerd[1458]: 2025-04-30 03:29:50.778 [INFO][4624] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:50.786397 containerd[1458]: time="2025-04-30T03:29:50.785336562Z" level=info msg="TearDown network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" successfully" Apr 30 03:29:50.786397 containerd[1458]: time="2025-04-30T03:29:50.785366138Z" level=info msg="StopPodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" returns successfully" Apr 30 03:29:50.786397 containerd[1458]: time="2025-04-30T03:29:50.786288670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-s7xvb,Uid:3151b42b-2e50-48c5-ab72-09ea525d3e59,Namespace:calico-apiserver,Attempt:1,}" Apr 30 03:29:50.791264 systemd[1]: Started cri-containerd-6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3.scope - libcontainer container 6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3. Apr 30 03:29:50.811498 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:29:50.842345 containerd[1458]: time="2025-04-30T03:29:50.842202966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nwhp2,Uid:e2df212b-6d14-4f4c-afa3-02f09ab15590,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3\"" Apr 30 03:29:50.923777 systemd-networkd[1398]: cali2b4a67e2a99: Link UP Apr 30 03:29:50.924770 systemd-networkd[1398]: cali2b4a67e2a99: Gained carrier Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.767 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0 coredns-7db6d8ff4d- kube-system 6404c18c-30e9-4c84-a61e-d9e404ad3990 978 0 2025-04-30 03:29:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-x4ngb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b4a67e2a99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.767 [INFO][4642] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.828 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" HandleID="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.841 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" HandleID="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fcd60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-x4ngb", "timestamp":"2025-04-30 03:29:50.828292161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.841 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.841 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.841 [INFO][4684] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.845 [INFO][4684] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.856 [INFO][4684] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.877 [INFO][4684] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.879 [INFO][4684] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.882 [INFO][4684] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.882 [INFO][4684] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.884 [INFO][4684] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4 Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.892 [INFO][4684] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.903 [INFO][4684] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.903 [INFO][4684] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" host="localhost" Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.903 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:50.950735 containerd[1458]: 2025-04-30 03:29:50.903 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" HandleID="k8s-pod-network.f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.951631 containerd[1458]: 2025-04-30 03:29:50.908 [INFO][4642] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6404c18c-30e9-4c84-a61e-d9e404ad3990", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-x4ngb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4a67e2a99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:50.951631 containerd[1458]: 2025-04-30 03:29:50.908 [INFO][4642] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.951631 containerd[1458]: 2025-04-30 03:29:50.908 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b4a67e2a99 ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.951631 containerd[1458]: 2025-04-30 03:29:50.925 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.951631 containerd[1458]: 2025-04-30 03:29:50.927 
[INFO][4642] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6404c18c-30e9-4c84-a61e-d9e404ad3990", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4", Pod:"coredns-7db6d8ff4d-x4ngb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4a67e2a99", MAC:"b6:48:2a:95:a6:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:50.951631 containerd[1458]: 2025-04-30 03:29:50.940 [INFO][4642] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x4ngb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:50.984433 containerd[1458]: time="2025-04-30T03:29:50.984017559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:50.984433 containerd[1458]: time="2025-04-30T03:29:50.984173716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:50.984433 containerd[1458]: time="2025-04-30T03:29:50.984225394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:50.984826 containerd[1458]: time="2025-04-30T03:29:50.984379197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:51.009702 systemd[1]: Started cri-containerd-f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4.scope - libcontainer container f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4. 
Apr 30 03:29:51.025775 systemd-networkd[1398]: cali73d675b91e8: Link UP Apr 30 03:29:51.026692 systemd-networkd[1398]: cali73d675b91e8: Gained carrier Apr 30 03:29:51.037941 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:29:51.046818 systemd[1]: run-netns-cni\x2db37f3fac\x2d9921\x2d6e49\x2d0fbc\x2dc9e9aee2d0e7.mount: Deactivated successfully. Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.877 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0 calico-apiserver-86f865c57f- calico-apiserver 3151b42b-2e50-48c5-ab72-09ea525d3e59 985 0 2025-04-30 03:29:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f865c57f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86f865c57f-s7xvb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73d675b91e8 [] []}} ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.877 [INFO][4698] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.948 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" HandleID="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.960 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" HandleID="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f79a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86f865c57f-s7xvb", "timestamp":"2025-04-30 03:29:50.948143843 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.961 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.961 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.961 [INFO][4718] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.963 [INFO][4718] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.971 [INFO][4718] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.980 [INFO][4718] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.983 [INFO][4718] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.987 [INFO][4718] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.987 [INFO][4718] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.990 [INFO][4718] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:50.997 [INFO][4718] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:51.007 [INFO][4718] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:51.007 [INFO][4718] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" host="localhost" Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:51.007 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:51.053494 containerd[1458]: 2025-04-30 03:29:51.007 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" HandleID="k8s-pod-network.f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.055541 containerd[1458]: 2025-04-30 03:29:51.020 [INFO][4698] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3151b42b-2e50-48c5-ab72-09ea525d3e59", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86f865c57f-s7xvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73d675b91e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:51.055541 containerd[1458]: 2025-04-30 03:29:51.020 [INFO][4698] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.055541 containerd[1458]: 2025-04-30 03:29:51.020 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73d675b91e8 ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.055541 containerd[1458]: 2025-04-30 03:29:51.029 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.055541 containerd[1458]: 2025-04-30 03:29:51.029 [INFO][4698] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3151b42b-2e50-48c5-ab72-09ea525d3e59", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a", Pod:"calico-apiserver-86f865c57f-s7xvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73d675b91e8", MAC:"96:9e:5a:3c:c8:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:51.055541 containerd[1458]: 2025-04-30 03:29:51.042 [INFO][4698] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a" Namespace="calico-apiserver" Pod="calico-apiserver-86f865c57f-s7xvb" WorkloadEndpoint="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:51.081725 containerd[1458]: time="2025-04-30T03:29:51.081140528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x4ngb,Uid:6404c18c-30e9-4c84-a61e-d9e404ad3990,Namespace:kube-system,Attempt:1,} returns sandbox id \"f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4\"" Apr 30 03:29:51.082570 kubelet[2580]: E0430 03:29:51.082520 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:51.091811 containerd[1458]: time="2025-04-30T03:29:51.091728710Z" level=info msg="CreateContainer within sandbox \"f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:29:51.101843 containerd[1458]: time="2025-04-30T03:29:51.099471027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:51.101843 containerd[1458]: time="2025-04-30T03:29:51.099550388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:51.101843 containerd[1458]: time="2025-04-30T03:29:51.099571758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:51.112218 containerd[1458]: time="2025-04-30T03:29:51.108479328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:51.118943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521999602.mount: Deactivated successfully. Apr 30 03:29:51.134521 containerd[1458]: time="2025-04-30T03:29:51.134452505Z" level=info msg="CreateContainer within sandbox \"f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0b4b97994df92363f9c32def15e2c3ba765e560abb039783896a74707c9c8fd\"" Apr 30 03:29:51.136994 containerd[1458]: time="2025-04-30T03:29:51.135832887Z" level=info msg="StartContainer for \"b0b4b97994df92363f9c32def15e2c3ba765e560abb039783896a74707c9c8fd\"" Apr 30 03:29:51.150429 systemd[1]: Started cri-containerd-f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a.scope - libcontainer container f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a. Apr 30 03:29:51.169573 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:29:51.200135 systemd[1]: Started cri-containerd-b0b4b97994df92363f9c32def15e2c3ba765e560abb039783896a74707c9c8fd.scope - libcontainer container b0b4b97994df92363f9c32def15e2c3ba765e560abb039783896a74707c9c8fd. Apr 30 03:29:51.224186 containerd[1458]: time="2025-04-30T03:29:51.224133324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f865c57f-s7xvb,Uid:3151b42b-2e50-48c5-ab72-09ea525d3e59,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a\"" Apr 30 03:29:51.243766 containerd[1458]: time="2025-04-30T03:29:51.243586996Z" level=info msg="StartContainer for \"b0b4b97994df92363f9c32def15e2c3ba765e560abb039783896a74707c9c8fd\" returns successfully" Apr 30 03:29:51.653113 kubelet[2580]: E0430 03:29:51.653042 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:51.653247 kubelet[2580]: E0430 03:29:51.653190 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:51.785637 kubelet[2580]: I0430 03:29:51.785380 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x4ngb" podStartSLOduration=45.785361211 podStartE2EDuration="45.785361211s" podCreationTimestamp="2025-04-30 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:29:51.738151857 +0000 UTC m=+59.351559471" watchObservedRunningTime="2025-04-30 03:29:51.785361211 +0000 UTC m=+59.398768835" Apr 30 03:29:52.004317 systemd-networkd[1398]: cali5321a1f6a6d: Gained IPv6LL Apr 30 03:29:52.198461 systemd-networkd[1398]: cali73d675b91e8: Gained IPv6LL Apr 30 03:29:52.459878 containerd[1458]: time="2025-04-30T03:29:52.459813412Z" level=info msg="StopPodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\"" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.501 [WARNING][4900] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6404c18c-30e9-4c84-a61e-d9e404ad3990", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4", Pod:"coredns-7db6d8ff4d-x4ngb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4a67e2a99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.502 [INFO][4900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.502 [INFO][4900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" iface="eth0" netns="" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.502 [INFO][4900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.502 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.532 [INFO][4911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.533 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.533 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.541 [WARNING][4911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.541 [INFO][4911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.542 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:52.547993 containerd[1458]: 2025-04-30 03:29:52.545 [INFO][4900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.547993 containerd[1458]: time="2025-04-30T03:29:52.547983111Z" level=info msg="TearDown network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" successfully" Apr 30 03:29:52.547993 containerd[1458]: time="2025-04-30T03:29:52.548009261Z" level=info msg="StopPodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" returns successfully" Apr 30 03:29:52.557495 containerd[1458]: time="2025-04-30T03:29:52.557441256Z" level=info msg="RemovePodSandbox for \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\"" Apr 30 03:29:52.559680 containerd[1458]: time="2025-04-30T03:29:52.559651053Z" level=info msg="Forcibly stopping sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\"" Apr 30 03:29:52.644174 systemd-networkd[1398]: cali2b4a67e2a99: Gained IPv6LL Apr 30 03:29:52.657000 kubelet[2580]: E0430 03:29:52.655301 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.616 [WARNING][4933] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6404c18c-30e9-4c84-a61e-d9e404ad3990", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f61726862b21648eba7dccff66b71f4243a8a5cc4a66a64d1c92e5f1f0f7ddb4", Pod:"coredns-7db6d8ff4d-x4ngb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4a67e2a99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.616 [INFO][4933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.616 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" iface="eth0" netns="" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.616 [INFO][4933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.616 [INFO][4933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.649 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.649 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.649 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.656 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.657 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" HandleID="k8s-pod-network.6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Workload="localhost-k8s-coredns--7db6d8ff4d--x4ngb-eth0" Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.658 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:52.666083 containerd[1458]: 2025-04-30 03:29:52.663 [INFO][4933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f" Apr 30 03:29:52.666640 containerd[1458]: time="2025-04-30T03:29:52.666102497Z" level=info msg="TearDown network for sandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" successfully" Apr 30 03:29:52.790911 containerd[1458]: time="2025-04-30T03:29:52.790733101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:52.790911 containerd[1458]: time="2025-04-30T03:29:52.790823373Z" level=info msg="RemovePodSandbox \"6d2df9d8108d75aec467114c75579e1ebb3dee5f90285409b1718e4b152b8a9f\" returns successfully" Apr 30 03:29:52.791461 containerd[1458]: time="2025-04-30T03:29:52.791430285Z" level=info msg="StopPodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\"" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.827 [WARNING][4969] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3151b42b-2e50-48c5-ab72-09ea525d3e59", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a", Pod:"calico-apiserver-86f865c57f-s7xvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73d675b91e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.827 [INFO][4969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.827 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" iface="eth0" netns="" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.827 [INFO][4969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.827 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.850 [INFO][4978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.850 [INFO][4978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.850 [INFO][4978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.855 [WARNING][4978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.855 [INFO][4978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.857 [INFO][4978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:52.861955 containerd[1458]: 2025-04-30 03:29:52.859 [INFO][4969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.862625 containerd[1458]: time="2025-04-30T03:29:52.861997231Z" level=info msg="TearDown network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" successfully" Apr 30 03:29:52.862625 containerd[1458]: time="2025-04-30T03:29:52.862025293Z" level=info msg="StopPodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" returns successfully" Apr 30 03:29:52.862625 containerd[1458]: time="2025-04-30T03:29:52.862545833Z" level=info msg="RemovePodSandbox for \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\"" Apr 30 03:29:52.862625 containerd[1458]: time="2025-04-30T03:29:52.862587692Z" level=info msg="Forcibly stopping sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\"" Apr 30 03:29:52.912569 containerd[1458]: time="2025-04-30T03:29:52.912501037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:52.959949 containerd[1458]: time="2025-04-30T03:29:52.959822330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.921 [WARNING][5000] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3151b42b-2e50-48c5-ab72-09ea525d3e59", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a", Pod:"calico-apiserver-86f865c57f-s7xvb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73d675b91e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.922 [INFO][5000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.922 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" iface="eth0" netns="" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.922 [INFO][5000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.922 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.958 [INFO][5009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.958 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.958 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.964 [WARNING][5009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.964 [INFO][5009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" HandleID="k8s-pod-network.201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Workload="localhost-k8s-calico--apiserver--86f865c57f--s7xvb-eth0" Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.966 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:52.971144 containerd[1458]: 2025-04-30 03:29:52.968 [INFO][5000] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6" Apr 30 03:29:52.971707 containerd[1458]: time="2025-04-30T03:29:52.971186555Z" level=info msg="TearDown network for sandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" successfully" Apr 30 03:29:52.973095 containerd[1458]: time="2025-04-30T03:29:52.973038322Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:53.014175 containerd[1458]: time="2025-04-30T03:29:53.014118028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:53.014326 containerd[1458]: time="2025-04-30T03:29:53.014197609Z" level=info msg="RemovePodSandbox \"201a5403974c5c4996efa83462a0963a390404c7caaa9dc207d41cf3dda9fab6\" returns successfully" Apr 30 03:29:53.014703 containerd[1458]: time="2025-04-30T03:29:53.014675336Z" level=info msg="StopPodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\"" Apr 30 03:29:53.032925 containerd[1458]: time="2025-04-30T03:29:53.032819518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:53.033934 containerd[1458]: time="2025-04-30T03:29:53.033825088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.193825065s" Apr 30 03:29:53.033934 containerd[1458]: time="2025-04-30T03:29:53.033887626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:53.035035 containerd[1458]: time="2025-04-30T03:29:53.034806682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 03:29:53.036609 containerd[1458]: time="2025-04-30T03:29:53.036464260Z" level=info msg="CreateContainer within sandbox \"ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 
30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.052 [WARNING][5031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nwhp2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2df212b-6d14-4f4c-afa3-02f09ab15590", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3", Pod:"csi-node-driver-nwhp2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5321a1f6a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.053 [INFO][5031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.053 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" iface="eth0" netns="" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.053 [INFO][5031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.053 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.076 [INFO][5041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.076 [INFO][5041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.076 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.098 [WARNING][5041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.098 [INFO][5041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.100 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:53.106319 containerd[1458]: 2025-04-30 03:29:53.103 [INFO][5031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.106788 containerd[1458]: time="2025-04-30T03:29:53.106372056Z" level=info msg="TearDown network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" successfully" Apr 30 03:29:53.106788 containerd[1458]: time="2025-04-30T03:29:53.106414216Z" level=info msg="StopPodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" returns successfully" Apr 30 03:29:53.107357 containerd[1458]: time="2025-04-30T03:29:53.107332149Z" level=info msg="RemovePodSandbox for \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\"" Apr 30 03:29:53.107419 containerd[1458]: time="2025-04-30T03:29:53.107361896Z" level=info msg="Forcibly stopping sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\"" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.148 [WARNING][5067] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nwhp2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2df212b-6d14-4f4c-afa3-02f09ab15590", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3", Pod:"csi-node-driver-nwhp2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5321a1f6a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.148 [INFO][5067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.148 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" iface="eth0" netns="" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.148 [INFO][5067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.148 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.169 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.169 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.169 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.176 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.176 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" HandleID="k8s-pod-network.5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Workload="localhost-k8s-csi--node--driver--nwhp2-eth0" Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.177 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:53.183973 containerd[1458]: 2025-04-30 03:29:53.180 [INFO][5067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b" Apr 30 03:29:53.183973 containerd[1458]: time="2025-04-30T03:29:53.182583795Z" level=info msg="TearDown network for sandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" successfully" Apr 30 03:29:53.205694 containerd[1458]: time="2025-04-30T03:29:53.205630888Z" level=info msg="CreateContainer within sandbox \"ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7976c803405a4ece8c3fe6f1e6286e99dd8e2451e2f6e88ab12bf2b44f218bdb\"" Apr 30 03:29:53.206345 containerd[1458]: time="2025-04-30T03:29:53.206266405Z" level=info msg="StartContainer for \"7976c803405a4ece8c3fe6f1e6286e99dd8e2451e2f6e88ab12bf2b44f218bdb\"" Apr 30 03:29:53.233227 containerd[1458]: time="2025-04-30T03:29:53.233135848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:53.233352 containerd[1458]: time="2025-04-30T03:29:53.233256047Z" level=info msg="RemovePodSandbox \"5acd1dff6d7d47f56bdc3165394251605ae5e8f60b1ecb2dace7f468efb7381b\" returns successfully" Apr 30 03:29:53.234449 containerd[1458]: time="2025-04-30T03:29:53.234114747Z" level=info msg="StopPodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\"" Apr 30 03:29:53.239155 systemd[1]: Started cri-containerd-7976c803405a4ece8c3fe6f1e6286e99dd8e2451e2f6e88ab12bf2b44f218bdb.scope - libcontainer container 7976c803405a4ece8c3fe6f1e6286e99dd8e2451e2f6e88ab12bf2b44f218bdb. Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.281 [WARNING][5118] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"87791251-4897-454d-aa64-599ddb0cfbb3", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007", Pod:"coredns-7db6d8ff4d-w9lq9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ffce72975a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.281 [INFO][5118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.281 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" iface="eth0" netns="" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.281 [INFO][5118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.281 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.305 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.305 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.305 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.362 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.362 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.365 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:53.373777 containerd[1458]: 2025-04-30 03:29:53.370 [INFO][5118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.373777 containerd[1458]: time="2025-04-30T03:29:53.373521503Z" level=info msg="TearDown network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" successfully" Apr 30 03:29:53.373777 containerd[1458]: time="2025-04-30T03:29:53.373565988Z" level=info msg="StopPodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" returns successfully" Apr 30 03:29:53.375919 containerd[1458]: time="2025-04-30T03:29:53.374191545Z" level=info msg="RemovePodSandbox for \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\"" Apr 30 03:29:53.375919 containerd[1458]: time="2025-04-30T03:29:53.374236741Z" level=info msg="Forcibly stopping sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\"" Apr 30 03:29:53.414735 containerd[1458]: time="2025-04-30T03:29:53.414657783Z" level=info msg="StartContainer for \"7976c803405a4ece8c3fe6f1e6286e99dd8e2451e2f6e88ab12bf2b44f218bdb\" returns successfully" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.455 [WARNING][5167] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"87791251-4897-454d-aa64-599ddb0cfbb3", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b156d811148859176234cbbcb1757a6d74d6abda42126ca7ee46a11ef0e94007", Pod:"coredns-7db6d8ff4d-w9lq9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ffce72975a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.455 [INFO][5167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.455 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" iface="eth0" netns="" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.455 [INFO][5167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.455 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.479 [INFO][5176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.479 [INFO][5176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.479 [INFO][5176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.537 [WARNING][5176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.537 [INFO][5176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" HandleID="k8s-pod-network.1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Workload="localhost-k8s-coredns--7db6d8ff4d--w9lq9-eth0" Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.539 [INFO][5176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:53.546189 containerd[1458]: 2025-04-30 03:29:53.543 [INFO][5167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8" Apr 30 03:29:53.547517 containerd[1458]: time="2025-04-30T03:29:53.547446979Z" level=info msg="TearDown network for sandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" successfully" Apr 30 03:29:53.662122 kubelet[2580]: E0430 03:29:53.661972 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:29:53.867086 containerd[1458]: time="2025-04-30T03:29:53.867004666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:53.867086 containerd[1458]: time="2025-04-30T03:29:53.867095107Z" level=info msg="RemovePodSandbox \"1655ba825b18b1d78ca5be95109cc94f344d1af87e8e7f14ded767aa2604b5a8\" returns successfully" Apr 30 03:29:53.867805 containerd[1458]: time="2025-04-30T03:29:53.867756773Z" level=info msg="StopPodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\"" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:53.990 [WARNING][5199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ebed35a-bf55-4abf-96db-dbfd8e36485d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7", Pod:"calico-apiserver-86f865c57f-769hr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7883731be0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:53.991 [INFO][5199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:53.991 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" iface="eth0" netns="" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:53.991 [INFO][5199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:53.991 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.015 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.015 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.015 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.044 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.045 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.046 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:54.051882 containerd[1458]: 2025-04-30 03:29:54.049 [INFO][5199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.051882 containerd[1458]: time="2025-04-30T03:29:54.051861409Z" level=info msg="TearDown network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" successfully" Apr 30 03:29:54.052335 containerd[1458]: time="2025-04-30T03:29:54.051888780Z" level=info msg="StopPodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" returns successfully" Apr 30 03:29:54.053066 containerd[1458]: time="2025-04-30T03:29:54.052561027Z" level=info msg="RemovePodSandbox for \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\"" Apr 30 03:29:54.053066 containerd[1458]: time="2025-04-30T03:29:54.052604239Z" level=info msg="Forcibly stopping sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\"" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.377 [WARNING][5232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0", GenerateName:"calico-apiserver-86f865c57f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6ebed35a-bf55-4abf-96db-dbfd8e36485d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f865c57f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca4a3871e0ae1edf34e4fddab458724d592d7891057c3e6c41729b2d838750f7", Pod:"calico-apiserver-86f865c57f-769hr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7883731be0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.377 [INFO][5232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.377 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" iface="eth0" netns="" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.377 [INFO][5232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.377 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.398 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.398 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.399 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.513 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.513 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" HandleID="k8s-pod-network.9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Workload="localhost-k8s-calico--apiserver--86f865c57f--769hr-eth0" Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.515 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:54.520173 containerd[1458]: 2025-04-30 03:29:54.517 [INFO][5232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306" Apr 30 03:29:54.520784 containerd[1458]: time="2025-04-30T03:29:54.520238821Z" level=info msg="TearDown network for sandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" successfully" Apr 30 03:29:54.544421 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:53168.service - OpenSSH per-connection server daemon (10.0.0.1:53168). Apr 30 03:29:54.668768 kubelet[2580]: I0430 03:29:54.668724 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:55.402845 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 53168 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo Apr 30 03:29:55.405694 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:29:55.417068 systemd-logind[1443]: New session 16 of user core. Apr 30 03:29:55.427334 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 03:29:55.443562 containerd[1458]: time="2025-04-30T03:29:55.443333230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 03:29:55.443562 containerd[1458]: time="2025-04-30T03:29:55.443454551Z" level=info msg="RemovePodSandbox \"9dcd1aaf0c5b3309e1aef8bacb87b423372569c64c766d45ddb38aa676405306\" returns successfully" Apr 30 03:29:55.883998 sshd[5248]: pam_unix(sshd:session): session closed for user core Apr 30 03:29:55.890361 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:53168.service: Deactivated successfully. Apr 30 03:29:55.893436 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:29:55.894326 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:29:55.895714 systemd-logind[1443]: Removed session 16. 
Apr 30 03:29:56.972597 containerd[1458]: time="2025-04-30T03:29:56.972480658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:56.974080 containerd[1458]: time="2025-04-30T03:29:56.974029519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 03:29:56.975906 containerd[1458]: time="2025-04-30T03:29:56.975818877Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:56.978800 containerd[1458]: time="2025-04-30T03:29:56.978735225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:56.979750 containerd[1458]: time="2025-04-30T03:29:56.979690819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.944847288s" Apr 30 03:29:56.979750 containerd[1458]: time="2025-04-30T03:29:56.979739382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 03:29:56.981158 containerd[1458]: time="2025-04-30T03:29:56.981103522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 03:29:56.983210 containerd[1458]: time="2025-04-30T03:29:56.983173714Z" level=info msg="CreateContainer within sandbox \"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 03:29:57.024786 containerd[1458]: time="2025-04-30T03:29:57.024718546Z" level=info msg="CreateContainer within sandbox \"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"07e2567013a5587e9a550f32fd73d2a54eab8f900806fdfbad3833c3ac6360c3\"" Apr 30 03:29:57.025436 containerd[1458]: time="2025-04-30T03:29:57.025385513Z" level=info msg="StartContainer for \"07e2567013a5587e9a550f32fd73d2a54eab8f900806fdfbad3833c3ac6360c3\"" Apr 30 03:29:57.072197 systemd[1]: Started cri-containerd-07e2567013a5587e9a550f32fd73d2a54eab8f900806fdfbad3833c3ac6360c3.scope - libcontainer container 07e2567013a5587e9a550f32fd73d2a54eab8f900806fdfbad3833c3ac6360c3. 
Apr 30 03:29:57.148611 containerd[1458]: time="2025-04-30T03:29:57.148515546Z" level=info msg="StartContainer for \"07e2567013a5587e9a550f32fd73d2a54eab8f900806fdfbad3833c3ac6360c3\" returns successfully" Apr 30 03:29:57.430779 containerd[1458]: time="2025-04-30T03:29:57.430677679Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:29:57.432069 containerd[1458]: time="2025-04-30T03:29:57.431952540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 03:29:57.434510 containerd[1458]: time="2025-04-30T03:29:57.434443820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 453.283501ms" Apr 30 03:29:57.434510 containerd[1458]: time="2025-04-30T03:29:57.434499326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 03:29:57.435994 containerd[1458]: time="2025-04-30T03:29:57.435632548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 03:29:57.437144 containerd[1458]: time="2025-04-30T03:29:57.437108200Z" level=info msg="CreateContainer within sandbox \"f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 03:29:57.455308 containerd[1458]: time="2025-04-30T03:29:57.455224671Z" level=info msg="CreateContainer within sandbox \"f0fd632e0dc5363a3c180b074548a3c0b1e54b052714b0e2219728a179f4040a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea0e643045d40d5e2b493786503a2778003bb7950135776b3e0363b12ff70c2c\"" Apr 30 03:29:57.455948 containerd[1458]: time="2025-04-30T03:29:57.455887200Z" level=info msg="StartContainer for \"ea0e643045d40d5e2b493786503a2778003bb7950135776b3e0363b12ff70c2c\"" Apr 30 03:29:57.493240 systemd[1]: Started cri-containerd-ea0e643045d40d5e2b493786503a2778003bb7950135776b3e0363b12ff70c2c.scope - libcontainer container ea0e643045d40d5e2b493786503a2778003bb7950135776b3e0363b12ff70c2c. 
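The two PullImage results above differ sharply: calico/csi took roughly 3.94 s, while the second calico/apiserver pull finished in about 453 ms with only 77 bytes read, which is consistent with the image content already being present locally (note the ImageUpdate rather than ImageCreate event). A minimal sketch of timing a pull with the containerd Go client, assuming the default socket path and the k8s.io namespace the kubelet uses:

package main

import (
	"context"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's images and pods live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.3"

	start := time.Now()
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	log.Printf("Pulled %q (size %d) in %s", ref, size, time.Since(start))
}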
Apr 30 03:29:57.540299 containerd[1458]: time="2025-04-30T03:29:57.540220902Z" level=info msg="StartContainer for \"ea0e643045d40d5e2b493786503a2778003bb7950135776b3e0363b12ff70c2c\" returns successfully" Apr 30 03:29:57.791843 kubelet[2580]: I0430 03:29:57.791611 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f865c57f-769hr" podStartSLOduration=34.595621256 podStartE2EDuration="39.79159424s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="2025-04-30 03:29:47.838733277 +0000 UTC m=+55.452140891" lastFinishedPulling="2025-04-30 03:29:53.034706261 +0000 UTC m=+60.648113875" observedRunningTime="2025-04-30 03:29:53.835428863 +0000 UTC m=+61.448836477" watchObservedRunningTime="2025-04-30 03:29:57.79159424 +0000 UTC m=+65.405001854" Apr 30 03:29:58.467893 containerd[1458]: time="2025-04-30T03:29:58.467840546Z" level=info msg="StopPodSandbox for \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\"" Apr 30 03:29:58.682001 kubelet[2580]: I0430 03:29:58.681951 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:58.966467 kubelet[2580]: I0430 03:29:58.966357 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f865c57f-s7xvb" podStartSLOduration=34.758040497 podStartE2EDuration="40.966336706s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="2025-04-30 03:29:51.227170252 +0000 UTC m=+58.840577867" lastFinishedPulling="2025-04-30 03:29:57.435466462 +0000 UTC m=+65.048874076" observedRunningTime="2025-04-30 03:29:57.791402005 +0000 UTC m=+65.404809619" watchObservedRunningTime="2025-04-30 03:29:58.966336706 +0000 UTC m=+66.579744320" Apr 30 03:29:59.161509 kubelet[2580]: I0430 03:29:59.161440 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.966 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.966 [INFO][5366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" iface="eth0" netns="/var/run/netns/cni-f0c48bbf-ad02-49de-5413-71f0671cc8f2" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.967 [INFO][5366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" iface="eth0" netns="/var/run/netns/cni-f0c48bbf-ad02-49de-5413-71f0671cc8f2" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.968 [INFO][5366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" iface="eth0" netns="/var/run/netns/cni-f0c48bbf-ad02-49de-5413-71f0671cc8f2" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.968 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.968 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.991 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" HandleID="k8s-pod-network.5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Workload="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.991 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:58.992 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:59.181 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" HandleID="k8s-pod-network.5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Workload="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:59.181 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" HandleID="k8s-pod-network.5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Workload="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:59.182 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 03:29:59.188038 containerd[1458]: 2025-04-30 03:29:59.185 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857" Apr 30 03:29:59.188515 containerd[1458]: time="2025-04-30T03:29:59.188263812Z" level=info msg="TearDown network for sandbox \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\" successfully" Apr 30 03:29:59.188515 containerd[1458]: time="2025-04-30T03:29:59.188291464Z" level=info msg="StopPodSandbox for \"5d6e973eeb9b4a7e5f082edf51ef7a2ba64c1df4f6c45be96e5b7d97521e9857\" returns successfully" Apr 30 03:29:59.188960 containerd[1458]: time="2025-04-30T03:29:59.188936839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd49c77ff-998x6,Uid:7124ff7f-f649-4e24-b218-0ed2909fc6b0,Namespace:calico-system,Attempt:1,}" Apr 30 03:29:59.191753 systemd[1]: run-netns-cni\x2df0c48bbf\x2dad02\x2d49de\x2d5413\x2d71f0671cc8f2.mount: Deactivated successfully. 
Apr 30 03:29:59.894337 systemd-networkd[1398]: calicab07534847: Link UP Apr 30 03:29:59.896059 systemd-networkd[1398]: calicab07534847: Gained carrier Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.800 [INFO][5385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0 calico-kube-controllers-dd49c77ff- calico-system 7124ff7f-f649-4e24-b218-0ed2909fc6b0 1065 0 2025-04-30 03:29:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dd49c77ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dd49c77ff-998x6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicab07534847 [] []}} ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.800 [INFO][5385] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.839 [INFO][5399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" HandleID="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Workload="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.848 [INFO][5399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" HandleID="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Workload="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000252760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dd49c77ff-998x6", "timestamp":"2025-04-30 03:29:59.839123989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.848 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.848 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.849 [INFO][5399] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.851 [INFO][5399] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.856 [INFO][5399] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.861 [INFO][5399] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.866 [INFO][5399] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.868 [INFO][5399] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.868 [INFO][5399] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.870 [INFO][5399] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8 Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.874 [INFO][5399] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.885 [INFO][5399] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.885 [INFO][5399] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" host="localhost" Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.885 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 03:29:59.915083 containerd[1458]: 2025-04-30 03:29:59.885 [INFO][5399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" HandleID="k8s-pod-network.65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Workload="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.916136 containerd[1458]: 2025-04-30 03:29:59.889 [INFO][5385] cni-plugin/k8s.go 386: Populated endpoint ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0", GenerateName:"calico-kube-controllers-dd49c77ff-", Namespace:"calico-system", SelfLink:"", UID:"7124ff7f-f649-4e24-b218-0ed2909fc6b0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd49c77ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dd49c77ff-998x6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicab07534847", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:59.916136 containerd[1458]: 2025-04-30 03:29:59.890 [INFO][5385] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.916136 containerd[1458]: 2025-04-30 03:29:59.890 [INFO][5385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicab07534847 ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.916136 containerd[1458]: 2025-04-30 03:29:59.893 [INFO][5385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.916136 containerd[1458]: 2025-04-30 03:29:59.895 [INFO][5385] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0", GenerateName:"calico-kube-controllers-dd49c77ff-", Namespace:"calico-system", SelfLink:"", UID:"7124ff7f-f649-4e24-b218-0ed2909fc6b0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 3, 29, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dd49c77ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8", Pod:"calico-kube-controllers-dd49c77ff-998x6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicab07534847", MAC:"fa:b8:0e:34:e6:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 03:29:59.916136 containerd[1458]: 2025-04-30 03:29:59.909 [INFO][5385] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8" Namespace="calico-system" Pod="calico-kube-controllers-dd49c77ff-998x6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dd49c77ff--998x6-eth0" Apr 30 03:29:59.942217 containerd[1458]: time="2025-04-30T03:29:59.942052595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:29:59.942217 containerd[1458]: time="2025-04-30T03:29:59.942151202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:29:59.942217 containerd[1458]: time="2025-04-30T03:29:59.942166722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:59.942466 containerd[1458]: time="2025-04-30T03:29:59.942287831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:29:59.976683 systemd[1]: Started cri-containerd-65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8.scope - libcontainer container 65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8. 
Apr 30 03:30:00.023435 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:30:00.054054 containerd[1458]: time="2025-04-30T03:30:00.054012914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dd49c77ff-998x6,Uid:7124ff7f-f649-4e24-b218-0ed2909fc6b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8\"" Apr 30 03:30:00.285294 containerd[1458]: time="2025-04-30T03:30:00.285145327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:00.286119 containerd[1458]: time="2025-04-30T03:30:00.286048562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 03:30:00.287538 containerd[1458]: time="2025-04-30T03:30:00.287500478Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:00.289927 containerd[1458]: time="2025-04-30T03:30:00.289840030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:00.290438 containerd[1458]: time="2025-04-30T03:30:00.290378683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.854707893s" Apr 30 03:30:00.290438 containerd[1458]: time="2025-04-30T03:30:00.290419681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 03:30:00.291842 containerd[1458]: time="2025-04-30T03:30:00.291506654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 03:30:00.293277 containerd[1458]: time="2025-04-30T03:30:00.293216802Z" level=info msg="CreateContainer within sandbox \"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 03:30:00.312305 containerd[1458]: time="2025-04-30T03:30:00.312246443Z" level=info msg="CreateContainer within sandbox \"6b83cf863220aff42894c35c82e579ae0f6749b447473b2a9e05c96cec81f0f3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"43d5b5f95087458926d427547407b4e34b9d2cf10e7ce6552021dd5a8537ec35\"" Apr 30 03:30:00.312994 containerd[1458]: time="2025-04-30T03:30:00.312950649Z" level=info msg="StartContainer for \"43d5b5f95087458926d427547407b4e34b9d2cf10e7ce6552021dd5a8537ec35\"" Apr 30 03:30:00.352196 systemd[1]: Started cri-containerd-43d5b5f95087458926d427547407b4e34b9d2cf10e7ce6552021dd5a8537ec35.scope - libcontainer container 43d5b5f95087458926d427547407b4e34b9d2cf10e7ce6552021dd5a8537ec35. 
Apr 30 03:30:00.388675 containerd[1458]: time="2025-04-30T03:30:00.388626082Z" level=info msg="StartContainer for \"43d5b5f95087458926d427547407b4e34b9d2cf10e7ce6552021dd5a8537ec35\" returns successfully"
Apr 30 03:30:00.567098 kubelet[2580]: I0430 03:30:00.567048 2580 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 30 03:30:00.567098 kubelet[2580]: I0430 03:30:00.567084 2580 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 30 03:30:00.701253 kubelet[2580]: I0430 03:30:00.701141 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nwhp2" podStartSLOduration=33.254205793 podStartE2EDuration="42.701088863s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="2025-04-30 03:29:50.844481184 +0000 UTC m=+58.457888798" lastFinishedPulling="2025-04-30 03:30:00.291364254 +0000 UTC m=+67.904771868" observedRunningTime="2025-04-30 03:30:00.700572453 +0000 UTC m=+68.313980067" watchObservedRunningTime="2025-04-30 03:30:00.701088863 +0000 UTC m=+68.314496477"
Apr 30 03:30:00.898717 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:44156.service - OpenSSH per-connection server daemon (10.0.0.1:44156).
Apr 30 03:30:00.980436 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 44156 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:00.982777 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:00.990217 systemd-logind[1443]: New session 17 of user core.
Apr 30 03:30:00.997120 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:30:01.158069 kubelet[2580]: E0430 03:30:01.157945 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:30:01.171682 sshd[5506]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:01.177454 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:44156.service: Deactivated successfully.
Apr 30 03:30:01.182688 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:30:01.184006 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:30:01.185127 systemd-logind[1443]: Removed session 17.
Apr 30 03:30:01.604228 systemd-networkd[1398]: calicab07534847: Gained IPv6LL
Apr 30 03:30:04.605211 containerd[1458]: time="2025-04-30T03:30:04.604295241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:04.732521 containerd[1458]: time="2025-04-30T03:30:04.732412253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
Apr 30 03:30:04.880594 containerd[1458]: time="2025-04-30T03:30:04.880426096Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:04.953919 containerd[1458]: time="2025-04-30T03:30:04.953823170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:04.955254 containerd[1458]: time="2025-04-30T03:30:04.955165228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.663607927s"
Apr 30 03:30:04.955254 containerd[1458]: time="2025-04-30T03:30:04.955236403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
Apr 30 03:30:04.965734 containerd[1458]: time="2025-04-30T03:30:04.965673466Z" level=info msg="CreateContainer within sandbox \"65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 30 03:30:05.075318 kubelet[2580]: I0430 03:30:05.075240 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 30 03:30:06.188413 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:44160.service - OpenSSH per-connection server daemon (10.0.0.1:44160).
Apr 30 03:30:06.332464 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 44160 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:06.334723 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:06.339490 systemd-logind[1443]: New session 18 of user core.
Apr 30 03:30:06.351173 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:30:06.354599 containerd[1458]: time="2025-04-30T03:30:06.354547887Z" level=info msg="CreateContainer within sandbox \"65a2124a38ef718cea558be8f30c72d4d0c0b78bad005238db3335adae3f0dc8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f91c4706650a31b2edc6892f5a7b3b225415b5fc8d9154ff659fb1772d2c8cf7\""
Apr 30 03:30:06.355619 containerd[1458]: time="2025-04-30T03:30:06.355559357Z" level=info msg="StartContainer for \"f91c4706650a31b2edc6892f5a7b3b225415b5fc8d9154ff659fb1772d2c8cf7\""
Apr 30 03:30:06.428118 systemd[1]: Started cri-containerd-f91c4706650a31b2edc6892f5a7b3b225415b5fc8d9154ff659fb1772d2c8cf7.scope - libcontainer container f91c4706650a31b2edc6892f5a7b3b225415b5fc8d9154ff659fb1772d2c8cf7.
Apr 30 03:30:06.707322 containerd[1458]: time="2025-04-30T03:30:06.707233822Z" level=info msg="StartContainer for \"f91c4706650a31b2edc6892f5a7b3b225415b5fc8d9154ff659fb1772d2c8cf7\" returns successfully"
Apr 30 03:30:06.716204 sshd[5555]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:06.721468 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:44160.service: Deactivated successfully.
Apr 30 03:30:06.725672 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:30:06.727002 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:30:06.732209 systemd-logind[1443]: Removed session 18.
Apr 30 03:30:06.949553 kubelet[2580]: I0430 03:30:06.949384 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dd49c77ff-998x6" podStartSLOduration=44.048661948 podStartE2EDuration="48.949364869s" podCreationTimestamp="2025-04-30 03:29:18 +0000 UTC" firstStartedPulling="2025-04-30 03:30:00.055507422 +0000 UTC m=+67.668915036" lastFinishedPulling="2025-04-30 03:30:04.956210343 +0000 UTC m=+72.569617957" observedRunningTime="2025-04-30 03:30:06.949004355 +0000 UTC m=+74.562411969" watchObservedRunningTime="2025-04-30 03:30:06.949364869 +0000 UTC m=+74.562772483"
Apr 30 03:30:11.734707 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:51920.service - OpenSSH per-connection server daemon (10.0.0.1:51920).
Apr 30 03:30:11.778534 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 51920 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:11.780579 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:11.785420 systemd-logind[1443]: New session 19 of user core.
Apr 30 03:30:11.793257 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:30:12.618875 sshd[5655]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:12.624386 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:51920.service: Deactivated successfully.
Apr 30 03:30:12.627297 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:30:12.628324 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:30:12.629492 systemd-logind[1443]: Removed session 19.
Apr 30 03:30:13.467562 kubelet[2580]: E0430 03:30:13.467492 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:30:17.643307 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:37944.service - OpenSSH per-connection server daemon (10.0.0.1:37944).
Apr 30 03:30:17.681156 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 37944 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:17.683454 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:17.687790 systemd-logind[1443]: New session 20 of user core.
Apr 30 03:30:17.696125 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:30:17.820730 sshd[5670]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:17.836976 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:37944.service: Deactivated successfully.
Apr 30 03:30:17.839848 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:30:17.842541 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:30:17.851310 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:37960.service - OpenSSH per-connection server daemon (10.0.0.1:37960).
Apr 30 03:30:17.852538 systemd-logind[1443]: Removed session 20.
Apr 30 03:30:17.887855 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 37960 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:17.890051 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:17.895360 systemd-logind[1443]: New session 21 of user core.
Apr 30 03:30:17.909182 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:30:18.395642 sshd[5684]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:18.405419 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:37960.service: Deactivated successfully.
Apr 30 03:30:18.407708 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:30:18.409404 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:30:18.414912 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:37972.service - OpenSSH per-connection server daemon (10.0.0.1:37972).
Apr 30 03:30:18.416145 systemd-logind[1443]: Removed session 21.
Apr 30 03:30:18.460857 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 37972 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:18.463166 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:18.468405 systemd-logind[1443]: New session 22 of user core.
Apr 30 03:30:18.475068 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:30:21.548472 sshd[5697]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:21.559000 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:37972.service: Deactivated successfully.
Apr 30 03:30:21.562061 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:30:21.562926 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:30:21.572756 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:37988.service - OpenSSH per-connection server daemon (10.0.0.1:37988).
Apr 30 03:30:21.574250 systemd-logind[1443]: Removed session 22.
Apr 30 03:30:21.631741 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 37988 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:21.635601 sshd[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:21.646576 systemd-logind[1443]: New session 23 of user core.
Apr 30 03:30:21.650958 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:30:21.956460 sshd[5721]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:21.966542 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:37988.service: Deactivated successfully.
Apr 30 03:30:21.968827 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:30:21.970978 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:30:21.982596 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:37998.service - OpenSSH per-connection server daemon (10.0.0.1:37998).
Apr 30 03:30:21.984100 systemd-logind[1443]: Removed session 23.
Apr 30 03:30:22.017841 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 37998 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:22.019988 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:22.026969 systemd-logind[1443]: New session 24 of user core.
Apr 30 03:30:22.032070 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:30:22.220727 sshd[5734]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:22.226670 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:37998.service: Deactivated successfully.
Apr 30 03:30:22.229111 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:30:22.229759 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:30:22.230955 systemd-logind[1443]: Removed session 24.
Apr 30 03:30:25.466973 kubelet[2580]: E0430 03:30:25.466925 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:30:27.244305 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:54072.service - OpenSSH per-connection server daemon (10.0.0.1:54072).
Apr 30 03:30:27.287990 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 54072 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:27.290023 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:27.295351 systemd-logind[1443]: New session 25 of user core.
Apr 30 03:30:27.303170 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:30:27.478743 sshd[5768]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:27.483082 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:54072.service: Deactivated successfully.
Apr 30 03:30:27.485625 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:30:27.486569 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:30:27.487874 systemd-logind[1443]: Removed session 25.
Apr 30 03:30:29.467782 kubelet[2580]: E0430 03:30:29.467713 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:30:32.491685 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082).
Apr 30 03:30:32.537875 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:32.540072 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:32.544075 systemd-logind[1443]: New session 26 of user core.
Apr 30 03:30:32.554057 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:30:32.684688 sshd[5813]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:32.689067 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:54082.service: Deactivated successfully.
Apr 30 03:30:32.691379 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:30:32.692059 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:30:32.693028 systemd-logind[1443]: Removed session 26.
Apr 30 03:30:37.467304 kubelet[2580]: E0430 03:30:37.467233 2580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:30:37.707286 systemd[1]: Started sshd@26-10.0.0.97:22-10.0.0.1:42040.service - OpenSSH per-connection server daemon (10.0.0.1:42040).
Apr 30 03:30:37.748944 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 42040 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:37.750864 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:37.755445 systemd-logind[1443]: New session 27 of user core.
Apr 30 03:30:37.768106 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 03:30:37.878331 sshd[5851]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:37.883377 systemd[1]: sshd@26-10.0.0.97:22-10.0.0.1:42040.service: Deactivated successfully.
Apr 30 03:30:37.886025 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 03:30:37.886738 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
Apr 30 03:30:37.887885 systemd-logind[1443]: Removed session 27.
Apr 30 03:30:42.892756 systemd[1]: Started sshd@27-10.0.0.97:22-10.0.0.1:42054.service - OpenSSH per-connection server daemon (10.0.0.1:42054).
Apr 30 03:30:42.933883 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 42054 ssh2: RSA SHA256:JqQv5N7VWbGaMrR2Isax9k0rWzP6CK6yVEYZV3EuhEo
Apr 30 03:30:42.935780 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:30:42.940239 systemd-logind[1443]: New session 28 of user core.
Apr 30 03:30:42.949192 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 03:30:43.063177 sshd[5867]: pam_unix(sshd:session): session closed for user core
Apr 30 03:30:43.067306 systemd[1]: sshd@27-10.0.0.97:22-10.0.0.1:42054.service: Deactivated successfully.
Apr 30 03:30:43.069724 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 03:30:43.070514 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit.
Apr 30 03:30:43.071593 systemd-logind[1443]: Removed session 28.