May 13 00:20:26.957517 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 00:20:26.957537 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:20:26.957548 kernel: BIOS-provided physical RAM map:
May 13 00:20:26.957555 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 00:20:26.957561 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 00:20:26.957567 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 00:20:26.957574 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 00:20:26.957580 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 00:20:26.957586 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:20:26.957595 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 00:20:26.957601 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 00:20:26.957607 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 00:20:26.957613 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 00:20:26.957620 kernel: NX (Execute Disable) protection: active
May 13 00:20:26.957627 kernel: APIC: Static calls initialized
May 13 00:20:26.957636 kernel: SMBIOS 2.8 present.
May 13 00:20:26.957643 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 00:20:26.957650 kernel: Hypervisor detected: KVM
May 13 00:20:26.957656 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:20:26.957663 kernel: kvm-clock: using sched offset of 2240883075 cycles
May 13 00:20:26.957670 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:20:26.957677 kernel: tsc: Detected 2794.748 MHz processor
May 13 00:20:26.957684 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:20:26.957700 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:20:26.957706 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 00:20:26.957716 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 00:20:26.957740 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:20:26.957747 kernel: Using GB pages for direct mapping
May 13 00:20:26.957754 kernel: ACPI: Early table checksum verification disabled
May 13 00:20:26.957761 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 00:20:26.957768 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957775 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957782 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957791 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 00:20:26.957798 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957805 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957812 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957819 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:20:26.957825 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 00:20:26.957832 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 00:20:26.957842 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 00:20:26.957852 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 00:20:26.957859 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 00:20:26.957866 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 00:20:26.957873 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 00:20:26.957880 kernel: No NUMA configuration found
May 13 00:20:26.957887 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 00:20:26.957894 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 13 00:20:26.957903 kernel: Zone ranges:
May 13 00:20:26.957911 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:20:26.957918 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 00:20:26.957925 kernel: Normal empty
May 13 00:20:26.957932 kernel: Movable zone start for each node
May 13 00:20:26.957939 kernel: Early memory node ranges
May 13 00:20:26.957946 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 00:20:26.957953 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 00:20:26.957960 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 00:20:26.957969 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:20:26.957976 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 00:20:26.957983 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 00:20:26.957990 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:20:26.957997 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:20:26.958004 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:20:26.958011 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:20:26.958018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:20:26.958026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:20:26.958035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:20:26.958042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:20:26.958049 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:20:26.958056 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:20:26.958063 kernel: TSC deadline timer available
May 13 00:20:26.958070 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:20:26.958077 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 00:20:26.958084 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:20:26.958091 kernel: kvm-guest: setup PV sched yield
May 13 00:20:26.958099 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 00:20:26.958108 kernel: Booting paravirtualized kernel on KVM
May 13 00:20:26.958115 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:20:26.958122 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 00:20:26.958145 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 00:20:26.958160 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 00:20:26.958181 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:20:26.958189 kernel: kvm-guest: PV spinlocks enabled
May 13 00:20:26.958196 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:20:26.958204 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:20:26.958215 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:20:26.958221 kernel: random: crng init done
May 13 00:20:26.958229 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:20:26.958236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:20:26.958243 kernel: Fallback order for Node 0: 0
May 13 00:20:26.958250 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 13 00:20:26.958261 kernel: Policy zone: DMA32
May 13 00:20:26.958268 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:20:26.958278 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 136900K reserved, 0K cma-reserved)
May 13 00:20:26.958285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:20:26.958292 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 00:20:26.958299 kernel: ftrace: allocated 149 pages with 4 groups
May 13 00:20:26.958306 kernel: Dynamic Preempt: voluntary
May 13 00:20:26.958313 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:20:26.958321 kernel: rcu: RCU event tracing is enabled.
May 13 00:20:26.958328 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:20:26.958335 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:20:26.958345 kernel: Rude variant of Tasks RCU enabled.
May 13 00:20:26.958352 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:20:26.958359 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:20:26.958366 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:20:26.958373 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:20:26.958380 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:20:26.958387 kernel: Console: colour VGA+ 80x25
May 13 00:20:26.958394 kernel: printk: console [ttyS0] enabled
May 13 00:20:26.958401 kernel: ACPI: Core revision 20230628
May 13 00:20:26.958410 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:20:26.958417 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:20:26.958424 kernel: x2apic enabled
May 13 00:20:26.958432 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 00:20:26.958439 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 00:20:26.958446 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 00:20:26.958453 kernel: kvm-guest: setup PV IPIs
May 13 00:20:26.958470 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:20:26.958477 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:20:26.958484 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 00:20:26.958492 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:20:26.958499 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:20:26.958509 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:20:26.958516 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:20:26.958524 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:20:26.958531 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:20:26.958539 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:20:26.958549 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:20:26.958556 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:20:26.958564 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 00:20:26.958571 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 00:20:26.958579 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 00:20:26.958587 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 00:20:26.958594 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:20:26.958602 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:20:26.958611 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:20:26.958619 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:20:26.958626 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 00:20:26.958634 kernel: Freeing SMP alternatives memory: 32K
May 13 00:20:26.958641 kernel: pid_max: default: 32768 minimum: 301
May 13 00:20:26.958648 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:20:26.958656 kernel: landlock: Up and running.
May 13 00:20:26.958663 kernel: SELinux: Initializing.
May 13 00:20:26.958670 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:20:26.958680 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:20:26.958695 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:20:26.958702 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:20:26.958710 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:20:26.958738 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:20:26.958745 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:20:26.958753 kernel: ... version: 0
May 13 00:20:26.958760 kernel: ... bit width: 48
May 13 00:20:26.958768 kernel: ... generic registers: 6
May 13 00:20:26.958778 kernel: ... value mask: 0000ffffffffffff
May 13 00:20:26.958785 kernel: ... max period: 00007fffffffffff
May 13 00:20:26.958794 kernel: ... fixed-purpose events: 0
May 13 00:20:26.958803 kernel: ... event mask: 000000000000003f
May 13 00:20:26.958811 kernel: signal: max sigframe size: 1776
May 13 00:20:26.958820 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:20:26.958827 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:20:26.958835 kernel: smp: Bringing up secondary CPUs ...
May 13 00:20:26.958842 kernel: smpboot: x86: Booting SMP configuration:
May 13 00:20:26.958851 kernel: .... node #0, CPUs: #1 #2 #3
May 13 00:20:26.958859 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:20:26.958866 kernel: smpboot: Max logical packages: 1
May 13 00:20:26.958874 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 00:20:26.958881 kernel: devtmpfs: initialized
May 13 00:20:26.958888 kernel: x86/mm: Memory block size: 128MB
May 13 00:20:26.958896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:20:26.958903 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:20:26.958911 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:20:26.958920 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:20:26.958928 kernel: audit: initializing netlink subsys (disabled)
May 13 00:20:26.958935 kernel: audit: type=2000 audit(1747095627.158:1): state=initialized audit_enabled=0 res=1
May 13 00:20:26.958942 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:20:26.958950 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:20:26.958957 kernel: cpuidle: using governor menu
May 13 00:20:26.958965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:20:26.958972 kernel: dca service started, version 1.12.1
May 13 00:20:26.958980 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:20:26.958989 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 00:20:26.958997 kernel: PCI: Using configuration type 1 for base access
May 13 00:20:26.959004 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:20:26.959012 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:20:26.959019 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:20:26.959026 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:20:26.959034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:20:26.959041 kernel: ACPI: Added _OSI(Module Device)
May 13 00:20:26.959048 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:20:26.959058 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:20:26.959074 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:20:26.959089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:20:26.959105 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 00:20:26.959112 kernel: ACPI: Interpreter enabled
May 13 00:20:26.959119 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:20:26.959127 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:20:26.959134 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:20:26.959142 kernel: PCI: Using E820 reservations for host bridge windows
May 13 00:20:26.959151 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:20:26.959163 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:20:26.959338 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:20:26.959467 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:20:26.959588 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:20:26.959598 kernel: PCI host bridge to bus 0000:00
May 13 00:20:26.959791 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:20:26.959913 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:20:26.960023 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:20:26.960131 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:20:26.960243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:20:26.960354 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 00:20:26.960465 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:20:26.960603 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:20:26.960770 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:20:26.960894 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 13 00:20:26.961013 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 13 00:20:26.961133 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 13 00:20:26.961253 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:20:26.961384 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:20:26.961509 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 13 00:20:26.961630 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 13 00:20:26.961774 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 00:20:26.961905 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:20:26.962026 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 13 00:20:26.962145 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 13 00:20:26.962265 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 00:20:26.962398 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:20:26.962518 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 13 00:20:26.962638 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 13 00:20:26.962794 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 00:20:26.962917 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 13 00:20:26.963045 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:20:26.963165 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:20:26.963297 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:20:26.963416 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 13 00:20:26.963534 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 13 00:20:26.963661 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:20:26.963811 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 13 00:20:26.963822 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:20:26.963830 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:20:26.963841 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:20:26.963849 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:20:26.963857 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:20:26.963864 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:20:26.963871 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:20:26.963879 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:20:26.963886 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:20:26.963894 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:20:26.963901 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:20:26.963911 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:20:26.963918 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:20:26.963926 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:20:26.963933 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:20:26.963941 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:20:26.963948 kernel: iommu: Default domain type: Translated
May 13 00:20:26.963955 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:20:26.963963 kernel: PCI: Using ACPI for IRQ routing
May 13 00:20:26.963971 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:20:26.963980 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 00:20:26.963988 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 00:20:26.964109 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:20:26.964229 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:20:26.964348 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:20:26.964358 kernel: vgaarb: loaded
May 13 00:20:26.964366 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:20:26.964373 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:20:26.964384 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:20:26.964392 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:20:26.964399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:20:26.964407 kernel: pnp: PnP ACPI init
May 13 00:20:26.964542 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:20:26.964553 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:20:26.964560 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:20:26.964568 kernel: NET: Registered PF_INET protocol family
May 13 00:20:26.964578 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:20:26.964586 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:20:26.964594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:20:26.964602 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:20:26.964609 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:20:26.964617 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:20:26.964624 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:20:26.964632 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:20:26.964639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:20:26.964649 kernel: NET: Registered PF_XDP protocol family
May 13 00:20:26.964784 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:20:26.964896 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:20:26.965006 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:20:26.965115 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:20:26.965226 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:20:26.965334 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 00:20:26.965344 kernel: PCI: CLS 0 bytes, default 64
May 13 00:20:26.965356 kernel: Initialise system trusted keyrings
May 13 00:20:26.965363 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:20:26.965371 kernel: Key type asymmetric registered
May 13 00:20:26.965378 kernel: Asymmetric key parser 'x509' registered
May 13 00:20:26.965385 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 00:20:26.965393 kernel: io scheduler mq-deadline registered
May 13 00:20:26.965400 kernel: io scheduler kyber registered
May 13 00:20:26.965407 kernel: io scheduler bfq registered
May 13 00:20:26.965415 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:20:26.965423 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:20:26.965433 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:20:26.965440 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:20:26.965448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:20:26.965456 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:20:26.965463 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:20:26.965471 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:20:26.965478 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:20:26.965602 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:20:26.965617 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:20:26.965830 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:20:26.965947 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:20:26 UTC (1747095626)
May 13 00:20:26.966059 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:20:26.966069 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 00:20:26.966077 kernel: NET: Registered PF_INET6 protocol family
May 13 00:20:26.966084 kernel: Segment Routing with IPv6
May 13 00:20:26.966092 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:20:26.966103 kernel: NET: Registered PF_PACKET protocol family
May 13 00:20:26.966111 kernel: Key type dns_resolver registered
May 13 00:20:26.966118 kernel: IPI shorthand broadcast: enabled
May 13 00:20:26.966125 kernel: sched_clock: Marking stable (646002442, 104425357)->(763830478, -13402679)
May 13 00:20:26.966133 kernel: registered taskstats version 1
May 13 00:20:26.966140 kernel: Loading compiled-in X.509 certificates
May 13 00:20:26.966148 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 00:20:26.966155 kernel: Key type .fscrypt registered
May 13 00:20:26.966162 kernel: Key type fscrypt-provisioning registered
May 13 00:20:26.966172 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:20:26.966180 kernel: ima: Allocated hash algorithm: sha1
May 13 00:20:26.966187 kernel: ima: No architecture policies found
May 13 00:20:26.966195 kernel: clk: Disabling unused clocks
May 13 00:20:26.966202 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 00:20:26.966209 kernel: Write protecting the kernel read-only data: 36864k
May 13 00:20:26.966217 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 00:20:26.966224 kernel: Run /init as init process
May 13 00:20:26.966231 kernel: with arguments:
May 13 00:20:26.966241 kernel: /init
May 13 00:20:26.966248 kernel: with environment:
May 13 00:20:26.966256 kernel: HOME=/
May 13 00:20:26.966263 kernel: TERM=linux
May 13 00:20:26.966270 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:20:26.966279 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:20:26.966289 systemd[1]: Detected virtualization kvm.
May 13 00:20:26.966297 systemd[1]: Detected architecture x86-64.
May 13 00:20:26.966307 systemd[1]: Running in initrd.
May 13 00:20:26.966315 systemd[1]: No hostname configured, using default hostname.
May 13 00:20:26.966323 systemd[1]: Hostname set to <localhost>.
May 13 00:20:26.966331 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:20:26.966339 systemd[1]: Queued start job for default target initrd.target.
May 13 00:20:26.966347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:20:26.966355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:20:26.966364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:20:26.966374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:20:26.966394 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:20:26.966404 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:20:26.966414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:20:26.966424 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:20:26.966433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:20:26.966441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:20:26.966449 systemd[1]: Reached target paths.target - Path Units.
May 13 00:20:26.966458 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:20:26.966466 systemd[1]: Reached target swap.target - Swaps.
May 13 00:20:26.966474 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:20:26.966482 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:20:26.966490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:20:26.966501 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:20:26.966509 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:20:26.966517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:20:26.966526 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:20:26.966534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:20:26.966542 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:20:26.966550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:20:26.966558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:20:26.966566 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:20:26.966577 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:20:26.966585 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:20:26.966593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:20:26.966601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:26.966609 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:20:26.966618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:20:26.966626 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:20:26.966637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:20:26.966664 systemd-journald[192]: Collecting audit messages is disabled.
May 13 00:20:26.966684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:20:26.966704 systemd-journald[192]: Journal started
May 13 00:20:26.966737 systemd-journald[192]: Runtime Journal (/run/log/journal/76829d1c87f84618a8927c17a68f7198) is 6.0M, max 48.4M, 42.3M free.
May 13 00:20:26.957040 systemd-modules-load[194]: Inserted module 'overlay'
May 13 00:20:26.996271 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:20:26.996287 kernel: Bridge firewalling registered
May 13 00:20:26.996297 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:20:26.983672 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 13 00:20:26.992381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:20:26.992826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:26.998966 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:20:27.001039 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:20:27.002007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:20:27.006058 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:20:27.016263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:20:27.020501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:20:27.023388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:20:27.026068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:20:27.040853 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:20:27.044134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:20:27.052176 dracut-cmdline[228]: dracut-dracut-053
May 13 00:20:27.055155 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:20:27.078332 systemd-resolved[233]: Positive Trust Anchors:
May 13 00:20:27.078352 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:20:27.078383 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:20:27.080836 systemd-resolved[233]: Defaulting to hostname 'linux'.
May 13 00:20:27.081869 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:20:27.087323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:20:27.151751 kernel: SCSI subsystem initialized
May 13 00:20:27.160743 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:20:27.171744 kernel: iscsi: registered transport (tcp)
May 13 00:20:27.191940 kernel: iscsi: registered transport (qla4xxx)
May 13 00:20:27.191973 kernel: QLogic iSCSI HBA Driver
May 13 00:20:27.241272 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:20:27.252843 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:20:27.288750 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:20:27.288785 kernel: device-mapper: uevent: version 1.0.3
May 13 00:20:27.288804 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:20:27.329748 kernel: raid6: avx2x4 gen() 30622 MB/s
May 13 00:20:27.346740 kernel: raid6: avx2x2 gen() 31297 MB/s
May 13 00:20:27.363829 kernel: raid6: avx2x1 gen() 25114 MB/s
May 13 00:20:27.363855 kernel: raid6: using algorithm avx2x2 gen() 31297 MB/s
May 13 00:20:27.381873 kernel: raid6: .... xor() 19786 MB/s, rmw enabled
May 13 00:20:27.381907 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:20:27.408748 kernel: xor: automatically using best checksumming function avx
May 13 00:20:27.574758 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:20:27.590241 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:20:27.599914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:20:27.613285 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 13 00:20:27.617974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:20:27.626901 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:20:27.665198 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
May 13 00:20:27.674441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:20:27.677986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:20:27.745978 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:20:27.757516 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:20:27.771283 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:20:27.773848 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:20:27.777004 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:20:27.779661 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:20:27.783754 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 00:20:27.791091 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:20:27.793388 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:20:27.793407 kernel: GPT:9289727 != 19775487
May 13 00:20:27.793417 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:20:27.793427 kernel: GPT:9289727 != 19775487
May 13 00:20:27.793436 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:20:27.793446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:27.793464 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:20:27.795173 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:20:27.812789 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:20:27.812822 kernel: AES CTR mode by8 optimization enabled
May 13 00:20:27.813819 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:20:27.821749 kernel: libata version 3.00 loaded.
May 13 00:20:27.830743 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (472)
May 13 00:20:27.832771 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (473)
May 13 00:20:27.835313 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:20:27.835507 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:20:27.837737 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:20:27.837899 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:20:27.843740 kernel: scsi host0: ahci
May 13 00:20:27.843930 kernel: scsi host1: ahci
May 13 00:20:27.844084 kernel: scsi host2: ahci
May 13 00:20:27.844350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:20:27.845681 kernel: scsi host3: ahci
May 13 00:20:27.845982 kernel: scsi host4: ahci
May 13 00:20:27.846125 kernel: scsi host5: ahci
May 13 00:20:27.847559 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 13 00:20:27.847580 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 13 00:20:27.849462 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 13 00:20:27.851086 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 13 00:20:27.851102 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 13 00:20:27.852710 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 13 00:20:27.858246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:20:27.865916 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:20:27.868479 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:20:27.877601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:20:27.891893 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:20:27.894550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:20:27.894624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:20:27.899485 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:20:27.902823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:20:27.904152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:27.907783 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:27.907804 disk-uuid[551]: Primary Header is updated.
disk-uuid[551]: Secondary Entries is updated.
disk-uuid[551]: Secondary Header is updated.
May 13 00:20:27.912408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:27.907810 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:27.914763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:27.920297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:20:27.981065 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:20:27.998109 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:20:28.022222 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:20:28.163885 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:20:28.163961 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:20:28.163973 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:20:28.165629 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:20:28.165750 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:20:28.166754 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:20:28.166770 kernel: ata3.00: applying bridge limits
May 13 00:20:28.167745 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:20:28.168747 kernel: ata3.00: configured for UDMA/100
May 13 00:20:28.170745 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:20:28.225755 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:20:28.226128 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:20:28.238749 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:20:28.915779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:20:28.915838 disk-uuid[552]: The operation has completed successfully.
May 13 00:20:28.944223 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:20:28.945299 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:20:28.970928 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:20:28.974512 sh[596]: Success
May 13 00:20:28.987765 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:20:29.017495 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:20:29.036838 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:20:29.039105 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:20:29.051499 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 00:20:29.051532 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 00:20:29.051543 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:20:29.053295 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:20:29.053311 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:20:29.058812 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:20:29.059770 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:20:29.065971 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:20:29.067970 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:20:29.077460 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:29.077497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:20:29.077508 kernel: BTRFS info (device vda6): using free space tree
May 13 00:20:29.080751 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:20:29.090712 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:20:29.092527 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:29.101858 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:20:29.109866 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:20:29.165518 ignition[690]: Ignition 2.19.0
May 13 00:20:29.165530 ignition[690]: Stage: fetch-offline
May 13 00:20:29.165568 ignition[690]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:29.165580 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:29.165681 ignition[690]: parsed url from cmdline: ""
May 13 00:20:29.165685 ignition[690]: no config URL provided
May 13 00:20:29.165692 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:20:29.165703 ignition[690]: no config at "/usr/lib/ignition/user.ign"
May 13 00:20:29.165750 ignition[690]: op(1): [started] loading QEMU firmware config module
May 13 00:20:29.165757 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:20:29.177115 ignition[690]: op(1): [finished] loading QEMU firmware config module
May 13 00:20:29.193219 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:20:29.206032 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:20:29.221551 ignition[690]: parsing config with SHA512: 40826a7f5dbc4b79e31ac37f07542376d5ab9e75f8c6e700df77eb070badf6d5190048171460470428a8d4c8d29c9f4d31ffc113b82d43caef7a05addccc70dd
May 13 00:20:29.225245 unknown[690]: fetched base config from "system"
May 13 00:20:29.225260 unknown[690]: fetched user config from "qemu"
May 13 00:20:29.226920 ignition[690]: fetch-offline: fetch-offline passed
May 13 00:20:29.227004 ignition[690]: Ignition finished successfully
May 13 00:20:29.228761 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:20:29.232219 systemd-networkd[784]: lo: Link UP
May 13 00:20:29.232232 systemd-networkd[784]: lo: Gained carrier
May 13 00:20:29.234074 systemd-networkd[784]: Enumeration completed
May 13 00:20:29.234194 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:20:29.234452 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:20:29.234457 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:20:29.235515 systemd-networkd[784]: eth0: Link UP
May 13 00:20:29.235518 systemd-networkd[784]: eth0: Gained carrier
May 13 00:20:29.235525 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:20:29.236746 systemd[1]: Reached target network.target - Network.
May 13 00:20:29.238749 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:20:29.249883 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:20:29.255788 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:20:29.262287 ignition[787]: Ignition 2.19.0
May 13 00:20:29.262301 ignition[787]: Stage: kargs
May 13 00:20:29.262487 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:29.262501 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:29.263567 ignition[787]: kargs: kargs passed
May 13 00:20:29.263613 ignition[787]: Ignition finished successfully
May 13 00:20:29.266562 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:20:29.275871 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:20:29.288979 ignition[797]: Ignition 2.19.0
May 13 00:20:29.288990 ignition[797]: Stage: disks
May 13 00:20:29.289176 ignition[797]: no configs at "/usr/lib/ignition/base.d"
May 13 00:20:29.289187 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:29.290089 ignition[797]: disks: disks passed
May 13 00:20:29.292212 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:20:29.290140 ignition[797]: Ignition finished successfully
May 13 00:20:29.293843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:20:29.295342 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:20:29.297502 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:20:29.298555 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:20:29.298974 systemd[1]: Reached target basic.target - Basic System.
May 13 00:20:29.310892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:20:29.324956 systemd-resolved[233]: Detected conflict on linux IN A 10.0.0.52
May 13 00:20:29.324971 systemd-resolved[233]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
May 13 00:20:29.327670 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:20:29.333981 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:20:29.340860 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:20:29.429660 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:20:29.431370 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 00:20:29.431122 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:20:29.438810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:20:29.439877 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:20:29.441289 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:20:29.441323 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:20:29.441342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:20:29.449716 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:20:29.452751 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
May 13 00:20:29.452780 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:29.455273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:20:29.455294 kernel: BTRFS info (device vda6): using free space tree
May 13 00:20:29.456878 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:20:29.459766 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:20:29.461580 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:20:29.491586 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:20:29.495465 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 13 00:20:29.500220 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:20:29.503628 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:20:29.582558 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:20:29.591831 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:20:29.594486 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:20:29.601752 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:29.618663 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:20:29.622701 ignition[929]: INFO : Ignition 2.19.0
May 13 00:20:29.622701 ignition[929]: INFO : Stage: mount
May 13 00:20:29.624431 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:20:29.624431 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:20:29.627199 ignition[929]: INFO : mount: mount passed
May 13 00:20:29.627998 ignition[929]: INFO : Ignition finished successfully
May 13 00:20:29.631195 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:20:29.650819 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:20:30.051179 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:20:30.059902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:20:30.066749 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944)
May 13 00:20:30.066782 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:20:30.068251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:20:30.068272 kernel: BTRFS info (device vda6): using free space tree
May 13 00:20:30.071757 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:20:30.072500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:20:30.098348 ignition[961]: INFO : Ignition 2.19.0 May 13 00:20:30.098348 ignition[961]: INFO : Stage: files May 13 00:20:30.100421 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:20:30.100421 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:20:30.100421 ignition[961]: DEBUG : files: compiled without relabeling support, skipping May 13 00:20:30.100421 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:20:30.100421 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:20:30.108434 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:20:30.108434 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:20:30.108434 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:20:30.108434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:20:30.108434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:20:30.108434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:20:30.108434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 00:20:30.102870 unknown[961]: wrote ssh authorized keys file for user: core May 13 00:20:30.190902 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:20:30.347602 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:20:30.347602 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:20:30.351412 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:20:30.353122 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:20:30.354900 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:20:30.356606 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:20:30.358380 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:20:30.360113 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:20:30.361884 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:20:30.363806 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:20:30.365712 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:20:30.367510 ignition[961]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:20:30.370086 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:20:30.372539 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:20:30.374715 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 00:20:30.763479 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:20:30.812950 systemd-networkd[784]: eth0: Gained IPv6LL May 13 00:20:31.261384 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:20:31.261384 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:20:31.265244 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:20:31.265244 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 13 00:20:31.265244 ignition[961]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:20:31.290480 ignition[961]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:20:31.295742 ignition[961]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:20:31.297325 ignition[961]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:20:31.297325 ignition[961]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" May 
13 00:20:31.297325 ignition[961]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:20:31.297325 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:20:31.297325 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:20:31.297325 ignition[961]: INFO : files: files passed May 13 00:20:31.297325 ignition[961]: INFO : Ignition finished successfully May 13 00:20:31.298753 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:20:31.311888 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:20:31.314502 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:20:31.316365 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:20:31.316469 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:20:31.324058 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory May 13 00:20:31.326840 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:20:31.326840 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:20:31.330163 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:20:31.333272 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:20:31.334749 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:20:31.345858 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:20:31.370353 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:20:31.370500 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:20:31.371448 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:20:31.374207 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:20:31.374579 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:20:31.381836 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:20:31.399043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:20:31.420985 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:20:31.430710 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:20:31.433086 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:20:31.434378 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:20:31.436344 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:20:31.436463 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:20:31.438636 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 00:20:31.440362 systemd[1]: Stopped target basic.target - Basic System. May 13 00:20:31.442402 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
May 13 00:20:31.444460 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:20:31.446488 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:20:31.448661 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:20:31.450802 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:20:31.453109 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 00:20:31.455145 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:20:31.457315 systemd[1]: Stopped target swap.target - Swaps. May 13 00:20:31.459097 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:20:31.459225 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:20:31.461350 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:20:31.462974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:20:31.465068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:20:31.465192 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:20:31.467290 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:20:31.467397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:20:31.469617 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:20:31.469745 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:20:31.471773 systemd[1]: Stopped target paths.target - Path Units. May 13 00:20:31.473518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:20:31.476806 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:20:31.479668 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:20:31.481584 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:20:31.483615 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:20:31.483712 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:20:31.485450 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:20:31.485541 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:20:31.487522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:20:31.487645 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:20:31.490165 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:20:31.490270 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:20:31.502934 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:20:31.504351 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:20:31.505782 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:20:31.505990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:20:31.508044 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:20:31.508191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:20:31.514154 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 13 00:20:31.517701 ignition[1015]: INFO : Ignition 2.19.0 May 13 00:20:31.517701 ignition[1015]: INFO : Stage: umount May 13 00:20:31.517701 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:20:31.517701 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:20:31.517701 ignition[1015]: INFO : umount: umount passed May 13 00:20:31.517701 ignition[1015]: INFO : Ignition finished successfully May 13 00:20:31.514267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:20:31.518151 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:20:31.518283 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:20:31.519209 systemd[1]: Stopped target network.target - Network. May 13 00:20:31.520652 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:20:31.520707 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:20:31.521187 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:20:31.521230 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:20:31.521517 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:20:31.521558 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:20:31.522043 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:20:31.522087 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:20:31.522511 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:20:31.530159 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:20:31.539767 systemd-networkd[784]: eth0: DHCPv6 lease lost May 13 00:20:31.540226 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:20:31.540352 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:20:31.542379 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:20:31.542506 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:20:31.545556 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:20:31.545623 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:20:31.550806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:20:31.551403 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:20:31.551459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:20:31.551968 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:20:31.552013 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:20:31.552280 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:20:31.552321 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:20:31.552625 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:20:31.552669 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:20:31.553206 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:20:31.566052 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:20:31.566207 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
May 13 00:20:31.575701 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:20:31.575898 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:20:31.576584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:20:31.576634 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:20:31.579465 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:20:31.579503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:20:31.579947 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:20:31.579993 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:20:31.580652 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:20:31.580696 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:20:31.581473 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:20:31.581517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:20:31.593998 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:20:31.594469 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:20:31.594541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:20:31.595036 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:20:31.595090 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:20:31.595338 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:20:31.595389 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:20:31.595681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:20:31.595750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:20:31.604247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:20:31.604375 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:20:31.625543 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:20:31.829469 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:20:31.829609 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:20:31.830458 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:20:31.832739 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:20:31.832793 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:20:31.845860 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:20:31.854114 systemd[1]: Switching root. May 13 00:20:31.887961 systemd-journald[192]: Journal stopped May 13 00:20:33.076039 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
May 13 00:20:33.076110 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:20:33.076133 kernel: SELinux: policy capability open_perms=1 May 13 00:20:33.076149 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:20:33.076160 kernel: SELinux: policy capability always_check_network=0 May 13 00:20:33.076177 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:20:33.076189 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:20:33.076200 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:20:33.076211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:20:33.076222 kernel: audit: type=1403 audit(1747095632.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:20:33.076234 systemd[1]: Successfully loaded SELinux policy in 40.066ms. May 13 00:20:33.076257 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.275ms. May 13 00:20:33.076270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:20:33.076282 systemd[1]: Detected virtualization kvm. May 13 00:20:33.076294 systemd[1]: Detected architecture x86-64. May 13 00:20:33.076306 systemd[1]: Detected first boot. May 13 00:20:33.076318 systemd[1]: Initializing machine ID from VM UUID. May 13 00:20:33.076330 zram_generator::config[1076]: No configuration found. May 13 00:20:33.076343 systemd[1]: Populated /etc with preset unit settings. May 13 00:20:33.076360 systemd[1]: Queued start job for default target multi-user.target. May 13 00:20:33.076371 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:20:33.076384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:20:33.076396 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:20:33.076407 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:20:33.076419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:20:33.076436 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:20:33.076448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:20:33.076462 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:20:33.076473 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:20:33.076485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:20:33.076497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:20:33.076509 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:20:33.076529 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:20:33.076542 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:20:33.076554 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:20:33.076571 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
May 13 00:20:33.076585 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:20:33.076597 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:20:33.076609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:20:33.076621 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:20:33.076634 systemd[1]: Reached target slices.target - Slice Units. May 13 00:20:33.076646 systemd[1]: Reached target swap.target - Swaps. May 13 00:20:33.076658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:20:33.076669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:20:33.076683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:20:33.076701 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:20:33.076713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:20:33.076738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:20:33.076750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:20:33.076761 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:20:33.076773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:20:33.076785 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:20:33.076796 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:20:33.076808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:33.076823 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:20:33.076835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:20:33.076847 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:20:33.076859 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:20:33.076871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:33.076883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:20:33.076895 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:20:33.076908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:33.076921 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:20:33.076934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:33.076946 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:20:33.076958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:33.076970 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:20:33.076983 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 00:20:33.076995 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
May 13 00:20:33.077007 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:20:33.077021 kernel: fuse: init (API version 7.39) May 13 00:20:33.077032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:20:33.077043 kernel: loop: module loaded May 13 00:20:33.077055 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:20:33.077067 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:20:33.077079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:20:33.077091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:33.077104 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:20:33.077116 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:20:33.077130 kernel: ACPI: bus type drm_connector registered May 13 00:20:33.077159 systemd-journald[1161]: Collecting audit messages is disabled. May 13 00:20:33.077181 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:20:33.077193 systemd-journald[1161]: Journal started May 13 00:20:33.077215 systemd-journald[1161]: Runtime Journal (/run/log/journal/76829d1c87f84618a8927c17a68f7198) is 6.0M, max 48.4M, 42.3M free. May 13 00:20:33.080086 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:20:33.081479 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:20:33.083083 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:20:33.084341 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:20:33.085681 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:20:33.087627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:20:33.089208 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:20:33.089422 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:20:33.090972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:33.091192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:33.092897 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:20:33.093106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:20:33.094590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:33.094819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:33.096463 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:20:33.096676 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:20:33.098091 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:33.098322 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:33.099959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:20:33.101611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:20:33.103235 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
May 13 00:20:33.117127 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:20:33.126809 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:20:33.129117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:20:33.130275 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:20:33.131866 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:20:33.137074 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:20:33.138403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:33.140853 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:20:33.142107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:20:33.144849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:20:33.147846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:20:33.152529 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:20:33.155046 systemd-journald[1161]: Time spent on flushing to /var/log/journal/76829d1c87f84618a8927c17a68f7198 is 21.321ms for 944 entries. May 13 00:20:33.155046 systemd-journald[1161]: System Journal (/var/log/journal/76829d1c87f84618a8927c17a68f7198) is 8.0M, max 195.6M, 187.6M free. May 13 00:20:33.196589 systemd-journald[1161]: Received client request to flush runtime journal. May 13 00:20:33.156277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:20:33.163275 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:20:33.165301 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:20:33.168700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:20:33.172952 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:20:33.177233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:20:33.187086 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:20:33.199197 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:20:33.200210 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. May 13 00:20:33.200230 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. May 13 00:20:33.206382 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:20:33.225915 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:20:33.249259 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:20:33.257955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:20:33.274667 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. May 13 00:20:33.274687 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. 
May 13 00:20:33.280290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:20:33.693981 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:20:33.705867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:20:33.729582 systemd-udevd[1241]: Using default interface naming scheme 'v255'. May 13 00:20:33.744980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:20:33.752882 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:20:33.766886 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:20:33.788745 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 13 00:20:33.792541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1252) May 13 00:20:33.823578 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:20:33.847788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:20:33.850742 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:20:33.859749 kernel: ACPI: button: Power Button [PWRF] May 13 00:20:33.863738 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:20:33.875741 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:20:33.881953 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:20:33.897436 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:20:33.909738 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:20:33.910289 systemd-networkd[1247]: lo: Link UP May 13 00:20:33.910618 systemd-networkd[1247]: lo: Gained carrier May 13 00:20:33.913302 systemd-networkd[1247]: Enumeration completed May 13 00:20:33.914334 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:20:33.914410 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:20:33.915423 systemd-networkd[1247]: eth0: Link UP May 13 00:20:33.915532 systemd-networkd[1247]: eth0: Gained carrier May 13 00:20:33.915595 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:20:33.926954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:20:33.928272 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:20:33.934613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:20:33.980049 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:20:33.991052 kernel: kvm_amd: TSC scaling supported May 13 00:20:33.991101 kernel: kvm_amd: Nested Virtualization enabled May 13 00:20:33.991117 kernel: kvm_amd: Nested Paging enabled May 13 00:20:33.992312 kernel: kvm_amd: LBR virtualization supported May 13 00:20:33.992340 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 13 00:20:33.993017 kernel: kvm_amd: Virtual GIF supported May 13 00:20:34.013755 kernel: EDAC MC: Ver: 3.0.0 May 13 00:20:34.046198 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
May 13 00:20:34.058863 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:20:34.060575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:20:34.068023 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:20:34.102689 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:20:34.104223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:20:34.116830 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:20:34.121322 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:20:34.154596 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:20:34.156063 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:20:34.157388 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:20:34.157415 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:20:34.158481 systemd[1]: Reached target machines.target - Containers. May 13 00:20:34.160499 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:20:34.172839 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:20:34.175169 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:20:34.176333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:34.177250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:20:34.180386 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:20:34.185832 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:20:34.188172 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:20:34.198758 kernel: loop0: detected capacity change from 0 to 142488 May 13 00:20:34.203363 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:20:34.212972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:20:34.213749 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:20:34.221748 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:20:34.239748 kernel: loop1: detected capacity change from 0 to 140768 May 13 00:20:34.277755 kernel: loop2: detected capacity change from 0 to 210664 May 13 00:20:34.307741 kernel: loop3: detected capacity change from 0 to 142488 May 13 00:20:34.317761 kernel: loop4: detected capacity change from 0 to 140768 May 13 00:20:34.328749 kernel: loop5: detected capacity change from 0 to 210664 May 13 00:20:34.333596 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:20:34.334233 (sd-merge)[1313]: Merged extensions into '/usr'. May 13 00:20:34.338839 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:20:34.338856 systemd[1]: Reloading... 
May 13 00:20:34.389768 zram_generator::config[1340]: No configuration found. May 13 00:20:34.424371 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:20:34.519745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:20:34.583268 systemd[1]: Reloading finished in 243 ms. May 13 00:20:34.603444 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:20:34.618126 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:20:34.638838 systemd[1]: Starting ensure-sysext.service... May 13 00:20:34.640800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:20:34.644841 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... May 13 00:20:34.644856 systemd[1]: Reloading... May 13 00:20:34.663826 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:20:34.664195 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:20:34.665212 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:20:34.665518 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. May 13 00:20:34.665600 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. May 13 00:20:34.673133 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:20:34.673146 systemd-tmpfiles[1386]: Skipping /boot May 13 00:20:34.686807 zram_generator::config[1417]: No configuration found. May 13 00:20:34.687801 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:20:34.687814 systemd-tmpfiles[1386]: Skipping /boot May 13 00:20:34.804189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:20:34.873653 systemd[1]: Reloading finished in 228 ms. May 13 00:20:34.894694 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:20:34.916378 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:20:34.919354 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:20:34.922370 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:20:34.927945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:20:34.931539 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:20:34.935552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:34.936075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:34.938638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:34.941789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 13 00:20:34.947069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:34.949324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:34.949785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:34.955008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:34.955247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:34.957247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:34.957475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:34.960123 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:34.960384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:34.966340 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:20:34.973771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:20:34.977411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:34.977840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:20:34.982064 augenrules[1494]: No rules May 13 00:20:34.985029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:34.987713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:34.992919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:34.994793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:34.996993 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:20:34.998954 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:35.001527 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:20:35.003956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:35.004166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:35.006250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:35.006498 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:35.008317 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:35.008623 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:35.016237 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:20:35.018056 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:20:35.022065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:35.022254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 13 00:20:35.026071 systemd-resolved[1463]: Positive Trust Anchors: May 13 00:20:35.026095 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:20:35.026128 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:20:35.028857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:20:35.029881 systemd-resolved[1463]: Defaulting to hostname 'linux'. May 13 00:20:35.031019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:20:35.032935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:20:35.036592 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:20:35.037903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:20:35.038040 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:20:35.038146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:20:35.038579 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:20:35.040330 systemd[1]: Finished ensure-sysext.service. May 13 00:20:35.041703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:20:35.041959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:20:35.043561 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:20:35.043788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:20:35.054008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:20:35.054224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:20:35.058303 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:20:35.058591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:20:35.062806 systemd[1]: Reached target network.target - Network. May 13 00:20:35.063901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:20:35.065142 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:20:35.065218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:20:35.075920 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:20:35.136221 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:20:35.137666 systemd[1]: Reached target sysinit.target - System Initialization. 
May 13 00:20:35.138862 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:20:36.428923 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:20:36.428960 systemd-resolved[1463]: Clock change detected. Flushing caches. May 13 00:20:36.430208 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:20:36.430236 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:20:36.431481 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:20:36.431487 systemd-timesyncd[1531]: Initial clock synchronization to Tue 2025-05-13 00:20:36.428902 UTC. May 13 00:20:36.431510 systemd[1]: Reached target paths.target - Path Units. May 13 00:20:36.432430 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:20:36.433615 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:20:36.434838 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:20:36.436098 systemd[1]: Reached target timers.target - Timer Units. May 13 00:20:36.437516 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:20:36.440443 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:20:36.442795 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:20:36.448681 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:20:36.449801 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:20:36.450768 systemd[1]: Reached target basic.target - Basic System. May 13 00:20:36.451866 systemd[1]: System is tainted: cgroupsv1 May 13 00:20:36.451903 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:20:36.451923 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:20:36.453152 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:20:36.455298 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:20:36.457281 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:20:36.460641 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:20:36.461729 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:20:36.463891 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:20:36.466747 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:20:36.471504 jq[1537]: false May 13 00:20:36.470867 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:20:36.479030 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:20:36.486643 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:20:36.489896 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 13 00:20:36.491204 extend-filesystems[1538]: Found loop3 May 13 00:20:36.494658 extend-filesystems[1538]: Found loop4 May 13 00:20:36.494658 extend-filesystems[1538]: Found loop5 May 13 00:20:36.494658 extend-filesystems[1538]: Found sr0 May 13 00:20:36.494658 extend-filesystems[1538]: Found vda May 13 00:20:36.494658 extend-filesystems[1538]: Found vda1 May 13 00:20:36.494658 extend-filesystems[1538]: Found vda2 May 13 00:20:36.494658 extend-filesystems[1538]: Found vda3 May 13 00:20:36.494658 extend-filesystems[1538]: Found usr May 13 00:20:36.494658 extend-filesystems[1538]: Found vda4 May 13 00:20:36.494658 extend-filesystems[1538]: Found vda6 May 13 00:20:36.494658 extend-filesystems[1538]: Found vda7 May 13 00:20:36.494658 extend-filesystems[1538]: Found vda9 May 13 00:20:36.494658 extend-filesystems[1538]: Checking size of /dev/vda9 May 13 00:20:36.529248 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:20:36.496419 dbus-daemon[1536]: [system] SELinux support is enabled May 13 00:20:36.530976 extend-filesystems[1538]: Resized partition /dev/vda9 May 13 00:20:36.541570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1249) May 13 00:20:36.494667 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:20:36.541747 extend-filesystems[1564]: resize2fs 1.47.1 (20-May-2024) May 13 00:20:36.496884 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:20:36.545335 update_engine[1556]: I20250513 00:20:36.528159 1556 main.cc:92] Flatcar Update Engine starting May 13 00:20:36.545335 update_engine[1556]: I20250513 00:20:36.529629 1556 update_check_scheduler.cc:74] Next update check in 2m1s May 13 00:20:36.498994 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:20:36.548949 jq[1559]: true May 13 00:20:36.513692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:20:36.514018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:20:36.514358 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:20:36.514679 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:20:36.532732 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:20:36.533049 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:20:36.554795 jq[1570]: true May 13 00:20:36.557607 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:20:36.574592 systemd[1]: Started update-engine.service - Update Engine. May 13 00:20:36.577414 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:20:36.578422 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:20:36.578454 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:20:36.579841 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:20:36.579855 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 13 00:20:36.581695 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:20:36.589583 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:20:36.601276 systemd-logind[1552]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:20:36.601296 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:20:36.602504 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:20:36.602504 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:20:36.602504 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:20:36.611301 extend-filesystems[1538]: Resized filesystem in /dev/vda9 May 13 00:20:36.602849 systemd-logind[1552]: New seat seat0. May 13 00:20:36.617806 tar[1567]: linux-amd64/helm May 13 00:20:36.605772 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:20:36.606107 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:20:36.608499 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:20:36.622916 bash[1595]: Updated "/home/core/.ssh/authorized_keys" May 13 00:20:36.622424 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:20:36.626972 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:20:36.633012 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:20:36.753725 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:20:36.766208 containerd[1571]: time="2025-05-13T00:20:36.766111289Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:20:36.775781 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:20:36.788682 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:20:36.793263 containerd[1571]: time="2025-05-13T00:20:36.793226890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.794844 containerd[1571]: time="2025-05-13T00:20:36.794816782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:36.794921 containerd[1571]: time="2025-05-13T00:20:36.794907122Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:20:36.794969 containerd[1571]: time="2025-05-13T00:20:36.794957025Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:20:36.797024 containerd[1571]: time="2025-05-13T00:20:36.795172660Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:20:36.796258 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:20:36.796599 systemd[1]: Finished issuegen.service - Generate /run/issue. 
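The resize2fs figures in the extend-filesystems entries above are counts of 4 KiB blocks, so the online grow took /dev/vda9 from roughly 2.1 GiB to 7.1 GiB. A quick sketch of that conversion (the block counts are copied from the log; the byte math is the only addition):

    BLOCK = 4096  # resize2fs reports "(4k) blocks"
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB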
May 13 00:20:36.797374 containerd[1571]: time="2025-05-13T00:20:36.797182209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.797506 containerd[1571]: time="2025-05-13T00:20:36.797490127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:36.799493 containerd[1571]: time="2025-05-13T00:20:36.799441918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.799850 containerd[1571]: time="2025-05-13T00:20:36.799821219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:36.799850 containerd[1571]: time="2025-05-13T00:20:36.799844222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.799894 containerd[1571]: time="2025-05-13T00:20:36.799864170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:36.799894 containerd[1571]: time="2025-05-13T00:20:36.799875150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.799998 containerd[1571]: time="2025-05-13T00:20:36.799974948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.800257 containerd[1571]: time="2025-05-13T00:20:36.800237821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:20:36.800506 containerd[1571]: time="2025-05-13T00:20:36.800485405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:20:36.800506 containerd[1571]: time="2025-05-13T00:20:36.800503259Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:20:36.800625 containerd[1571]: time="2025-05-13T00:20:36.800607654Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:20:36.800685 containerd[1571]: time="2025-05-13T00:20:36.800668949Z" level=info msg="metadata content store policy set" policy=shared May 13 00:20:36.804618 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:20:36.807140 containerd[1571]: time="2025-05-13T00:20:36.807099291Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:20:36.807185 containerd[1571]: time="2025-05-13T00:20:36.807161808Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:20:36.807185 containerd[1571]: time="2025-05-13T00:20:36.807178570Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 13 00:20:36.807229 containerd[1571]: time="2025-05-13T00:20:36.807194059Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:20:36.807229 containerd[1571]: time="2025-05-13T00:20:36.807208476Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:20:36.807401 containerd[1571]: time="2025-05-13T00:20:36.807363967Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:20:36.807985 containerd[1571]: time="2025-05-13T00:20:36.807956609Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:20:36.808160 containerd[1571]: time="2025-05-13T00:20:36.808143870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808207800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808227667Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808244739Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808258926Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808274185Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808291227Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808307798Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808323908Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808339908Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808354345Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808377759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808421100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808437301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808612 containerd[1571]: time="2025-05-13T00:20:36.808453281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808469090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808485792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808501090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808516479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808530626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808551014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808567495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:20:36.808871 containerd[1571]: time="2025-05-13T00:20:36.808582383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.808598453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809079886Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809112858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809134268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809148114Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809202025Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809222093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809234746Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809250025Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809263049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809279420Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 13 00:20:36.809360 containerd[1571]: time="2025-05-13T00:20:36.809290691Z" level=info msg="NRI interface is disabled by configuration." May 13 00:20:36.809596 containerd[1571]: time="2025-05-13T00:20:36.809398203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:20:36.809740 containerd[1571]: time="2025-05-13T00:20:36.809689439Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:20:36.810141 containerd[1571]: time="2025-05-13T00:20:36.810124505Z" level=info msg="Connect containerd service" May 13 00:20:36.810184 containerd[1571]: time="2025-05-13T00:20:36.810173046Z" level=info msg="using legacy CRI server" May 13 00:20:36.810204 containerd[1571]: time="2025-05-13T00:20:36.810183316Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:20:36.810284 containerd[1571]: time="2025-05-13T00:20:36.810272192Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:20:36.810859 
containerd[1571]: time="2025-05-13T00:20:36.810833014Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:20:36.811128 containerd[1571]: time="2025-05-13T00:20:36.811022239Z" level=info msg="Start subscribing containerd event" May 13 00:20:36.811128 containerd[1571]: time="2025-05-13T00:20:36.811071872Z" level=info msg="Start recovering state" May 13 00:20:36.811173 containerd[1571]: time="2025-05-13T00:20:36.811149859Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:20:36.811258 containerd[1571]: time="2025-05-13T00:20:36.811245778Z" level=info msg="Start event monitor" May 13 00:20:36.811333 containerd[1571]: time="2025-05-13T00:20:36.811322452Z" level=info msg="Start snapshots syncer" May 13 00:20:36.811379 containerd[1571]: time="2025-05-13T00:20:36.811369681Z" level=info msg="Start cni network conf syncer for default" May 13 00:20:36.811454 containerd[1571]: time="2025-05-13T00:20:36.811442487Z" level=info msg="Start streaming server" May 13 00:20:36.811607 containerd[1571]: time="2025-05-13T00:20:36.811593481Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:20:36.811783 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:20:36.812286 containerd[1571]: time="2025-05-13T00:20:36.811693098Z" level=info msg="containerd successfully booted in 0.046626s" May 13 00:20:36.815789 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:20:36.825687 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:20:36.828182 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 00:20:36.829864 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:20:36.837601 systemd-networkd[1247]: eth0: Gained IPv6LL May 13 00:20:36.840257 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:20:36.842562 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:20:36.856587 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:20:36.859261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:20:36.864745 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:20:36.886364 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:20:36.887329 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:20:36.889211 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:20:36.893094 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:20:36.988836 tar[1567]: linux-amd64/LICENSE May 13 00:20:36.988950 tar[1567]: linux-amd64/README.md May 13 00:20:37.000439 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:20:37.471485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:20:37.473348 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:20:37.476017 systemd[1]: Startup finished in 6.436s (kernel) + 3.857s (userspace) = 10.293s. 
May 13 00:20:37.486953 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:20:37.910719 kubelet[1672]: E0513 00:20:37.910593 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:37.914531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:37.914870 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:40.969165 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:20:40.985605 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:40308.service - OpenSSH per-connection server daemon (10.0.0.1:40308). May 13 00:20:41.022794 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 40308 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:41.024559 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:41.032925 systemd-logind[1552]: New session 1 of user core. May 13 00:20:41.034010 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:20:41.044571 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:20:41.057022 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:20:41.070643 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:20:41.073515 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:20:41.172274 systemd[1692]: Queued start job for default target default.target. May 13 00:20:41.172725 systemd[1692]: Created slice app.slice - User Application Slice. May 13 00:20:41.172747 systemd[1692]: Reached target paths.target - Paths. May 13 00:20:41.172759 systemd[1692]: Reached target timers.target - Timers. May 13 00:20:41.185520 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:20:41.193501 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:20:41.193582 systemd[1692]: Reached target sockets.target - Sockets. May 13 00:20:41.193599 systemd[1692]: Reached target basic.target - Basic System. May 13 00:20:41.193643 systemd[1692]: Reached target default.target - Main User Target. May 13 00:20:41.193684 systemd[1692]: Startup finished in 114ms. May 13 00:20:41.194185 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:20:41.196035 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:20:41.263730 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:40320.service - OpenSSH per-connection server daemon (10.0.0.1:40320). May 13 00:20:41.299144 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 40320 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:41.300725 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:41.305041 systemd-logind[1552]: New session 2 of user core. May 13 00:20:41.314689 systemd[1]: Started session-2.scope - Session 2 of User core. 
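The kubelet exit above is the expected failure mode on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally generated by kubeadm during "kubeadm init"/"kubeadm join", so the unit keeps failing until that happens. A minimal sketch of the file the error message is looking for — the two header fields are the mandatory ones for a KubeletConfiguration; everything else about this snippet is illustrative, not this machine's real configuration:

    from pathlib import Path

    # Illustrative stand-in only; on a real node kubeadm writes this file.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")
    MINIMAL = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
    )

    if not KUBELET_CONFIG.exists():
        KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
        KUBELET_CONFIG.write_text(MINIMAL)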
May 13 00:20:41.369523 sshd[1704]: pam_unix(sshd:session): session closed for user core May 13 00:20:41.381606 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332). May 13 00:20:41.382097 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:40320.service: Deactivated successfully. May 13 00:20:41.384294 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. May 13 00:20:41.385779 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:20:41.387212 systemd-logind[1552]: Removed session 2. May 13 00:20:41.417237 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:41.419097 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:41.423565 systemd-logind[1552]: New session 3 of user core. May 13 00:20:41.443756 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:20:41.495127 sshd[1709]: pam_unix(sshd:session): session closed for user core May 13 00:20:41.506660 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:40342.service - OpenSSH per-connection server daemon (10.0.0.1:40342). May 13 00:20:41.507424 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:40332.service: Deactivated successfully. May 13 00:20:41.509410 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:20:41.510212 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit. May 13 00:20:41.511849 systemd-logind[1552]: Removed session 3. May 13 00:20:41.544654 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 40342 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:41.546506 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:41.550855 systemd-logind[1552]: New session 4 of user core. May 13 00:20:41.560717 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:20:41.617921 sshd[1718]: pam_unix(sshd:session): session closed for user core May 13 00:20:41.638783 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:40352.service - OpenSSH per-connection server daemon (10.0.0.1:40352). May 13 00:20:41.639340 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:40342.service: Deactivated successfully. May 13 00:20:41.641965 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. May 13 00:20:41.643099 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:20:41.644114 systemd-logind[1552]: Removed session 4. May 13 00:20:41.674759 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 40352 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:41.676451 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:41.680592 systemd-logind[1552]: New session 5 of user core. May 13 00:20:41.690691 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:20:41.749650 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:20:41.750003 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:41.776734 sudo[1732]: pam_unix(sudo:session): session closed for user root May 13 00:20:41.778620 sshd[1725]: pam_unix(sshd:session): session closed for user core May 13 00:20:41.786584 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:40354.service - OpenSSH per-connection server daemon (10.0.0.1:40354). 
May 13 00:20:41.787038 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:40352.service: Deactivated successfully. May 13 00:20:41.789359 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. May 13 00:20:41.790197 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:20:41.791416 systemd-logind[1552]: Removed session 5. May 13 00:20:41.824766 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 40354 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:41.826457 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:41.830410 systemd-logind[1552]: New session 6 of user core. May 13 00:20:41.838621 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:20:41.892958 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:20:41.893311 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:41.897682 sudo[1742]: pam_unix(sudo:session): session closed for user root May 13 00:20:41.904259 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:20:41.904645 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:41.929613 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:20:41.931319 auditctl[1745]: No rules May 13 00:20:41.932652 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:20:41.933035 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:20:41.935139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:20:41.968232 augenrules[1764]: No rules May 13 00:20:41.970131 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:20:41.971585 sudo[1741]: pam_unix(sudo:session): session closed for user root May 13 00:20:41.973577 sshd[1734]: pam_unix(sshd:session): session closed for user core May 13 00:20:41.990648 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:40358.service - OpenSSH per-connection server daemon (10.0.0.1:40358). May 13 00:20:41.991188 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:40354.service: Deactivated successfully. May 13 00:20:41.993061 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:20:41.993744 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. May 13 00:20:41.994903 systemd-logind[1552]: Removed session 6. May 13 00:20:42.024080 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 40358 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:20:42.025736 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:20:42.029741 systemd-logind[1552]: New session 7 of user core. May 13 00:20:42.039632 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:20:42.093934 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:20:42.094547 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:20:42.371625 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 00:20:42.371882 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:20:42.637288 dockerd[1796]: time="2025-05-13T00:20:42.637145732Z" level=info msg="Starting up" May 13 00:20:43.341308 dockerd[1796]: time="2025-05-13T00:20:43.341262253Z" level=info msg="Loading containers: start." May 13 00:20:43.454418 kernel: Initializing XFRM netlink socket May 13 00:20:43.537533 systemd-networkd[1247]: docker0: Link UP May 13 00:20:43.561398 dockerd[1796]: time="2025-05-13T00:20:43.561324847Z" level=info msg="Loading containers: done." May 13 00:20:43.577499 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck294969746-merged.mount: Deactivated successfully. May 13 00:20:43.580201 dockerd[1796]: time="2025-05-13T00:20:43.580154747Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:20:43.580279 dockerd[1796]: time="2025-05-13T00:20:43.580258161Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:20:43.580407 dockerd[1796]: time="2025-05-13T00:20:43.580370922Z" level=info msg="Daemon has completed initialization" May 13 00:20:43.791608 dockerd[1796]: time="2025-05-13T00:20:43.791470395Z" level=info msg="API listen on /run/docker.sock" May 13 00:20:43.792299 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:20:44.533349 containerd[1571]: time="2025-05-13T00:20:44.533138512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:20:45.300488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360159172.mount: Deactivated successfully. 
May 13 00:20:46.565829 containerd[1571]: time="2025-05-13T00:20:46.565766461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:46.566622 containerd[1571]: time="2025-05-13T00:20:46.566560120Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 13 00:20:46.567851 containerd[1571]: time="2025-05-13T00:20:46.567800697Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:46.570349 containerd[1571]: time="2025-05-13T00:20:46.570305715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:46.571536 containerd[1571]: time="2025-05-13T00:20:46.571493033Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.03831693s" May 13 00:20:46.571536 containerd[1571]: time="2025-05-13T00:20:46.571534180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:20:46.594241 containerd[1571]: time="2025-05-13T00:20:46.594206527Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:20:48.099434 containerd[1571]: time="2025-05-13T00:20:48.099345848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:48.100240 containerd[1571]: time="2025-05-13T00:20:48.100203436Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 13 00:20:48.103811 containerd[1571]: time="2025-05-13T00:20:48.101661241Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:48.105110 containerd[1571]: time="2025-05-13T00:20:48.105071467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:48.106594 containerd[1571]: time="2025-05-13T00:20:48.106556172Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.512310322s" May 13 00:20:48.106594 containerd[1571]: time="2025-05-13T00:20:48.106593122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 
00:20:48.128578 containerd[1571]: time="2025-05-13T00:20:48.128538004Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:20:48.164917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:20:48.172526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:20:48.320896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:20:48.325495 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:20:48.557914 kubelet[2035]: E0513 00:20:48.557772 2035 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:48.564504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:48.564783 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:50.111840 containerd[1571]: time="2025-05-13T00:20:50.111770982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:50.164509 containerd[1571]: time="2025-05-13T00:20:50.164441166Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 13 00:20:50.224896 containerd[1571]: time="2025-05-13T00:20:50.224846159Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:50.244461 containerd[1571]: time="2025-05-13T00:20:50.244418180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:50.245555 containerd[1571]: time="2025-05-13T00:20:50.245506892Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.116935265s" May 13 00:20:50.245616 containerd[1571]: time="2025-05-13T00:20:50.245555053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:20:50.270733 containerd[1571]: time="2025-05-13T00:20:50.270690930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:20:53.304920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233136259.mount: Deactivated successfully. 
May 13 00:20:55.225126 containerd[1571]: time="2025-05-13T00:20:55.225049912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:55.225880 containerd[1571]: time="2025-05-13T00:20:55.225843691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 13 00:20:55.227083 containerd[1571]: time="2025-05-13T00:20:55.227056245Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:55.229141 containerd[1571]: time="2025-05-13T00:20:55.229110789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:55.229691 containerd[1571]: time="2025-05-13T00:20:55.229662734Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 4.958933742s" May 13 00:20:55.229722 containerd[1571]: time="2025-05-13T00:20:55.229689615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:20:55.251129 containerd[1571]: time="2025-05-13T00:20:55.251087030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:20:56.813625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183901381.mount: Deactivated successfully. 
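The "Pulled image ... in ..." records above include both the image size in bytes and the elapsed time, so effective registry throughput falls out directly; kube-proxy's 29,184,836 bytes in ~4.96 s works out to about 5.6 MiB/s. A small sketch of that calculation, with sizes and durations copied from the log:

    pulls = {
        "kube-apiserver:v1.30.12": (32671673, 2.03831693),
        "kube-proxy:v1.30.12": (29184836, 4.958933742),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")
    # kube-apiserver:v1.30.12: 15.3 MiB/s
    # kube-proxy:v1.30.12: 5.6 MiB/s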
May 13 00:20:57.474680 containerd[1571]: time="2025-05-13T00:20:57.474618960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:57.475513 containerd[1571]: time="2025-05-13T00:20:57.475482260Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 00:20:57.476754 containerd[1571]: time="2025-05-13T00:20:57.476707077Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:57.479405 containerd[1571]: time="2025-05-13T00:20:57.479355264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:57.482502 containerd[1571]: time="2025-05-13T00:20:57.481465162Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.230341062s" May 13 00:20:57.482502 containerd[1571]: time="2025-05-13T00:20:57.481504876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:20:57.504094 containerd[1571]: time="2025-05-13T00:20:57.504052199Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:20:58.643933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:20:58.653537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:20:58.789469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:20:58.795501 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:20:58.840758 kubelet[2132]: E0513 00:20:58.840690 2132 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:20:58.845161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:20:58.845446 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:20:59.249783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492393156.mount: Deactivated successfully. 
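The kubelet restart cadence is visible in the timestamps of the "Scheduled restart job" entries: counter 1 is scheduled at 00:20:48.164917 and counter 2 at 00:20:58.643933, about 10.5 s apart, consistent with a unit using Restart=always and RestartSec=10 (the unit settings are an assumption; only the timestamps come from the log):

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    t1 = datetime.strptime("00:20:48.164917", FMT)
    t2 = datetime.strptime("00:20:58.643933", FMT)
    print((t2 - t1).total_seconds())  # 10.479016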
May 13 00:20:59.255754 containerd[1571]: time="2025-05-13T00:20:59.255713344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:59.256683 containerd[1571]: time="2025-05-13T00:20:59.256646675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 13 00:20:59.257916 containerd[1571]: time="2025-05-13T00:20:59.257889366Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:59.260188 containerd[1571]: time="2025-05-13T00:20:59.260142672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:20:59.260762 containerd[1571]: time="2025-05-13T00:20:59.260735404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.756639724s" May 13 00:20:59.260806 containerd[1571]: time="2025-05-13T00:20:59.260763006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:20:59.280708 containerd[1571]: time="2025-05-13T00:20:59.280657903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:21:00.332202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1671410672.mount: Deactivated successfully. May 13 00:21:02.209832 containerd[1571]: time="2025-05-13T00:21:02.209750644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:02.210608 containerd[1571]: time="2025-05-13T00:21:02.210526419Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 13 00:21:02.212032 containerd[1571]: time="2025-05-13T00:21:02.211981398Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:02.214980 containerd[1571]: time="2025-05-13T00:21:02.214928827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:02.216128 containerd[1571]: time="2025-05-13T00:21:02.216085837Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.93538324s" May 13 00:21:02.216168 containerd[1571]: time="2025-05-13T00:21:02.216129108Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:21:04.291504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 00:21:04.303599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:04.321071 systemd[1]: Reloading requested from client PID 2283 ('systemctl') (unit session-7.scope)... May 13 00:21:04.321088 systemd[1]: Reloading... May 13 00:21:04.404447 zram_generator::config[2325]: No configuration found. May 13 00:21:04.757847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:21:04.833749 systemd[1]: Reloading finished in 512 ms. May 13 00:21:04.875771 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:21:04.875874 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:21:04.876224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:04.878016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:05.019940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:05.025156 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:21:05.065379 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:05.065379 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:21:05.065379 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:05.065800 kubelet[2382]: I0513 00:21:05.065443 2382 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:21:05.274097 kubelet[2382]: I0513 00:21:05.273968 2382 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:21:05.274097 kubelet[2382]: I0513 00:21:05.274004 2382 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:21:05.274275 kubelet[2382]: I0513 00:21:05.274251 2382 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:21:05.286845 kubelet[2382]: I0513 00:21:05.286783 2382 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:21:05.287481 kubelet[2382]: E0513 00:21:05.287262 2382 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.297428 kubelet[2382]: I0513 00:21:05.297408 2382 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:21:05.298248 kubelet[2382]: I0513 00:21:05.298200 2382 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:21:05.298436 kubelet[2382]: I0513 00:21:05.298233 2382 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:21:05.298518 kubelet[2382]: I0513 00:21:05.298443 2382 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:21:05.298518 kubelet[2382]: I0513 00:21:05.298452 2382 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:21:05.298592 kubelet[2382]: I0513 00:21:05.298575 2382 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:05.299184 kubelet[2382]: I0513 00:21:05.299151 2382 kubelet.go:400] "Attempting to sync node with API server" May 13 00:21:05.299184 kubelet[2382]: I0513 00:21:05.299175 2382 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:21:05.299248 kubelet[2382]: I0513 00:21:05.299194 2382 kubelet.go:312] "Adding apiserver pod source" May 13 00:21:05.299248 kubelet[2382]: I0513 00:21:05.299213 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:21:05.303097 kubelet[2382]: W0513 00:21:05.302628 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.303097 kubelet[2382]: E0513 00:21:05.302686 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.303097 kubelet[2382]: W0513 00:21:05.302738 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.303097 kubelet[2382]: E0513 00:21:05.302767 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.305411 kubelet[2382]: I0513 00:21:05.305360 2382 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:21:05.307094 kubelet[2382]: I0513 00:21:05.307071 2382 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:21:05.307175 kubelet[2382]: W0513 00:21:05.307141 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:21:05.308128 kubelet[2382]: I0513 00:21:05.308104 2382 server.go:1264] "Started kubelet" May 13 00:21:05.308557 kubelet[2382]: I0513 00:21:05.308168 2382 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:21:05.309115 kubelet[2382]: I0513 00:21:05.309082 2382 server.go:455] "Adding debug handlers to kubelet server" May 13 00:21:05.310461 kubelet[2382]: I0513 00:21:05.309995 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:21:05.310461 kubelet[2382]: I0513 00:21:05.310248 2382 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:21:05.310754 kubelet[2382]: I0513 00:21:05.310732 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:21:05.312593 kubelet[2382]: E0513 00:21:05.312574 2382 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:21:05.312802 kubelet[2382]: E0513 00:21:05.312780 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" May 13 00:21:05.312887 kubelet[2382]: I0513 00:21:05.312877 2382 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:21:05.312956 kubelet[2382]: E0513 00:21:05.312059 2382 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:21:05.312994 kubelet[2382]: I0513 00:21:05.312964 2382 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:21:05.313073 kubelet[2382]: I0513 00:21:05.313051 2382 factory.go:221] Registration of the systemd container factory successfully May 13 00:21:05.313158 kubelet[2382]: I0513 00:21:05.313137 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:21:05.313287 kubelet[2382]: W0513 00:21:05.313251 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.313319 kubelet[2382]: E0513 00:21:05.313293 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.313319 kubelet[2382]: I0513 00:21:05.313144 2382 reconciler.go:26] "Reconciler: start to sync state" May 13 00:21:05.314408 kubelet[2382]: E0513 00:21:05.314302 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee4047502c56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:21:05.308085334 +0000 UTC m=+0.279008922,LastTimestamp:2025-05-13 00:21:05.308085334 +0000 UTC m=+0.279008922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:21:05.315121 kubelet[2382]: I0513 00:21:05.315097 2382 factory.go:221] Registration of the containerd container factory successfully May 13 00:21:05.327900 kubelet[2382]: I0513 00:21:05.327854 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:21:05.329411 kubelet[2382]: I0513 00:21:05.329247 2382 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:21:05.329411 kubelet[2382]: I0513 00:21:05.329274 2382 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:21:05.329411 kubelet[2382]: I0513 00:21:05.329289 2382 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:21:05.329411 kubelet[2382]: E0513 00:21:05.329335 2382 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:21:05.329917 kubelet[2382]: W0513 00:21:05.329894 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.329951 kubelet[2382]: E0513 00:21:05.329920 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:05.339271 kubelet[2382]: I0513 00:21:05.339246 2382 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:21:05.339271 kubelet[2382]: I0513 00:21:05.339264 2382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:21:05.339342 kubelet[2382]: I0513 00:21:05.339280 2382 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:05.414523 kubelet[2382]: I0513 00:21:05.414482 2382 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:21:05.414868 kubelet[2382]: E0513 00:21:05.414844 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" May 13 00:21:05.429961 kubelet[2382]: E0513 00:21:05.429928 2382 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:21:05.513368 kubelet[2382]: E0513 00:21:05.513333 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" May 13 00:21:05.616655 kubelet[2382]: I0513 00:21:05.616567 2382 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:21:05.616859 kubelet[2382]: E0513 00:21:05.616822 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" May 13 00:21:05.630024 kubelet[2382]: E0513 00:21:05.629972 2382 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:21:05.914706 kubelet[2382]: E0513 00:21:05.914658 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" May 13 00:21:06.018378 kubelet[2382]: I0513 00:21:06.018327 2382 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:21:06.018716 kubelet[2382]: E0513 00:21:06.018678 2382 kubelet_node_status.go:96] "Unable to register node with API 
server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" May 13 00:21:06.024147 kubelet[2382]: E0513 00:21:06.024056 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee4047502c56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:21:05.308085334 +0000 UTC m=+0.279008922,LastTimestamp:2025-05-13 00:21:05.308085334 +0000 UTC m=+0.279008922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:21:06.030183 kubelet[2382]: E0513 00:21:06.030156 2382 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:21:06.104745 kubelet[2382]: W0513 00:21:06.104687 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.104745 kubelet[2382]: E0513 00:21:06.104740 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.113067 kubelet[2382]: W0513 00:21:06.113027 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.113067 kubelet[2382]: E0513 00:21:06.113063 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.194062 kubelet[2382]: W0513 00:21:06.193925 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.194062 kubelet[2382]: E0513 00:21:06.193963 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.379547 kubelet[2382]: I0513 00:21:06.379498 2382 policy_none.go:49] "None policy: Start" May 13 00:21:06.380053 kubelet[2382]: I0513 00:21:06.380012 2382 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:21:06.380053 kubelet[2382]: I0513 00:21:06.380047 2382 state_mem.go:35] "Initializing new in-memory state store" May 13 00:21:06.386844 kubelet[2382]: I0513 00:21:06.386813 2382 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:21:06.387087 kubelet[2382]: I0513 00:21:06.387042 2382 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:21:06.387192 kubelet[2382]: I0513 00:21:06.387174 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:21:06.388770 kubelet[2382]: E0513 00:21:06.388737 2382 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:21:06.715813 kubelet[2382]: E0513 00:21:06.715756 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" May 13 00:21:06.785293 kubelet[2382]: W0513 00:21:06.785225 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.785343 kubelet[2382]: E0513 00:21:06.785302 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:06.820005 kubelet[2382]: I0513 00:21:06.819966 2382 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:21:06.820248 kubelet[2382]: E0513 00:21:06.820216 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" May 13 00:21:06.830573 kubelet[2382]: I0513 00:21:06.830528 2382 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:21:06.831321 kubelet[2382]: I0513 00:21:06.831279 2382 topology_manager.go:215] "Topology Admit Handler" podUID="ddf9daec16b2c7128bfd96ee55e4b30f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:21:06.832308 kubelet[2382]: I0513 00:21:06.832254 2382 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:21:06.923193 kubelet[2382]: I0513 00:21:06.923123 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:21:06.923193 kubelet[2382]: I0513 00:21:06.923172 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:06.923193 kubelet[2382]: I0513 00:21:06.923189 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:06.923193 kubelet[2382]: I0513 00:21:06.923204 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:06.923464 kubelet[2382]: I0513 00:21:06.923221 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:06.923464 kubelet[2382]: I0513 00:21:06.923237 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddf9daec16b2c7128bfd96ee55e4b30f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddf9daec16b2c7128bfd96ee55e4b30f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:06.923464 kubelet[2382]: I0513 00:21:06.923251 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddf9daec16b2c7128bfd96ee55e4b30f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddf9daec16b2c7128bfd96ee55e4b30f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:06.923464 kubelet[2382]: I0513 00:21:06.923268 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddf9daec16b2c7128bfd96ee55e4b30f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddf9daec16b2c7128bfd96ee55e4b30f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:06.923464 kubelet[2382]: I0513 00:21:06.923295 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:07.136235 kubelet[2382]: E0513 00:21:07.136134 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:07.136235 kubelet[2382]: E0513 00:21:07.136138 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:07.136959 containerd[1571]: time="2025-05-13T00:21:07.136926828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddf9daec16b2c7128bfd96ee55e4b30f,Namespace:kube-system,Attempt:0,}" May 13 00:21:07.137233 containerd[1571]: time="2025-05-13T00:21:07.136973375Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:21:07.137991 kubelet[2382]: E0513 00:21:07.137976 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:07.138323 containerd[1571]: time="2025-05-13T00:21:07.138222568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:21:07.321294 kubelet[2382]: E0513 00:21:07.321237 2382 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:07.807304 kubelet[2382]: W0513 00:21:07.807249 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:07.807304 kubelet[2382]: E0513 00:21:07.807296 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused May 13 00:21:07.898326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068769425.mount: Deactivated successfully. May 13 00:21:07.905441 containerd[1571]: time="2025-05-13T00:21:07.905378684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:07.907577 containerd[1571]: time="2025-05-13T00:21:07.907526733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:21:07.908679 containerd[1571]: time="2025-05-13T00:21:07.908642666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:07.909781 containerd[1571]: time="2025-05-13T00:21:07.909744754Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:07.910961 containerd[1571]: time="2025-05-13T00:21:07.910923294Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:07.911978 containerd[1571]: time="2025-05-13T00:21:07.911942827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:21:07.912928 containerd[1571]: time="2025-05-13T00:21:07.912887529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 13 00:21:07.915582 containerd[1571]: time="2025-05-13T00:21:07.915543591Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:21:07.917690 containerd[1571]: time="2025-05-13T00:21:07.917643199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 780.641951ms" May 13 00:21:07.918234 containerd[1571]: time="2025-05-13T00:21:07.918203129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.183166ms" May 13 00:21:07.918920 containerd[1571]: time="2025-05-13T00:21:07.918879688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 780.613578ms" May 13 00:21:08.093207 containerd[1571]: time="2025-05-13T00:21:08.093006079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:08.094575 containerd[1571]: time="2025-05-13T00:21:08.094480956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095534783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095579777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095590046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095668293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.094928896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.094966657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.094976696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095060112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095309119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:08.096265 containerd[1571]: time="2025-05-13T00:21:08.095454482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:08.153081 containerd[1571]: time="2025-05-13T00:21:08.153041379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fe40c4ba359a5e88b4e932e633615a3f9c6ccf489363ffa8bd0e80ec7eb7347\"" May 13 00:21:08.154146 kubelet[2382]: E0513 00:21:08.154106 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:08.156864 containerd[1571]: time="2025-05-13T00:21:08.156831528Z" level=info msg="CreateContainer within sandbox \"0fe40c4ba359a5e88b4e932e633615a3f9c6ccf489363ffa8bd0e80ec7eb7347\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:21:08.157713 containerd[1571]: time="2025-05-13T00:21:08.157661454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfbef325bbd468bf836fa2af6d6e3ca32b5cebb28469b257201040cf6a8ff58b\"" May 13 00:21:08.158326 kubelet[2382]: E0513 00:21:08.158292 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:08.160325 containerd[1571]: time="2025-05-13T00:21:08.160302057Z" level=info msg="CreateContainer within sandbox \"dfbef325bbd468bf836fa2af6d6e3ca32b5cebb28469b257201040cf6a8ff58b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:21:08.162794 containerd[1571]: time="2025-05-13T00:21:08.162765107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddf9daec16b2c7128bfd96ee55e4b30f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d26dacffeae736292b7c514783eef82438b5feea22b779369a492aaf4a48ab30\"" May 13 00:21:08.164019 kubelet[2382]: E0513 00:21:08.163992 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:08.165664 containerd[1571]: time="2025-05-13T00:21:08.165567163Z" level=info msg="CreateContainer within sandbox \"d26dacffeae736292b7c514783eef82438b5feea22b779369a492aaf4a48ab30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:21:08.181517 containerd[1571]: time="2025-05-13T00:21:08.181479821Z" level=info msg="CreateContainer within sandbox \"dfbef325bbd468bf836fa2af6d6e3ca32b5cebb28469b257201040cf6a8ff58b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"98c276e9543cc256bcca028177c1ed9d255e0802f6a978a3af93295dbbe87465\"" May 13 00:21:08.181948 containerd[1571]: time="2025-05-13T00:21:08.181913184Z" level=info msg="StartContainer for \"98c276e9543cc256bcca028177c1ed9d255e0802f6a978a3af93295dbbe87465\"" May 13 00:21:08.183696 containerd[1571]: 
time="2025-05-13T00:21:08.183666643Z" level=info msg="CreateContainer within sandbox \"0fe40c4ba359a5e88b4e932e633615a3f9c6ccf489363ffa8bd0e80ec7eb7347\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5779bbb8a55549bedff10422c63253a0a3e76df9272874c9ca7f32d1c3750610\"" May 13 00:21:08.184090 containerd[1571]: time="2025-05-13T00:21:08.184054380Z" level=info msg="StartContainer for \"5779bbb8a55549bedff10422c63253a0a3e76df9272874c9ca7f32d1c3750610\"" May 13 00:21:08.192646 containerd[1571]: time="2025-05-13T00:21:08.192603396Z" level=info msg="CreateContainer within sandbox \"d26dacffeae736292b7c514783eef82438b5feea22b779369a492aaf4a48ab30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"096112fff9ef321dda5ec402ba7dcbf7d79776b05f5463ff8dce606629c75780\"" May 13 00:21:08.193416 containerd[1571]: time="2025-05-13T00:21:08.193216566Z" level=info msg="StartContainer for \"096112fff9ef321dda5ec402ba7dcbf7d79776b05f5463ff8dce606629c75780\"" May 13 00:21:08.267484 containerd[1571]: time="2025-05-13T00:21:08.267442202Z" level=info msg="StartContainer for \"096112fff9ef321dda5ec402ba7dcbf7d79776b05f5463ff8dce606629c75780\" returns successfully" May 13 00:21:08.267772 containerd[1571]: time="2025-05-13T00:21:08.267614775Z" level=info msg="StartContainer for \"5779bbb8a55549bedff10422c63253a0a3e76df9272874c9ca7f32d1c3750610\" returns successfully" May 13 00:21:08.277670 containerd[1571]: time="2025-05-13T00:21:08.277625663Z" level=info msg="StartContainer for \"98c276e9543cc256bcca028177c1ed9d255e0802f6a978a3af93295dbbe87465\" returns successfully" May 13 00:21:08.351610 kubelet[2382]: E0513 00:21:08.348330 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:08.352369 kubelet[2382]: E0513 00:21:08.352090 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:08.354676 kubelet[2382]: E0513 00:21:08.354653 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:08.423402 kubelet[2382]: I0513 00:21:08.421800 2382 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:21:09.113893 kubelet[2382]: E0513 00:21:09.113850 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:21:09.211943 kubelet[2382]: I0513 00:21:09.211903 2382 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:21:09.304443 kubelet[2382]: I0513 00:21:09.304380 2382 apiserver.go:52] "Watching apiserver" May 13 00:21:09.314008 kubelet[2382]: I0513 00:21:09.313985 2382 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:21:09.360357 kubelet[2382]: E0513 00:21:09.360316 2382 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:21:09.360760 kubelet[2382]: E0513 00:21:09.360733 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:10.956351 systemd[1]: Reloading requested from client PID 2657 ('systemctl') (unit session-7.scope)... May 13 00:21:10.956369 systemd[1]: Reloading... May 13 00:21:11.027415 zram_generator::config[2699]: No configuration found. May 13 00:21:11.149529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:21:11.232659 systemd[1]: Reloading finished in 275 ms. May 13 00:21:11.270171 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:11.289190 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:21:11.289623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:11.301918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:11.448056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:11.452698 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:21:11.491735 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:11.491735 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:21:11.491735 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:21:11.491735 kubelet[2751]: I0513 00:21:11.490442 2751 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:21:11.495124 kubelet[2751]: I0513 00:21:11.495100 2751 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:21:11.495124 kubelet[2751]: I0513 00:21:11.495120 2751 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:21:11.495298 kubelet[2751]: I0513 00:21:11.495269 2751 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:21:11.496595 kubelet[2751]: I0513 00:21:11.496570 2751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:21:11.498397 kubelet[2751]: I0513 00:21:11.498088 2751 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:21:11.506423 kubelet[2751]: I0513 00:21:11.506400 2751 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:21:11.506923 kubelet[2751]: I0513 00:21:11.506887 2751 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:21:11.507061 kubelet[2751]: I0513 00:21:11.506917 2751 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:21:11.507137 kubelet[2751]: I0513 00:21:11.507075 2751 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:21:11.507137 kubelet[2751]: I0513 00:21:11.507084 2751 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:21:11.507137 kubelet[2751]: I0513 00:21:11.507122 2751 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:11.507229 kubelet[2751]: I0513 00:21:11.507216 2751 kubelet.go:400] "Attempting to sync node with API server" May 13 00:21:11.507229 kubelet[2751]: I0513 00:21:11.507228 2751 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:21:11.507271 kubelet[2751]: I0513 00:21:11.507249 2751 kubelet.go:312] "Adding apiserver pod source" May 13 00:21:11.507271 kubelet[2751]: I0513 00:21:11.507267 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.507755 2751 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.507911 2751 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.508574 2751 server.go:1264] "Started kubelet" May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.508747 2751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.508861 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.509207 2751 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:21:11.509983 kubelet[2751]: I0513 00:21:11.509690 2751 server.go:455] "Adding debug handlers to kubelet server" May 13 00:21:11.513592 kubelet[2751]: I0513 00:21:11.513566 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:21:11.519523 kubelet[2751]: I0513 00:21:11.519499 2751 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:21:11.520531 kubelet[2751]: I0513 00:21:11.520504 2751 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:21:11.520771 kubelet[2751]: I0513 00:21:11.520750 2751 reconciler.go:26] "Reconciler: start to sync state" May 13 00:21:11.521712 kubelet[2751]: I0513 00:21:11.521686 2751 factory.go:221] Registration of the systemd container factory successfully May 13 00:21:11.521798 kubelet[2751]: I0513 00:21:11.521771 2751 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:21:11.523016 kubelet[2751]: E0513 00:21:11.522991 2751 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:21:11.523593 kubelet[2751]: I0513 00:21:11.523555 2751 factory.go:221] Registration of the containerd container factory successfully May 13 00:21:11.527336 kubelet[2751]: I0513 00:21:11.527301 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:21:11.528488 kubelet[2751]: I0513 00:21:11.528466 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:21:11.528488 kubelet[2751]: I0513 00:21:11.528489 2751 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:21:11.528554 kubelet[2751]: I0513 00:21:11.528503 2751 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:21:11.528593 kubelet[2751]: E0513 00:21:11.528562 2751 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:21:11.567304 kubelet[2751]: I0513 00:21:11.567263 2751 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:21:11.567304 kubelet[2751]: I0513 00:21:11.567287 2751 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:21:11.567304 kubelet[2751]: I0513 00:21:11.567304 2751 state_mem.go:36] "Initialized new in-memory state store" May 13 00:21:11.567520 kubelet[2751]: I0513 00:21:11.567457 2751 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:21:11.567520 kubelet[2751]: I0513 00:21:11.567467 2751 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:21:11.567520 kubelet[2751]: I0513 00:21:11.567484 2751 policy_none.go:49] "None policy: Start" May 13 00:21:11.568009 kubelet[2751]: I0513 00:21:11.567991 2751 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:21:11.568009 kubelet[2751]: I0513 00:21:11.568011 2751 state_mem.go:35] "Initializing new in-memory state store" May 13 00:21:11.568169 kubelet[2751]: I0513 00:21:11.568153 2751 state_mem.go:75] "Updated machine memory state" May 13 00:21:11.569796 kubelet[2751]: I0513 00:21:11.569770 2751 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:21:11.569976 
kubelet[2751]: I0513 00:21:11.569944 2751 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:21:11.570053 kubelet[2751]: I0513 00:21:11.570039 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:21:11.624374 kubelet[2751]: I0513 00:21:11.624337 2751 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:21:11.629670 kubelet[2751]: I0513 00:21:11.629618 2751 topology_manager.go:215] "Topology Admit Handler" podUID="ddf9daec16b2c7128bfd96ee55e4b30f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:21:11.629731 kubelet[2751]: I0513 00:21:11.629712 2751 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:21:11.629793 kubelet[2751]: I0513 00:21:11.629769 2751 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:21:11.721631 kubelet[2751]: I0513 00:21:11.721512 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:11.721631 kubelet[2751]: I0513 00:21:11.721547 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:11.721631 kubelet[2751]: I0513 00:21:11.721568 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:21:11.721631 kubelet[2751]: I0513 00:21:11.721583 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddf9daec16b2c7128bfd96ee55e4b30f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddf9daec16b2c7128bfd96ee55e4b30f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:11.721631 kubelet[2751]: I0513 00:21:11.721597 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:11.722048 kubelet[2751]: I0513 00:21:11.721611 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:11.722048 kubelet[2751]: I0513 00:21:11.721628 2751 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:21:11.722048 kubelet[2751]: I0513 00:21:11.721643 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddf9daec16b2c7128bfd96ee55e4b30f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddf9daec16b2c7128bfd96ee55e4b30f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:11.722048 kubelet[2751]: I0513 00:21:11.721657 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddf9daec16b2c7128bfd96ee55e4b30f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddf9daec16b2c7128bfd96ee55e4b30f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:21:11.723724 kubelet[2751]: I0513 00:21:11.723522 2751 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:21:11.723724 kubelet[2751]: I0513 00:21:11.723608 2751 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:21:12.022836 kubelet[2751]: E0513 00:21:12.022739 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:12.022836 kubelet[2751]: E0513 00:21:12.022738 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:12.022998 kubelet[2751]: E0513 00:21:12.022871 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:12.509850 kubelet[2751]: I0513 00:21:12.507854 2751 apiserver.go:52] "Watching apiserver" May 13 00:21:12.522534 kubelet[2751]: I0513 00:21:12.522489 2751 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:21:12.543833 kubelet[2751]: E0513 00:21:12.541415 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:12.543833 kubelet[2751]: E0513 00:21:12.542320 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:12.548323 kubelet[2751]: E0513 00:21:12.548278 2751 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:21:12.549412 kubelet[2751]: E0513 00:21:12.548686 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:12.566034 kubelet[2751]: I0513 00:21:12.565057 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.565038045 
podStartE2EDuration="1.565038045s" podCreationTimestamp="2025-05-13 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:12.564937503 +0000 UTC m=+1.108399923" watchObservedRunningTime="2025-05-13 00:21:12.565038045 +0000 UTC m=+1.108500455" May 13 00:21:12.575678 kubelet[2751]: I0513 00:21:12.575615 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.575595095 podStartE2EDuration="1.575595095s" podCreationTimestamp="2025-05-13 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:12.575540018 +0000 UTC m=+1.119002438" watchObservedRunningTime="2025-05-13 00:21:12.575595095 +0000 UTC m=+1.119057515" May 13 00:21:12.581375 kubelet[2751]: I0513 00:21:12.581314 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.581288084 podStartE2EDuration="1.581288084s" podCreationTimestamp="2025-05-13 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:12.581132134 +0000 UTC m=+1.124594544" watchObservedRunningTime="2025-05-13 00:21:12.581288084 +0000 UTC m=+1.124750504" May 13 00:21:13.542910 kubelet[2751]: E0513 00:21:13.542876 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:14.543968 kubelet[2751]: E0513 00:21:14.543927 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:16.402768 sudo[1777]: pam_unix(sudo:session): session closed for user root May 13 00:21:16.404590 sshd[1770]: pam_unix(sshd:session): session closed for user core May 13 00:21:16.408506 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:40358.service: Deactivated successfully. May 13 00:21:16.410514 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. May 13 00:21:16.410637 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:21:16.411844 systemd-logind[1552]: Removed session 7. May 13 00:21:20.226360 kubelet[2751]: E0513 00:21:20.226311 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:20.552023 kubelet[2751]: E0513 00:21:20.551876 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:21.594081 kubelet[2751]: E0513 00:21:21.593767 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:22.183627 update_engine[1556]: I20250513 00:21:22.183519 1556 update_attempter.cc:509] Updating boot flags... 
May 13 00:21:22.211697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2843) May 13 00:21:22.244417 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2845) May 13 00:21:22.275443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2845) May 13 00:21:22.554341 kubelet[2751]: E0513 00:21:22.554215 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:22.932704 kubelet[2751]: E0513 00:21:22.932630 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:24.944537 kubelet[2751]: I0513 00:21:24.944485 2751 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:21:24.944984 containerd[1571]: time="2025-05-13T00:21:24.944947738Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:21:24.945252 kubelet[2751]: I0513 00:21:24.945144 2751 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:21:25.807845 kubelet[2751]: I0513 00:21:25.807785 2751 topology_manager.go:215] "Topology Admit Handler" podUID="6706936b-049e-4c21-b1f9-9e5162be49d5" podNamespace="kube-system" podName="kube-proxy-xlvl9" May 13 00:21:25.860548 kubelet[2751]: I0513 00:21:25.860498 2751 topology_manager.go:215] "Topology Admit Handler" podUID="7ced4568-41b4-436e-a6ad-0f0af0c2c4ce" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-cdlhq" May 13 00:21:26.005697 kubelet[2751]: I0513 00:21:26.005663 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ced4568-41b4-436e-a6ad-0f0af0c2c4ce-var-lib-calico\") pod \"tigera-operator-797db67f8-cdlhq\" (UID: \"7ced4568-41b4-436e-a6ad-0f0af0c2c4ce\") " pod="tigera-operator/tigera-operator-797db67f8-cdlhq" May 13 00:21:26.005697 kubelet[2751]: I0513 00:21:26.005698 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6706936b-049e-4c21-b1f9-9e5162be49d5-lib-modules\") pod \"kube-proxy-xlvl9\" (UID: \"6706936b-049e-4c21-b1f9-9e5162be49d5\") " pod="kube-system/kube-proxy-xlvl9" May 13 00:21:26.006146 kubelet[2751]: I0513 00:21:26.005719 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6706936b-049e-4c21-b1f9-9e5162be49d5-xtables-lock\") pod \"kube-proxy-xlvl9\" (UID: \"6706936b-049e-4c21-b1f9-9e5162be49d5\") " pod="kube-system/kube-proxy-xlvl9" May 13 00:21:26.006146 kubelet[2751]: I0513 00:21:26.005739 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrktf\" (UniqueName: \"kubernetes.io/projected/7ced4568-41b4-436e-a6ad-0f0af0c2c4ce-kube-api-access-jrktf\") pod \"tigera-operator-797db67f8-cdlhq\" (UID: \"7ced4568-41b4-436e-a6ad-0f0af0c2c4ce\") " pod="tigera-operator/tigera-operator-797db67f8-cdlhq" May 13 00:21:26.006146 kubelet[2751]: I0513 00:21:26.005769 2751 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6706936b-049e-4c21-b1f9-9e5162be49d5-kube-proxy\") pod \"kube-proxy-xlvl9\" (UID: \"6706936b-049e-4c21-b1f9-9e5162be49d5\") " pod="kube-system/kube-proxy-xlvl9" May 13 00:21:26.006146 kubelet[2751]: I0513 00:21:26.005785 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhzd\" (UniqueName: \"kubernetes.io/projected/6706936b-049e-4c21-b1f9-9e5162be49d5-kube-api-access-zlhzd\") pod \"kube-proxy-xlvl9\" (UID: \"6706936b-049e-4c21-b1f9-9e5162be49d5\") " pod="kube-system/kube-proxy-xlvl9" May 13 00:21:26.121681 kubelet[2751]: E0513 00:21:26.121598 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:26.122109 containerd[1571]: time="2025-05-13T00:21:26.122076135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlvl9,Uid:6706936b-049e-4c21-b1f9-9e5162be49d5,Namespace:kube-system,Attempt:0,}" May 13 00:21:26.146325 containerd[1571]: time="2025-05-13T00:21:26.146245664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:26.146325 containerd[1571]: time="2025-05-13T00:21:26.146291130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:26.146325 containerd[1571]: time="2025-05-13T00:21:26.146313001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:26.146504 containerd[1571]: time="2025-05-13T00:21:26.146476010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:26.170750 containerd[1571]: time="2025-05-13T00:21:26.170694833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-cdlhq,Uid:7ced4568-41b4-436e-a6ad-0f0af0c2c4ce,Namespace:tigera-operator,Attempt:0,}" May 13 00:21:26.183600 containerd[1571]: time="2025-05-13T00:21:26.183554565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlvl9,Uid:6706936b-049e-4c21-b1f9-9e5162be49d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"38939a46a09a99aeb1d97d59c6b25c0c36b7ada8419f70a08419d6e7c4d8cc01\"" May 13 00:21:26.184174 kubelet[2751]: E0513 00:21:26.184143 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:26.186488 containerd[1571]: time="2025-05-13T00:21:26.186454634Z" level=info msg="CreateContainer within sandbox \"38939a46a09a99aeb1d97d59c6b25c0c36b7ada8419f70a08419d6e7c4d8cc01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:21:26.194671 containerd[1571]: time="2025-05-13T00:21:26.194422717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:26.194671 containerd[1571]: time="2025-05-13T00:21:26.194491066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:26.194671 containerd[1571]: time="2025-05-13T00:21:26.194502578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:26.194671 containerd[1571]: time="2025-05-13T00:21:26.194633465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:26.201273 containerd[1571]: time="2025-05-13T00:21:26.201232808Z" level=info msg="CreateContainer within sandbox \"38939a46a09a99aeb1d97d59c6b25c0c36b7ada8419f70a08419d6e7c4d8cc01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23b84fc6deb0fe3199a982b8031d0389c489f693df2a4dc95b1dc0eaaf66f3a2\"" May 13 00:21:26.202201 containerd[1571]: time="2025-05-13T00:21:26.202169180Z" level=info msg="StartContainer for \"23b84fc6deb0fe3199a982b8031d0389c489f693df2a4dc95b1dc0eaaf66f3a2\"" May 13 00:21:26.247968 containerd[1571]: time="2025-05-13T00:21:26.247935469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-cdlhq,Uid:7ced4568-41b4-436e-a6ad-0f0af0c2c4ce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e3ee5e8573b763b8c30c5e598370d217b6241c8932ab4ae32cb3ba4c4dc34b14\"" May 13 00:21:26.251621 containerd[1571]: time="2025-05-13T00:21:26.251566922Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:21:26.265375 containerd[1571]: time="2025-05-13T00:21:26.265341867Z" level=info msg="StartContainer for \"23b84fc6deb0fe3199a982b8031d0389c489f693df2a4dc95b1dc0eaaf66f3a2\" returns successfully" May 13 00:21:26.561783 kubelet[2751]: E0513 00:21:26.561750 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:27.766742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073411819.mount: Deactivated successfully. 
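The operator image pull that completes just below is also reflected in the startup-latency entry at the end of this excerpt, where podStartSLOduration (1.748646439s) is podStartE2EDuration minus the image-pull window. A stdlib-only Go sketch reproducing those numbers from the logged timestamps (illustrative only, not kubelet's implementation):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Timestamps copied from the tigera-operator-797db67f8-cdlhq latency entry below.
	created, _ := time.Parse(layout, "2025-05-13 00:21:25 +0000 UTC")
	pullStart, _ := time.Parse(layout, "2025-05-13 00:21:26.249398358 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2025-05-13 00:21:28.07764594 +0000 UTC")
	watched, _ := time.Parse(layout, "2025-05-13 00:21:28.576894021 +0000 UTC")

	e2e := watched.Sub(created)         // 3.576894021s (podStartE2EDuration)
	slo := e2e - pullEnd.Sub(pullStart) // pull window excluded
	fmt.Println(e2e, slo)               // 3.576894021s 1.748646439s
}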
May 13 00:21:28.071879 containerd[1571]: time="2025-05-13T00:21:28.071748279Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:28.072657 containerd[1571]: time="2025-05-13T00:21:28.072622291Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 00:21:28.073843 containerd[1571]: time="2025-05-13T00:21:28.073814435Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:28.076241 containerd[1571]: time="2025-05-13T00:21:28.076186591Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:28.076838 containerd[1571]: time="2025-05-13T00:21:28.076813856Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.825212298s" May 13 00:21:28.076880 containerd[1571]: time="2025-05-13T00:21:28.076842942Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 00:21:28.078852 containerd[1571]: time="2025-05-13T00:21:28.078826212Z" level=info msg="CreateContainer within sandbox \"e3ee5e8573b763b8c30c5e598370d217b6241c8932ab4ae32cb3ba4c4dc34b14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:21:28.092706 containerd[1571]: time="2025-05-13T00:21:28.092664066Z" level=info msg="CreateContainer within sandbox \"e3ee5e8573b763b8c30c5e598370d217b6241c8932ab4ae32cb3ba4c4dc34b14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c6a9c90377b6fbd816bfdce05629fb441e256da7e203466d5b0ce95ded5a34bc\"" May 13 00:21:28.093163 containerd[1571]: time="2025-05-13T00:21:28.093131981Z" level=info msg="StartContainer for \"c6a9c90377b6fbd816bfdce05629fb441e256da7e203466d5b0ce95ded5a34bc\"" May 13 00:21:28.476823 containerd[1571]: time="2025-05-13T00:21:28.476753987Z" level=info msg="StartContainer for \"c6a9c90377b6fbd816bfdce05629fb441e256da7e203466d5b0ce95ded5a34bc\" returns successfully" May 13 00:21:28.576850 kubelet[2751]: I0513 00:21:28.576788 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xlvl9" podStartSLOduration=3.576770908 podStartE2EDuration="3.576770908s" podCreationTimestamp="2025-05-13 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:26.572262876 +0000 UTC m=+15.115725296" watchObservedRunningTime="2025-05-13 00:21:28.576770908 +0000 UTC m=+17.120233328" May 13 00:21:28.577342 kubelet[2751]: I0513 00:21:28.576899 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-cdlhq" podStartSLOduration=1.748646439 podStartE2EDuration="3.576894021s" podCreationTimestamp="2025-05-13 00:21:25 +0000 UTC" firstStartedPulling="2025-05-13 00:21:26.249398358 +0000 UTC m=+14.792860778" 
lastFinishedPulling="2025-05-13 00:21:28.07764594 +0000 UTC m=+16.621108360" observedRunningTime="2025-05-13 00:21:28.576626705 +0000 UTC m=+17.120089125" watchObservedRunningTime="2025-05-13 00:21:28.576894021 +0000 UTC m=+17.120356441" May 13 00:21:30.960497 kubelet[2751]: I0513 00:21:30.960453 2751 topology_manager.go:215] "Topology Admit Handler" podUID="e6883d39-3219-409a-a957-b5a77382ee41" podNamespace="calico-system" podName="calico-typha-5574968c67-7pbcw" May 13 00:21:31.000759 kubelet[2751]: I0513 00:21:31.000572 2751 topology_manager.go:215] "Topology Admit Handler" podUID="c8904a89-2f5f-4bef-9f99-f191c064efb6" podNamespace="calico-system" podName="calico-node-l6k4r" May 13 00:21:31.138150 kubelet[2751]: I0513 00:21:31.138074 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8td\" (UniqueName: \"kubernetes.io/projected/e6883d39-3219-409a-a957-b5a77382ee41-kube-api-access-fp8td\") pod \"calico-typha-5574968c67-7pbcw\" (UID: \"e6883d39-3219-409a-a957-b5a77382ee41\") " pod="calico-system/calico-typha-5574968c67-7pbcw" May 13 00:21:31.138150 kubelet[2751]: I0513 00:21:31.138123 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-flexvol-driver-host\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138150 kubelet[2751]: I0513 00:21:31.138147 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-var-run-calico\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138372 kubelet[2751]: I0513 00:21:31.138194 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-cni-log-dir\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138372 kubelet[2751]: I0513 00:21:31.138217 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-xtables-lock\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138372 kubelet[2751]: I0513 00:21:31.138247 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-policysync\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138372 kubelet[2751]: I0513 00:21:31.138263 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-var-lib-calico\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138372 kubelet[2751]: I0513 00:21:31.138282 2751 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-cni-bin-dir\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138511 kubelet[2751]: I0513 00:21:31.138342 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e6883d39-3219-409a-a957-b5a77382ee41-typha-certs\") pod \"calico-typha-5574968c67-7pbcw\" (UID: \"e6883d39-3219-409a-a957-b5a77382ee41\") " pod="calico-system/calico-typha-5574968c67-7pbcw" May 13 00:21:31.138511 kubelet[2751]: I0513 00:21:31.138379 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8904a89-2f5f-4bef-9f99-f191c064efb6-tigera-ca-bundle\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138511 kubelet[2751]: I0513 00:21:31.138411 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6883d39-3219-409a-a957-b5a77382ee41-tigera-ca-bundle\") pod \"calico-typha-5574968c67-7pbcw\" (UID: \"e6883d39-3219-409a-a957-b5a77382ee41\") " pod="calico-system/calico-typha-5574968c67-7pbcw" May 13 00:21:31.138511 kubelet[2751]: I0513 00:21:31.138436 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-cni-net-dir\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138511 kubelet[2751]: I0513 00:21:31.138454 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8904a89-2f5f-4bef-9f99-f191c064efb6-lib-modules\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138685 kubelet[2751]: I0513 00:21:31.138471 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c8904a89-2f5f-4bef-9f99-f191c064efb6-node-certs\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.138685 kubelet[2751]: I0513 00:21:31.138489 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gt4\" (UniqueName: \"kubernetes.io/projected/c8904a89-2f5f-4bef-9f99-f191c064efb6-kube-api-access-l6gt4\") pod \"calico-node-l6k4r\" (UID: \"c8904a89-2f5f-4bef-9f99-f191c064efb6\") " pod="calico-system/calico-node-l6k4r" May 13 00:21:31.170849 kubelet[2751]: I0513 00:21:31.170789 2751 topology_manager.go:215] "Topology Admit Handler" podUID="89359810-8cb0-453a-816e-e1df193c8474" podNamespace="calico-system" podName="csi-node-driver-595b2" May 13 00:21:31.171098 kubelet[2751]: E0513 00:21:31.171069 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:31.241593 kubelet[2751]: E0513 00:21:31.241375 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.241593 kubelet[2751]: W0513 00:21:31.241493 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.241593 kubelet[2751]: E0513 00:21:31.241519 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.243731 kubelet[2751]: E0513 00:21:31.242820 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.243731 kubelet[2751]: W0513 00:21:31.242835 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.243731 kubelet[2751]: E0513 00:21:31.243423 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.244074 kubelet[2751]: E0513 00:21:31.243973 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.244074 kubelet[2751]: W0513 00:21:31.243986 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.244074 kubelet[2751]: E0513 00:21:31.243998 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.245408 kubelet[2751]: E0513 00:21:31.245293 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.245408 kubelet[2751]: W0513 00:21:31.245307 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.245408 kubelet[2751]: E0513 00:21:31.245319 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.249974 kubelet[2751]: E0513 00:21:31.249847 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.249974 kubelet[2751]: W0513 00:21:31.249881 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.249974 kubelet[2751]: E0513 00:21:31.249916 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.252974 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257481 kubelet[2751]: W0513 00:21:31.253002 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.253112 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.253427 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257481 kubelet[2751]: W0513 00:21:31.253439 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.253481 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.253782 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257481 kubelet[2751]: W0513 00:21:31.253792 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.253856 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.257481 kubelet[2751]: E0513 00:21:31.254292 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257866 kubelet[2751]: W0513 00:21:31.254302 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.257866 kubelet[2751]: E0513 00:21:31.254316 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.257866 kubelet[2751]: E0513 00:21:31.254577 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257866 kubelet[2751]: W0513 00:21:31.254588 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.257866 kubelet[2751]: E0513 00:21:31.254599 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.257866 kubelet[2751]: E0513 00:21:31.254823 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257866 kubelet[2751]: W0513 00:21:31.254832 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.257866 kubelet[2751]: E0513 00:21:31.254847 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.257866 kubelet[2751]: E0513 00:21:31.255082 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.257866 kubelet[2751]: W0513 00:21:31.255092 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255120 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255372 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.258177 kubelet[2751]: W0513 00:21:31.255396 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255516 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255689 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.258177 kubelet[2751]: W0513 00:21:31.255700 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255715 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255971 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.258177 kubelet[2751]: W0513 00:21:31.255982 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.258177 kubelet[2751]: E0513 00:21:31.255993 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256228 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260650 kubelet[2751]: W0513 00:21:31.256251 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256261 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256511 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260650 kubelet[2751]: W0513 00:21:31.256520 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256537 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256749 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260650 kubelet[2751]: W0513 00:21:31.256758 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256767 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.260650 kubelet[2751]: E0513 00:21:31.256965 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260969 kubelet[2751]: W0513 00:21:31.256974 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.260969 kubelet[2751]: E0513 00:21:31.256985 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.260969 kubelet[2751]: E0513 00:21:31.257192 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260969 kubelet[2751]: W0513 00:21:31.257201 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.260969 kubelet[2751]: E0513 00:21:31.257211 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.260969 kubelet[2751]: E0513 00:21:31.257471 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260969 kubelet[2751]: W0513 00:21:31.257481 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.260969 kubelet[2751]: E0513 00:21:31.257492 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.260969 kubelet[2751]: E0513 00:21:31.257694 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.260969 kubelet[2751]: W0513 00:21:31.257707 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.261274 kubelet[2751]: E0513 00:21:31.257746 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.261274 kubelet[2751]: E0513 00:21:31.258012 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.261274 kubelet[2751]: W0513 00:21:31.258021 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.261274 kubelet[2751]: E0513 00:21:31.258055 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.266450 kubelet[2751]: E0513 00:21:31.264042 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.266450 kubelet[2751]: W0513 00:21:31.264099 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.266450 kubelet[2751]: E0513 00:21:31.264130 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.266450 kubelet[2751]: E0513 00:21:31.264792 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.266450 kubelet[2751]: W0513 00:21:31.264819 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.266450 kubelet[2751]: E0513 00:21:31.265185 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.266450 kubelet[2751]: E0513 00:21:31.266456 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.266450 kubelet[2751]: W0513 00:21:31.266467 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.266994 kubelet[2751]: E0513 00:21:31.266530 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.267838 kubelet[2751]: E0513 00:21:31.267431 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.267838 kubelet[2751]: W0513 00:21:31.267451 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.268904 kubelet[2751]: E0513 00:21:31.268273 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.272122 kubelet[2751]: E0513 00:21:31.270515 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.272122 kubelet[2751]: W0513 00:21:31.270530 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.272122 kubelet[2751]: E0513 00:21:31.271476 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.272122 kubelet[2751]: E0513 00:21:31.271774 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.272122 kubelet[2751]: W0513 00:21:31.271785 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.272122 kubelet[2751]: E0513 00:21:31.271798 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.272896 kubelet[2751]: E0513 00:21:31.272542 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.272896 kubelet[2751]: W0513 00:21:31.272556 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.272896 kubelet[2751]: E0513 00:21:31.272566 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.274285 kubelet[2751]: E0513 00:21:31.274267 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:31.275567 containerd[1571]: time="2025-05-13T00:21:31.275381300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5574968c67-7pbcw,Uid:e6883d39-3219-409a-a957-b5a77382ee41,Namespace:calico-system,Attempt:0,}" May 13 00:21:31.306923 kubelet[2751]: E0513 00:21:31.306896 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:31.307400 containerd[1571]: time="2025-05-13T00:21:31.307347216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6k4r,Uid:c8904a89-2f5f-4bef-9f99-f191c064efb6,Namespace:calico-system,Attempt:0,}" May 13 00:21:31.340922 kubelet[2751]: E0513 00:21:31.340895 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.340922 kubelet[2751]: W0513 00:21:31.340912 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.340993 kubelet[2751]: E0513 00:21:31.340927 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.340993 kubelet[2751]: I0513 00:21:31.340956 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5v2g\" (UniqueName: \"kubernetes.io/projected/89359810-8cb0-453a-816e-e1df193c8474-kube-api-access-x5v2g\") pod \"csi-node-driver-595b2\" (UID: \"89359810-8cb0-453a-816e-e1df193c8474\") " pod="calico-system/csi-node-driver-595b2" May 13 00:21:31.341305 kubelet[2751]: E0513 00:21:31.341262 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.341305 kubelet[2751]: W0513 00:21:31.341291 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.341459 kubelet[2751]: E0513 00:21:31.341329 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.341459 kubelet[2751]: I0513 00:21:31.341370 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89359810-8cb0-453a-816e-e1df193c8474-socket-dir\") pod \"csi-node-driver-595b2\" (UID: \"89359810-8cb0-453a-816e-e1df193c8474\") " pod="calico-system/csi-node-driver-595b2" May 13 00:21:31.341754 kubelet[2751]: E0513 00:21:31.341721 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.341754 kubelet[2751]: W0513 00:21:31.341747 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.341810 kubelet[2751]: E0513 00:21:31.341775 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.341996 kubelet[2751]: E0513 00:21:31.341976 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.341996 kubelet[2751]: W0513 00:21:31.341986 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.342050 kubelet[2751]: E0513 00:21:31.342000 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.342243 kubelet[2751]: E0513 00:21:31.342220 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.342243 kubelet[2751]: W0513 00:21:31.342240 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.342299 kubelet[2751]: E0513 00:21:31.342253 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.342299 kubelet[2751]: I0513 00:21:31.342284 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89359810-8cb0-453a-816e-e1df193c8474-registration-dir\") pod \"csi-node-driver-595b2\" (UID: \"89359810-8cb0-453a-816e-e1df193c8474\") " pod="calico-system/csi-node-driver-595b2" May 13 00:21:31.342528 kubelet[2751]: E0513 00:21:31.342513 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.342528 kubelet[2751]: W0513 00:21:31.342525 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.342587 kubelet[2751]: E0513 00:21:31.342540 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.342587 kubelet[2751]: I0513 00:21:31.342554 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89359810-8cb0-453a-816e-e1df193c8474-varrun\") pod \"csi-node-driver-595b2\" (UID: \"89359810-8cb0-453a-816e-e1df193c8474\") " pod="calico-system/csi-node-driver-595b2" May 13 00:21:31.342810 kubelet[2751]: E0513 00:21:31.342786 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.342810 kubelet[2751]: W0513 00:21:31.342802 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.342862 kubelet[2751]: E0513 00:21:31.342817 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.342999 kubelet[2751]: E0513 00:21:31.342986 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.342999 kubelet[2751]: W0513 00:21:31.342997 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.343050 kubelet[2751]: E0513 00:21:31.343009 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.343223 kubelet[2751]: E0513 00:21:31.343205 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.343223 kubelet[2751]: W0513 00:21:31.343220 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.343286 kubelet[2751]: E0513 00:21:31.343241 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.343444 kubelet[2751]: E0513 00:21:31.343431 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.343444 kubelet[2751]: W0513 00:21:31.343442 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.343493 kubelet[2751]: E0513 00:21:31.343454 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.343657 kubelet[2751]: E0513 00:21:31.343642 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.343657 kubelet[2751]: W0513 00:21:31.343653 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.343705 kubelet[2751]: E0513 00:21:31.343669 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.343705 kubelet[2751]: I0513 00:21:31.343685 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89359810-8cb0-453a-816e-e1df193c8474-kubelet-dir\") pod \"csi-node-driver-595b2\" (UID: \"89359810-8cb0-453a-816e-e1df193c8474\") " pod="calico-system/csi-node-driver-595b2" May 13 00:21:31.343907 kubelet[2751]: E0513 00:21:31.343894 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.343931 kubelet[2751]: W0513 00:21:31.343906 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.343931 kubelet[2751]: E0513 00:21:31.343922 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.344098 kubelet[2751]: E0513 00:21:31.344088 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.344119 kubelet[2751]: W0513 00:21:31.344097 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.344119 kubelet[2751]: E0513 00:21:31.344110 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.344319 kubelet[2751]: E0513 00:21:31.344309 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.344319 kubelet[2751]: W0513 00:21:31.344318 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.344363 kubelet[2751]: E0513 00:21:31.344325 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.344524 kubelet[2751]: E0513 00:21:31.344515 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.344547 kubelet[2751]: W0513 00:21:31.344523 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.344547 kubelet[2751]: E0513 00:21:31.344531 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.444763 kubelet[2751]: E0513 00:21:31.444731 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.444763 kubelet[2751]: W0513 00:21:31.444752 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.444763 kubelet[2751]: E0513 00:21:31.444766 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.445010 kubelet[2751]: E0513 00:21:31.444988 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.445010 kubelet[2751]: W0513 00:21:31.445000 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.445074 kubelet[2751]: E0513 00:21:31.445013 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.445312 kubelet[2751]: E0513 00:21:31.445285 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.445312 kubelet[2751]: W0513 00:21:31.445304 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.445360 kubelet[2751]: E0513 00:21:31.445326 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.445622 kubelet[2751]: E0513 00:21:31.445600 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.445622 kubelet[2751]: W0513 00:21:31.445612 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.445683 kubelet[2751]: E0513 00:21:31.445626 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.445835 kubelet[2751]: E0513 00:21:31.445817 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.445835 kubelet[2751]: W0513 00:21:31.445828 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.445884 kubelet[2751]: E0513 00:21:31.445841 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.446086 kubelet[2751]: E0513 00:21:31.446067 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.446086 kubelet[2751]: W0513 00:21:31.446082 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.446133 kubelet[2751]: E0513 00:21:31.446103 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.446352 kubelet[2751]: E0513 00:21:31.446337 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.446352 kubelet[2751]: W0513 00:21:31.446346 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.446416 kubelet[2751]: E0513 00:21:31.446375 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.446563 kubelet[2751]: E0513 00:21:31.446549 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.446563 kubelet[2751]: W0513 00:21:31.446558 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.446620 kubelet[2751]: E0513 00:21:31.446585 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.446789 kubelet[2751]: E0513 00:21:31.446774 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.446789 kubelet[2751]: W0513 00:21:31.446783 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.446833 kubelet[2751]: E0513 00:21:31.446797 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:21:31.447008 kubelet[2751]: E0513 00:21:31.446990 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.447008 kubelet[2751]: W0513 00:21:31.447002 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.447053 kubelet[2751]: E0513 00:21:31.447016 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
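The repeated FlexVolume probe failures above come from the kubelet exec'ing each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the init argument and parsing its stdout as JSON; the nodeagent~uds binary is missing, so the output is empty. A minimal Go sketch of the failing step, assuming only the standard library (the driverStatus shape is illustrative, not the kubelet's exact type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus loosely mirrors the JSON status object a FlexVolume driver
    // is expected to print on stdout; the exact field set here is illustrative.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The nodeagent~uds binary is absent, so the "init" call produces no
        // output at all; unmarshalling the empty string yields the exact error
        // the kubelet logs: "unexpected end of JSON input".
        var st driverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println(err)
        }
    }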
May 13 00:21:31.549095 kubelet[2751]: E0513 00:21:31.548992 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.549095 kubelet[2751]: W0513 00:21:31.549008 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.549095 kubelet[2751]: E0513 00:21:31.549019 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.566833 kubelet[2751]: E0513 00:21:31.566804 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:21:31.566833 kubelet[2751]: W0513 00:21:31.566824 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:21:31.566833 kubelet[2751]: E0513 00:21:31.566844 2751 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:21:31.641338 containerd[1571]: time="2025-05-13T00:21:31.639376601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:31.641591 containerd[1571]: time="2025-05-13T00:21:31.641436120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:31.641591 containerd[1571]: time="2025-05-13T00:21:31.641497266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:31.641917 containerd[1571]: time="2025-05-13T00:21:31.641867935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:31.645122 containerd[1571]: time="2025-05-13T00:21:31.645035266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:31.645122 containerd[1571]: time="2025-05-13T00:21:31.645084118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:31.645197 containerd[1571]: time="2025-05-13T00:21:31.645096952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:31.645748 containerd[1571]: time="2025-05-13T00:21:31.645257244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:31.696159 containerd[1571]: time="2025-05-13T00:21:31.696092749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6k4r,Uid:c8904a89-2f5f-4bef-9f99-f191c064efb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\"" May 13 00:21:31.697472 kubelet[2751]: E0513 00:21:31.697053 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:31.699872 containerd[1571]: time="2025-05-13T00:21:31.699820658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5574968c67-7pbcw,Uid:e6883d39-3219-409a-a957-b5a77382ee41,Namespace:calico-system,Attempt:0,} returns sandbox id \"153d9493d0dfc8b6aeb9b679ab341597090c1cfd693ccc8a8db260231335f646\"" May 13 00:21:31.700251 containerd[1571]: time="2025-05-13T00:21:31.700216916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:21:31.700618 kubelet[2751]: E0513 00:21:31.700586 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:32.529194 kubelet[2751]: E0513 00:21:32.529158 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:33.274174 containerd[1571]: time="2025-05-13T00:21:33.274126400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:33.275190 containerd[1571]: time="2025-05-13T00:21:33.275148979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 00:21:33.276583 containerd[1571]: time="2025-05-13T00:21:33.276535826Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:33.278712 containerd[1571]: time="2025-05-13T00:21:33.278680402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:33.279267 containerd[1571]: time="2025-05-13T00:21:33.279237182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.578897975s" May 13 00:21:33.279312 containerd[1571]: time="2025-05-13T00:21:33.279265956Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 00:21:33.280351 containerd[1571]: time="2025-05-13T00:21:33.280324183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:21:33.281360 containerd[1571]: time="2025-05-13T00:21:33.281329249Z" level=info msg="CreateContainer within sandbox \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:21:33.300636 containerd[1571]: time="2025-05-13T00:21:33.300588294Z" level=info msg="CreateContainer within sandbox \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d18a0b1fd1027dc16cf0dedfe21e9a115df2e721cc2e9830534d317ebb89977e\"" May 13 00:21:33.301070 containerd[1571]: time="2025-05-13T00:21:33.301045847Z" level=info msg="StartContainer for \"d18a0b1fd1027dc16cf0dedfe21e9a115df2e721cc2e9830534d317ebb89977e\"" May 13 00:21:33.362126 containerd[1571]: time="2025-05-13T00:21:33.362089003Z" level=info msg="StartContainer for \"d18a0b1fd1027dc16cf0dedfe21e9a115df2e721cc2e9830534d317ebb89977e\" returns successfully" May 13 00:21:33.465671 containerd[1571]: time="2025-05-13T00:21:33.465586204Z" level=info msg="shim disconnected" id=d18a0b1fd1027dc16cf0dedfe21e9a115df2e721cc2e9830534d317ebb89977e namespace=k8s.io May 13 00:21:33.465671 containerd[1571]: time="2025-05-13T00:21:33.465643232Z" level=warning msg="cleaning up after shim disconnected" id=d18a0b1fd1027dc16cf0dedfe21e9a115df2e721cc2e9830534d317ebb89977e namespace=k8s.io May 13 00:21:33.465671 containerd[1571]: time="2025-05-13T00:21:33.465652579Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:21:33.579027 kubelet[2751]: E0513 00:21:33.578915 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:34.297005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d18a0b1fd1027dc16cf0dedfe21e9a115df2e721cc2e9830534d317ebb89977e-rootfs.mount: Deactivated successfully. 
May 13 00:21:34.528854 kubelet[2751]: E0513 00:21:34.528776 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:35.281160 containerd[1571]: time="2025-05-13T00:21:35.281100524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:35.281981 containerd[1571]: time="2025-05-13T00:21:35.281926562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 00:21:35.283306 containerd[1571]: time="2025-05-13T00:21:35.283262791Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:35.285447 containerd[1571]: time="2025-05-13T00:21:35.285410300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:35.285998 containerd[1571]: time="2025-05-13T00:21:35.285973542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.005618852s" May 13 00:21:35.286041 containerd[1571]: time="2025-05-13T00:21:35.286000262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 00:21:35.287609 containerd[1571]: time="2025-05-13T00:21:35.287434426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:21:35.293370 containerd[1571]: time="2025-05-13T00:21:35.293350329Z" level=info msg="CreateContainer within sandbox \"153d9493d0dfc8b6aeb9b679ab341597090c1cfd693ccc8a8db260231335f646\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:21:35.309570 containerd[1571]: time="2025-05-13T00:21:35.309521576Z" level=info msg="CreateContainer within sandbox \"153d9493d0dfc8b6aeb9b679ab341597090c1cfd693ccc8a8db260231335f646\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b01bce77a2ba042d93928bf35ecd37ddc27b88fb97cca8d9d59f6f5fd2c9a11e\"" May 13 00:21:35.310022 containerd[1571]: time="2025-05-13T00:21:35.309986413Z" level=info msg="StartContainer for \"b01bce77a2ba042d93928bf35ecd37ddc27b88fb97cca8d9d59f6f5fd2c9a11e\"" May 13 00:21:35.375957 containerd[1571]: time="2025-05-13T00:21:35.375909218Z" level=info msg="StartContainer for \"b01bce77a2ba042d93928bf35ecd37ddc27b88fb97cca8d9d59f6f5fd2c9a11e\" returns successfully" May 13 00:21:35.584034 kubelet[2751]: E0513 00:21:35.583820 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:35.594569 kubelet[2751]: I0513 00:21:35.594505 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-5574968c67-7pbcw" podStartSLOduration=2.008628359 podStartE2EDuration="5.594481644s" podCreationTimestamp="2025-05-13 00:21:30 +0000 UTC" firstStartedPulling="2025-05-13 00:21:31.700963316 +0000 UTC m=+20.244425736" lastFinishedPulling="2025-05-13 00:21:35.286816611 +0000 UTC m=+23.830279021" observedRunningTime="2025-05-13 00:21:35.594417614 +0000 UTC m=+24.137880054" watchObservedRunningTime="2025-05-13 00:21:35.594481644 +0000 UTC m=+24.137944064" May 13 00:21:36.260750 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:41262.service - OpenSSH per-connection server daemon (10.0.0.1:41262). May 13 00:21:36.298900 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 41262 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:36.300521 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:36.304733 systemd-logind[1552]: New session 8 of user core. May 13 00:21:36.313640 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:21:36.427597 sshd[3433]: pam_unix(sshd:session): session closed for user core May 13 00:21:36.431471 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:41262.service: Deactivated successfully. May 13 00:21:36.434414 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:21:36.434566 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. May 13 00:21:36.435560 systemd-logind[1552]: Removed session 8. May 13 00:21:36.529818 kubelet[2751]: E0513 00:21:36.529705 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:36.586691 kubelet[2751]: I0513 00:21:36.586664 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:21:36.587242 kubelet[2751]: E0513 00:21:36.587219 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:38.534715 kubelet[2751]: E0513 00:21:38.534649 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:38.965241 containerd[1571]: time="2025-05-13T00:21:38.965190658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:38.966029 containerd[1571]: time="2025-05-13T00:21:38.965990074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 00:21:38.967235 containerd[1571]: time="2025-05-13T00:21:38.967206415Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:38.969372 containerd[1571]: time="2025-05-13T00:21:38.969323252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 13 00:21:38.969996 containerd[1571]: time="2025-05-13T00:21:38.969965182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.682503914s" May 13 00:21:38.970028 containerd[1571]: time="2025-05-13T00:21:38.969994837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 00:21:38.973454 containerd[1571]: time="2025-05-13T00:21:38.973431030Z" level=info msg="CreateContainer within sandbox \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:21:38.988056 containerd[1571]: time="2025-05-13T00:21:38.988016392Z" level=info msg="CreateContainer within sandbox \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5fa4a77ba26ac788de531caf79aff4d23496d555a7665307c3fc4f29bd1860e4\"" May 13 00:21:38.988510 containerd[1571]: time="2025-05-13T00:21:38.988476259Z" level=info msg="StartContainer for \"5fa4a77ba26ac788de531caf79aff4d23496d555a7665307c3fc4f29bd1860e4\"" May 13 00:21:39.046281 containerd[1571]: time="2025-05-13T00:21:39.046239801Z" level=info msg="StartContainer for \"5fa4a77ba26ac788de531caf79aff4d23496d555a7665307c3fc4f29bd1860e4\" returns successfully" May 13 00:21:39.593551 kubelet[2751]: E0513 00:21:39.593522 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:40.529361 kubelet[2751]: E0513 00:21:40.529322 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:40.594966 kubelet[2751]: E0513 00:21:40.594939 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:41.157509 systemd-resolved[1463]: Under memory pressure, flushing caches. May 13 00:21:41.157564 systemd-resolved[1463]: Flushed all caches. May 13 00:21:41.159415 systemd-journald[1161]: Under memory pressure, flushing caches. May 13 00:21:41.436580 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:58506.service - OpenSSH per-connection server daemon (10.0.0.1:58506). 
May 13 00:21:41.540460 containerd[1571]: time="2025-05-13T00:21:41.540418374Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:21:41.544772 kubelet[2751]: I0513 00:21:41.544741 2751 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:21:41.546965 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 58506 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:41.549269 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:41.557202 systemd-logind[1552]: New session 9 of user core. May 13 00:21:41.564016 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:21:41.569123 kubelet[2751]: I0513 00:21:41.569075 2751 topology_manager.go:215] "Topology Admit Handler" podUID="804eca6f-5c7d-4af6-88ad-51bb90cf494a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8cj4n" May 13 00:21:41.576345 kubelet[2751]: I0513 00:21:41.576241 2751 topology_manager.go:215] "Topology Admit Handler" podUID="ada53867-29c0-47a2-9c8f-49c75067c4ab" podNamespace="calico-system" podName="calico-kube-controllers-58fc868db6-xgp24" May 13 00:21:41.576653 kubelet[2751]: I0513 00:21:41.576584 2751 topology_manager.go:215] "Topology Admit Handler" podUID="2b45f344-c84f-4551-94ba-2fb1ef195e11" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f2474" May 13 00:21:41.576756 kubelet[2751]: I0513 00:21:41.576744 2751 topology_manager.go:215] "Topology Admit Handler" podUID="c9d41bbe-3554-41cc-8544-18a0891b3173" podNamespace="calico-apiserver" podName="calico-apiserver-85bb956b5-b9csj" May 13 00:21:41.577038 kubelet[2751]: I0513 00:21:41.577001 2751 topology_manager.go:215] "Topology Admit Handler" podUID="6d0adc78-c5f4-4e3e-9481-9603992c8e2a" podNamespace="calico-apiserver" podName="calico-apiserver-85bb956b5-c5nvv" May 13 00:21:41.589292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa4a77ba26ac788de531caf79aff4d23496d555a7665307c3fc4f29bd1860e4-rootfs.mount: Deactivated successfully. May 13 00:21:41.596631 containerd[1571]: time="2025-05-13T00:21:41.596556700Z" level=info msg="shim disconnected" id=5fa4a77ba26ac788de531caf79aff4d23496d555a7665307c3fc4f29bd1860e4 namespace=k8s.io May 13 00:21:41.596631 containerd[1571]: time="2025-05-13T00:21:41.596617524Z" level=warning msg="cleaning up after shim disconnected" id=5fa4a77ba26ac788de531caf79aff4d23496d555a7665307c3fc4f29bd1860e4 namespace=k8s.io May 13 00:21:41.596631 containerd[1571]: time="2025-05-13T00:21:41.596626281Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:21:41.613296 containerd[1571]: time="2025-05-13T00:21:41.613235222Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:21:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 00:21:41.691594 sshd[3493]: pam_unix(sshd:session): session closed for user core May 13 00:21:41.695638 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:58506.service: Deactivated successfully. May 13 00:21:41.697870 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. May 13 00:21:41.697949 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:21:41.698879 systemd-logind[1552]: Removed session 9. 
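The containerd reload failure just above amounts to an empty config directory: the CRI plugin scans /etc/cni/net.d for network configs and finds none until Calico's install-cni container writes one. A sketch of the same check using the CNI project's libcni package (a standalone illustration, not containerd's actual code path):

    package main

    import (
        "fmt"

        "github.com/containernetworking/cni/libcni"
    )

    func main() {
        // Scan the directory for *.conf/*.conflist files; until Calico's
        // install-cni container writes one, the scan comes back empty and the
        // runtime keeps reporting "cni plugin not initialized".
        files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist"})
        if err != nil {
            fmt.Println("scan failed:", err)
            return
        }
        if len(files) == 0 {
            fmt.Println("no network config found in /etc/cni/net.d")
        }
    }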
May 13 00:21:41.721708 kubelet[2751]: I0513 00:21:41.721651 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada53867-29c0-47a2-9c8f-49c75067c4ab-tigera-ca-bundle\") pod \"calico-kube-controllers-58fc868db6-xgp24\" (UID: \"ada53867-29c0-47a2-9c8f-49c75067c4ab\") " pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" May 13 00:21:41.721708 kubelet[2751]: I0513 00:21:41.721691 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgslg\" (UniqueName: \"kubernetes.io/projected/c9d41bbe-3554-41cc-8544-18a0891b3173-kube-api-access-jgslg\") pod \"calico-apiserver-85bb956b5-b9csj\" (UID: \"c9d41bbe-3554-41cc-8544-18a0891b3173\") " pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" May 13 00:21:41.721708 kubelet[2751]: I0513 00:21:41.721710 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b45f344-c84f-4551-94ba-2fb1ef195e11-config-volume\") pod \"coredns-7db6d8ff4d-f2474\" (UID: \"2b45f344-c84f-4551-94ba-2fb1ef195e11\") " pod="kube-system/coredns-7db6d8ff4d-f2474" May 13 00:21:41.722355 kubelet[2751]: I0513 00:21:41.721731 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9d41bbe-3554-41cc-8544-18a0891b3173-calico-apiserver-certs\") pod \"calico-apiserver-85bb956b5-b9csj\" (UID: \"c9d41bbe-3554-41cc-8544-18a0891b3173\") " pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" May 13 00:21:41.722355 kubelet[2751]: I0513 00:21:41.721751 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6d0adc78-c5f4-4e3e-9481-9603992c8e2a-calico-apiserver-certs\") pod \"calico-apiserver-85bb956b5-c5nvv\" (UID: \"6d0adc78-c5f4-4e3e-9481-9603992c8e2a\") " pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" May 13 00:21:41.722355 kubelet[2751]: I0513 00:21:41.721765 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4p2z\" (UniqueName: \"kubernetes.io/projected/6d0adc78-c5f4-4e3e-9481-9603992c8e2a-kube-api-access-w4p2z\") pod \"calico-apiserver-85bb956b5-c5nvv\" (UID: \"6d0adc78-c5f4-4e3e-9481-9603992c8e2a\") " pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" May 13 00:21:41.722355 kubelet[2751]: I0513 00:21:41.721781 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/804eca6f-5c7d-4af6-88ad-51bb90cf494a-config-volume\") pod \"coredns-7db6d8ff4d-8cj4n\" (UID: \"804eca6f-5c7d-4af6-88ad-51bb90cf494a\") " pod="kube-system/coredns-7db6d8ff4d-8cj4n" May 13 00:21:41.722355 kubelet[2751]: I0513 00:21:41.721817 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgf5g\" (UniqueName: \"kubernetes.io/projected/ada53867-29c0-47a2-9c8f-49c75067c4ab-kube-api-access-rgf5g\") pod \"calico-kube-controllers-58fc868db6-xgp24\" (UID: \"ada53867-29c0-47a2-9c8f-49c75067c4ab\") " pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" May 13 00:21:41.722552 kubelet[2751]: I0513 00:21:41.721854 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jhrtq\" (UniqueName: \"kubernetes.io/projected/2b45f344-c84f-4551-94ba-2fb1ef195e11-kube-api-access-jhrtq\") pod \"coredns-7db6d8ff4d-f2474\" (UID: \"2b45f344-c84f-4551-94ba-2fb1ef195e11\") " pod="kube-system/coredns-7db6d8ff4d-f2474" May 13 00:21:41.722552 kubelet[2751]: I0513 00:21:41.721915 2751 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx655\" (UniqueName: \"kubernetes.io/projected/804eca6f-5c7d-4af6-88ad-51bb90cf494a-kube-api-access-xx655\") pod \"coredns-7db6d8ff4d-8cj4n\" (UID: \"804eca6f-5c7d-4af6-88ad-51bb90cf494a\") " pod="kube-system/coredns-7db6d8ff4d-8cj4n" May 13 00:21:41.875862 kubelet[2751]: E0513 00:21:41.875818 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:41.876512 containerd[1571]: time="2025-05-13T00:21:41.876466523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cj4n,Uid:804eca6f-5c7d-4af6-88ad-51bb90cf494a,Namespace:kube-system,Attempt:0,}" May 13 00:21:41.889592 containerd[1571]: time="2025-05-13T00:21:41.889147622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58fc868db6-xgp24,Uid:ada53867-29c0-47a2-9c8f-49c75067c4ab,Namespace:calico-system,Attempt:0,}" May 13 00:21:41.889592 containerd[1571]: time="2025-05-13T00:21:41.889358459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-b9csj,Uid:c9d41bbe-3554-41cc-8544-18a0891b3173,Namespace:calico-apiserver,Attempt:0,}" May 13 00:21:41.893150 kubelet[2751]: E0513 00:21:41.893121 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:41.893663 containerd[1571]: time="2025-05-13T00:21:41.893532275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f2474,Uid:2b45f344-c84f-4551-94ba-2fb1ef195e11,Namespace:kube-system,Attempt:0,}" May 13 00:21:41.894710 containerd[1571]: time="2025-05-13T00:21:41.894664476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-c5nvv,Uid:6d0adc78-c5f4-4e3e-9481-9603992c8e2a,Namespace:calico-apiserver,Attempt:0,}" May 13 00:21:41.963227 containerd[1571]: time="2025-05-13T00:21:41.963095468Z" level=error msg="Failed to destroy network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:41.963641 containerd[1571]: time="2025-05-13T00:21:41.963591582Z" level=error msg="encountered an error cleaning up failed sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:41.963694 containerd[1571]: time="2025-05-13T00:21:41.963650474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cj4n,Uid:804eca6f-5c7d-4af6-88ad-51bb90cf494a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:41.964089 kubelet[2751]: E0513 00:21:41.964028 2751 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:41.964174 kubelet[2751]: E0513 00:21:41.964116 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cj4n" May 13 00:21:41.964174 kubelet[2751]: E0513 00:21:41.964142 2751 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cj4n" May 13 00:21:41.964247 kubelet[2751]: E0513 00:21:41.964192 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cj4n_kube-system(804eca6f-5c7d-4af6-88ad-51bb90cf494a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cj4n_kube-system(804eca6f-5c7d-4af6-88ad-51bb90cf494a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cj4n" podUID="804eca6f-5c7d-4af6-88ad-51bb90cf494a" May 13 00:21:42.069447 containerd[1571]: time="2025-05-13T00:21:42.069355757Z" level=error msg="Failed to destroy network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.070584 containerd[1571]: time="2025-05-13T00:21:42.070551798Z" level=error msg="encountered an error cleaning up failed sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.070638 containerd[1571]: time="2025-05-13T00:21:42.070610237Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-85bb956b5-b9csj,Uid:c9d41bbe-3554-41cc-8544-18a0891b3173,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.071586 kubelet[2751]: E0513 00:21:42.070879 2751 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.071586 kubelet[2751]: E0513 00:21:42.070942 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" May 13 00:21:42.071586 kubelet[2751]: E0513 00:21:42.070965 2751 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" May 13 00:21:42.071771 kubelet[2751]: E0513 00:21:42.071012 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85bb956b5-b9csj_calico-apiserver(c9d41bbe-3554-41cc-8544-18a0891b3173)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85bb956b5-b9csj_calico-apiserver(c9d41bbe-3554-41cc-8544-18a0891b3173)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" podUID="c9d41bbe-3554-41cc-8544-18a0891b3173" May 13 00:21:42.076253 containerd[1571]: time="2025-05-13T00:21:42.076215406Z" level=error msg="Failed to destroy network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.076744 containerd[1571]: time="2025-05-13T00:21:42.076722730Z" level=error msg="encountered an error cleaning up failed sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 13 00:21:42.076860 containerd[1571]: time="2025-05-13T00:21:42.076841484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58fc868db6-xgp24,Uid:ada53867-29c0-47a2-9c8f-49c75067c4ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.077323 kubelet[2751]: E0513 00:21:42.077292 2751 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.077605 kubelet[2751]: E0513 00:21:42.077584 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" May 13 00:21:42.077673 kubelet[2751]: E0513 00:21:42.077661 2751 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" May 13 00:21:42.077774 kubelet[2751]: E0513 00:21:42.077750 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58fc868db6-xgp24_calico-system(ada53867-29c0-47a2-9c8f-49c75067c4ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58fc868db6-xgp24_calico-system(ada53867-29c0-47a2-9c8f-49c75067c4ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" podUID="ada53867-29c0-47a2-9c8f-49c75067c4ab" May 13 00:21:42.090434 containerd[1571]: time="2025-05-13T00:21:42.090346307Z" level=error msg="Failed to destroy network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.090814 containerd[1571]: time="2025-05-13T00:21:42.090789221Z" level=error msg="encountered an error cleaning up failed sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.090855 containerd[1571]: time="2025-05-13T00:21:42.090835207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-c5nvv,Uid:6d0adc78-c5f4-4e3e-9481-9603992c8e2a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.091152 kubelet[2751]: E0513 00:21:42.091099 2751 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.091307 kubelet[2751]: E0513 00:21:42.091164 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" May 13 00:21:42.091307 kubelet[2751]: E0513 00:21:42.091202 2751 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" May 13 00:21:42.091307 kubelet[2751]: E0513 00:21:42.091251 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85bb956b5-c5nvv_calico-apiserver(6d0adc78-c5f4-4e3e-9481-9603992c8e2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85bb956b5-c5nvv_calico-apiserver(6d0adc78-c5f4-4e3e-9481-9603992c8e2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" podUID="6d0adc78-c5f4-4e3e-9481-9603992c8e2a" May 13 00:21:42.098596 containerd[1571]: time="2025-05-13T00:21:42.098551638Z" level=error msg="Failed to destroy network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.098957 containerd[1571]: time="2025-05-13T00:21:42.098936302Z" level=error 
msg="encountered an error cleaning up failed sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.099010 containerd[1571]: time="2025-05-13T00:21:42.098979493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f2474,Uid:2b45f344-c84f-4551-94ba-2fb1ef195e11,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.099161 kubelet[2751]: E0513 00:21:42.099130 2751 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.099266 kubelet[2751]: E0513 00:21:42.099170 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-f2474" May 13 00:21:42.099266 kubelet[2751]: E0513 00:21:42.099194 2751 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-f2474" May 13 00:21:42.099266 kubelet[2751]: E0513 00:21:42.099229 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-f2474_kube-system(2b45f344-c84f-4551-94ba-2fb1ef195e11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-f2474_kube-system(2b45f344-c84f-4551-94ba-2fb1ef195e11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-f2474" podUID="2b45f344-c84f-4551-94ba-2fb1ef195e11" May 13 00:21:42.533164 containerd[1571]: time="2025-05-13T00:21:42.533082857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-595b2,Uid:89359810-8cb0-453a-816e-e1df193c8474,Namespace:calico-system,Attempt:0,}" May 13 00:21:42.588081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd-shm.mount: Deactivated successfully. 
May 13 00:21:42.599219 kubelet[2751]: I0513 00:21:42.599171 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:21:42.607212 containerd[1571]: time="2025-05-13T00:21:42.607163354Z" level=info msg="StopPodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\"" May 13 00:21:42.610583 kubelet[2751]: I0513 00:21:42.610498 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:21:42.611208 containerd[1571]: time="2025-05-13T00:21:42.611181465Z" level=info msg="StopPodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\"" May 13 00:21:42.612263 kubelet[2751]: I0513 00:21:42.612051 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:21:42.616448 containerd[1571]: time="2025-05-13T00:21:42.613928715Z" level=info msg="StopPodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\"" May 13 00:21:42.616448 containerd[1571]: time="2025-05-13T00:21:42.614117991Z" level=info msg="Ensure that sandbox 5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa in task-service has been cleanup successfully" May 13 00:21:42.616448 containerd[1571]: time="2025-05-13T00:21:42.616304967Z" level=info msg="StopPodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\"" May 13 00:21:42.616597 kubelet[2751]: I0513 00:21:42.615707 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:21:42.617699 containerd[1571]: time="2025-05-13T00:21:42.617669024Z" level=info msg="Ensure that sandbox 9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd in task-service has been cleanup successfully" May 13 00:21:42.618702 containerd[1571]: time="2025-05-13T00:21:42.618665038Z" level=info msg="Ensure that sandbox 5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4 in task-service has been cleanup successfully" May 13 00:21:42.621720 containerd[1571]: time="2025-05-13T00:21:42.621678068Z" level=info msg="Ensure that sandbox 26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc in task-service has been cleanup successfully" May 13 00:21:42.626526 kubelet[2751]: E0513 00:21:42.626493 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:42.629288 containerd[1571]: time="2025-05-13T00:21:42.629231232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:21:42.630301 kubelet[2751]: I0513 00:21:42.630270 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:21:42.631178 containerd[1571]: time="2025-05-13T00:21:42.631094428Z" level=info msg="StopPodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\"" May 13 00:21:42.631332 containerd[1571]: time="2025-05-13T00:21:42.631289655Z" level=info msg="Ensure that sandbox 5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d in task-service has been cleanup successfully" May 13 00:21:42.673331 containerd[1571]: 
time="2025-05-13T00:21:42.673279783Z" level=error msg="StopPodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" failed" error="failed to destroy network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.673553 kubelet[2751]: E0513 00:21:42.673510 2751 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:21:42.673625 kubelet[2751]: E0513 00:21:42.673568 2751 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa"} May 13 00:21:42.673659 kubelet[2751]: E0513 00:21:42.673625 2751 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ada53867-29c0-47a2-9c8f-49c75067c4ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:21:42.673659 kubelet[2751]: E0513 00:21:42.673647 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ada53867-29c0-47a2-9c8f-49c75067c4ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" podUID="ada53867-29c0-47a2-9c8f-49c75067c4ab" May 13 00:21:42.673882 containerd[1571]: time="2025-05-13T00:21:42.673833745Z" level=error msg="StopPodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" failed" error="failed to destroy network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.673987 kubelet[2751]: E0513 00:21:42.673961 2751 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:21:42.674059 kubelet[2751]: E0513 00:21:42.674045 2751 kuberuntime_manager.go:1375] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd"} May 13 00:21:42.674104 kubelet[2751]: E0513 00:21:42.674068 2751 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"804eca6f-5c7d-4af6-88ad-51bb90cf494a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:21:42.674104 kubelet[2751]: E0513 00:21:42.674085 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"804eca6f-5c7d-4af6-88ad-51bb90cf494a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cj4n" podUID="804eca6f-5c7d-4af6-88ad-51bb90cf494a" May 13 00:21:42.676202 containerd[1571]: time="2025-05-13T00:21:42.676167958Z" level=error msg="StopPodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" failed" error="failed to destroy network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.676320 kubelet[2751]: E0513 00:21:42.676301 2751 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:21:42.676446 kubelet[2751]: E0513 00:21:42.676330 2751 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc"} May 13 00:21:42.676446 kubelet[2751]: E0513 00:21:42.676358 2751 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9d41bbe-3554-41cc-8544-18a0891b3173\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:21:42.676446 kubelet[2751]: E0513 00:21:42.676381 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9d41bbe-3554-41cc-8544-18a0891b3173\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" podUID="c9d41bbe-3554-41cc-8544-18a0891b3173" May 13 00:21:42.677665 containerd[1571]: time="2025-05-13T00:21:42.677614411Z" level=error msg="StopPodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" failed" error="failed to destroy network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.677827 kubelet[2751]: E0513 00:21:42.677788 2751 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:21:42.677890 kubelet[2751]: E0513 00:21:42.677833 2751 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4"} May 13 00:21:42.677890 kubelet[2751]: E0513 00:21:42.677855 2751 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d0adc78-c5f4-4e3e-9481-9603992c8e2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:21:42.677890 kubelet[2751]: E0513 00:21:42.677872 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d0adc78-c5f4-4e3e-9481-9603992c8e2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" podUID="6d0adc78-c5f4-4e3e-9481-9603992c8e2a" May 13 00:21:42.682235 containerd[1571]: time="2025-05-13T00:21:42.682188358Z" level=error msg="StopPodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" failed" error="failed to destroy network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:42.682375 kubelet[2751]: E0513 00:21:42.682335 2751 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:21:42.682375 kubelet[2751]: E0513 00:21:42.682357 2751 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d"} May 13 00:21:42.682472 kubelet[2751]: E0513 00:21:42.682377 2751 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b45f344-c84f-4551-94ba-2fb1ef195e11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:21:42.682472 kubelet[2751]: E0513 00:21:42.682406 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b45f344-c84f-4551-94ba-2fb1ef195e11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-f2474" podUID="2b45f344-c84f-4551-94ba-2fb1ef195e11" May 13 00:21:43.205606 systemd-resolved[1463]: Under memory pressure, flushing caches. May 13 00:21:43.205631 systemd-resolved[1463]: Flushed all caches. May 13 00:21:43.207416 systemd-journald[1161]: Under memory pressure, flushing caches. 
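Every StopPodSandbox failure above collapses to the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico/node writes once it is running, so kubelet parks each affected pod (calico-kube-controllers, both coredns replicas, both calico-apiserver replicas) in an "Error syncing pod" retry loop. When triaging a burst like this it helps to confirm that the repeats really share one cause; a minimal stdlib sketch (the function and pattern names are illustrative, not from any tool in this log):

```python
import re
import sys
from collections import defaultdict

# Shapes taken from the kubelet lines above: the pod/podUID pair and the
# stat() path named as the root cause of the failed CNI delete.
POD = re.compile(r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[0-9a-f-]{36})"')
CAUSE = re.compile(r'stat (?P<path>\S+): no such file or directory')

def group_failures(lines):
    """Map each missing path to the set of pods whose sync fails on it."""
    by_path = defaultdict(set)
    for line in lines:
        pod, cause = POD.search(line), CAUSE.search(line)
        if pod and cause:
            by_path[cause.group("path")].add(pod.group("pod"))
    return by_path

if __name__ == "__main__":
    # e.g. journalctl -u kubelet --no-pager | python3 group_failures.py
    for path, pods in sorted(group_failures(sys.stdin).items()):
        print(f"{path}: {len(pods)} pod(s): {', '.join(sorted(pods))}")
```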
May 13 00:21:43.222104 containerd[1571]: time="2025-05-13T00:21:43.222060576Z" level=error msg="Failed to destroy network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:43.222508 containerd[1571]: time="2025-05-13T00:21:43.222481758Z" level=error msg="encountered an error cleaning up failed sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:43.222586 containerd[1571]: time="2025-05-13T00:21:43.222528126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-595b2,Uid:89359810-8cb0-453a-816e-e1df193c8474,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:43.222804 kubelet[2751]: E0513 00:21:43.222747 2751 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:43.223189 kubelet[2751]: E0513 00:21:43.222807 2751 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-595b2" May 13 00:21:43.223189 kubelet[2751]: E0513 00:21:43.222827 2751 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-595b2" May 13 00:21:43.223189 kubelet[2751]: E0513 00:21:43.222870 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-595b2_calico-system(89359810-8cb0-453a-816e-e1df193c8474)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-595b2_calico-system(89359810-8cb0-453a-816e-e1df193c8474)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:43.224968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8-shm.mount: Deactivated successfully. May 13 00:21:43.633398 kubelet[2751]: I0513 00:21:43.633263 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:21:43.633915 containerd[1571]: time="2025-05-13T00:21:43.633841377Z" level=info msg="StopPodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\"" May 13 00:21:43.634408 containerd[1571]: time="2025-05-13T00:21:43.634022056Z" level=info msg="Ensure that sandbox e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8 in task-service has been cleanup successfully" May 13 00:21:43.661893 containerd[1571]: time="2025-05-13T00:21:43.661825677Z" level=error msg="StopPodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" failed" error="failed to destroy network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:21:43.662207 kubelet[2751]: E0513 00:21:43.662144 2751 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:21:43.662368 kubelet[2751]: E0513 00:21:43.662208 2751 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8"} May 13 00:21:43.662368 kubelet[2751]: E0513 00:21:43.662251 2751 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89359810-8cb0-453a-816e-e1df193c8474\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:21:43.662368 kubelet[2751]: E0513 00:21:43.662280 2751 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89359810-8cb0-453a-816e-e1df193c8474\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-595b2" podUID="89359810-8cb0-453a-816e-e1df193c8474" May 13 00:21:46.706594 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:58520.service - OpenSSH per-connection server daemon (10.0.0.1:58520). 
May 13 00:21:47.059254 sshd[3910]: Accepted publickey for core from 10.0.0.1 port 58520 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:47.060958 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:47.065082 systemd-logind[1552]: New session 10 of user core. May 13 00:21:47.072708 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:21:47.199669 sshd[3910]: pam_unix(sshd:session): session closed for user core May 13 00:21:47.203201 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. May 13 00:21:47.203542 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:58520.service: Deactivated successfully. May 13 00:21:47.207191 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:21:47.208263 systemd-logind[1552]: Removed session 10. May 13 00:21:48.118691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761843131.mount: Deactivated successfully. May 13 00:21:49.693372 containerd[1571]: time="2025-05-13T00:21:49.693301818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:49.695651 containerd[1571]: time="2025-05-13T00:21:49.695618071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 00:21:49.743683 containerd[1571]: time="2025-05-13T00:21:49.743640034Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:49.847876 containerd[1571]: time="2025-05-13T00:21:49.847805522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:49.848533 containerd[1571]: time="2025-05-13T00:21:49.848502493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.219228821s" May 13 00:21:49.848595 containerd[1571]: time="2025-05-13T00:21:49.848536176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 00:21:49.856573 containerd[1571]: time="2025-05-13T00:21:49.856500065Z" level=info msg="CreateContainer within sandbox \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:21:49.874827 containerd[1571]: time="2025-05-13T00:21:49.874781837Z" level=info msg="CreateContainer within sandbox \"880854b43db3464395ed366e7a38d2d9406b1abdccbd330c972cb7791f2ce2e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c406851da1df645eab75bc4c7c92a1b0eb1e1e7524be4b19abbac98b6fd7132\"" May 13 00:21:49.875427 containerd[1571]: time="2025-05-13T00:21:49.875249106Z" level=info msg="StartContainer for \"0c406851da1df645eab75bc4c7c92a1b0eb1e1e7524be4b19abbac98b6fd7132\"" May 13 00:21:49.975375 containerd[1571]: time="2025-05-13T00:21:49.975267581Z" level=info msg="StartContainer for 
\"0c406851da1df645eab75bc4c7c92a1b0eb1e1e7524be4b19abbac98b6fd7132\" returns successfully" May 13 00:21:50.041177 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:21:50.041295 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 00:21:50.648995 kubelet[2751]: E0513 00:21:50.648957 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:51.651122 kubelet[2751]: E0513 00:21:51.651090 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:52.211603 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:43960.service - OpenSSH per-connection server daemon (10.0.0.1:43960). May 13 00:21:52.250690 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 43960 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:52.252899 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:52.256941 systemd-logind[1552]: New session 11 of user core. May 13 00:21:52.262704 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:21:52.392199 sshd[4164]: pam_unix(sshd:session): session closed for user core May 13 00:21:52.399638 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:43962.service - OpenSSH per-connection server daemon (10.0.0.1:43962). May 13 00:21:52.400258 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:43960.service: Deactivated successfully. May 13 00:21:52.403011 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. May 13 00:21:52.403787 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:21:52.404944 systemd-logind[1552]: Removed session 11. May 13 00:21:52.437233 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 43962 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:52.439096 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:52.449614 systemd-logind[1552]: New session 12 of user core. May 13 00:21:52.456891 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:21:52.592850 sshd[4186]: pam_unix(sshd:session): session closed for user core May 13 00:21:52.605900 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:43972.service - OpenSSH per-connection server daemon (10.0.0.1:43972). May 13 00:21:52.606846 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:43962.service: Deactivated successfully. May 13 00:21:52.612281 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:21:52.614862 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. May 13 00:21:52.618264 systemd-logind[1552]: Removed session 12. May 13 00:21:52.646063 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 43972 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:52.647741 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:52.652480 systemd-logind[1552]: New session 13 of user core. May 13 00:21:52.666715 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:21:52.780666 sshd[4213]: pam_unix(sshd:session): session closed for user core May 13 00:21:52.785633 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:43972.service: Deactivated successfully. 
May 13 00:21:52.788500 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:21:52.789107 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. May 13 00:21:52.789940 systemd-logind[1552]: Removed session 13. May 13 00:21:53.524410 kubelet[2751]: I0513 00:21:53.524348 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:21:53.526536 kubelet[2751]: E0513 00:21:53.525708 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:53.535572 kubelet[2751]: I0513 00:21:53.535506 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l6k4r" podStartSLOduration=5.385778495 podStartE2EDuration="23.535487063s" podCreationTimestamp="2025-05-13 00:21:30 +0000 UTC" firstStartedPulling="2025-05-13 00:21:31.699505162 +0000 UTC m=+20.242967582" lastFinishedPulling="2025-05-13 00:21:49.84921373 +0000 UTC m=+38.392676150" observedRunningTime="2025-05-13 00:21:50.999190135 +0000 UTC m=+39.542652555" watchObservedRunningTime="2025-05-13 00:21:53.535487063 +0000 UTC m=+42.078949483" May 13 00:21:53.653834 kubelet[2751]: E0513 00:21:53.653799 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:54.529876 containerd[1571]: time="2025-05-13T00:21:54.529831183Z" level=info msg="StopPodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\"" May 13 00:21:54.530720 containerd[1571]: time="2025-05-13T00:21:54.530194225Z" level=info msg="StopPodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\"" May 13 00:21:54.608417 kernel: bpftool[4347]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.658 [INFO][4310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.659 [INFO][4310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" iface="eth0" netns="/var/run/netns/cni-c95ff43f-666b-3c25-da1e-0a9c73123db8" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.659 [INFO][4310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" iface="eth0" netns="/var/run/netns/cni-c95ff43f-666b-3c25-da1e-0a9c73123db8" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" iface="eth0" netns="/var/run/netns/cni-c95ff43f-666b-3c25-da1e-0a9c73123db8" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.726 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.726 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.726 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.732 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.732 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.733 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:54.742678 containerd[1571]: 2025-05-13 00:21:54.737 [INFO][4310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:21:54.742678 containerd[1571]: time="2025-05-13T00:21:54.741585681Z" level=info msg="TearDown network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" successfully" May 13 00:21:54.742678 containerd[1571]: time="2025-05-13T00:21:54.741613313Z" level=info msg="StopPodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" returns successfully" May 13 00:21:54.744061 containerd[1571]: time="2025-05-13T00:21:54.744020264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-b9csj,Uid:c9d41bbe-3554-41cc-8544-18a0891b3173,Namespace:calico-apiserver,Attempt:1,}" May 13 00:21:54.744497 systemd[1]: run-netns-cni\x2dc95ff43f\x2d666b\x2d3c25\x2dda1e\x2d0a9c73123db8.mount: Deactivated successfully. May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" iface="eth0" netns="/var/run/netns/cni-a887e2ac-4812-687c-348c-357aa3be8a81" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" iface="eth0" netns="/var/run/netns/cni-a887e2ac-4812-687c-348c-357aa3be8a81" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" iface="eth0" netns="/var/run/netns/cni-a887e2ac-4812-687c-348c-357aa3be8a81" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.660 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.727 [INFO][4369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.727 [INFO][4369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.733 [INFO][4369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.738 [WARNING][4369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.738 [INFO][4369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.739 [INFO][4369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:54.746588 containerd[1571]: 2025-05-13 00:21:54.742 [INFO][4309] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:21:54.747107 containerd[1571]: time="2025-05-13T00:21:54.746795437Z" level=info msg="TearDown network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" successfully" May 13 00:21:54.747107 containerd[1571]: time="2025-05-13T00:21:54.746822167Z" level=info msg="StopPodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" returns successfully" May 13 00:21:54.747173 kubelet[2751]: E0513 00:21:54.747119 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:54.748279 containerd[1571]: time="2025-05-13T00:21:54.747850920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f2474,Uid:2b45f344-c84f-4551-94ba-2fb1ef195e11,Namespace:kube-system,Attempt:1,}" May 13 00:21:54.751139 systemd[1]: run-netns-cni\x2da887e2ac\x2d4812\x2d687c\x2d348c\x2d357aa3be8a81.mount: Deactivated successfully. May 13 00:21:55.227133 systemd-networkd[1247]: vxlan.calico: Link UP May 13 00:21:55.227147 systemd-networkd[1247]: vxlan.calico: Gained carrier May 13 00:21:55.423669 systemd-networkd[1247]: cali462595a3c04: Link UP May 13 00:21:55.424937 systemd-networkd[1247]: cali462595a3c04: Gained carrier May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.346 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--f2474-eth0 coredns-7db6d8ff4d- kube-system 2b45f344-c84f-4551-94ba-2fb1ef195e11 868 0 2025-05-13 00:21:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-f2474 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali462595a3c04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.347 [INFO][4435] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.380 [INFO][4458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" HandleID="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.392 [INFO][4458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" HandleID="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c2270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-f2474", 
"timestamp":"2025-05-13 00:21:55.380925733 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.392 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.392 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.392 [INFO][4458] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.393 [INFO][4458] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.398 [INFO][4458] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.402 [INFO][4458] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.403 [INFO][4458] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.405 [INFO][4458] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.405 [INFO][4458] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.406 [INFO][4458] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390 May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.409 [INFO][4458] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.414 [INFO][4458] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.414 [INFO][4458] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" host="localhost" May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.414 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:21:55.444024 containerd[1571]: 2025-05-13 00:21:55.414 [INFO][4458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" HandleID="k8s-pod-network.c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.444764 containerd[1571]: 2025-05-13 00:21:55.418 [INFO][4435] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f2474-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b45f344-c84f-4551-94ba-2fb1ef195e11", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-f2474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali462595a3c04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:55.444764 containerd[1571]: 2025-05-13 00:21:55.419 [INFO][4435] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.444764 containerd[1571]: 2025-05-13 00:21:55.419 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali462595a3c04 ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.444764 containerd[1571]: 2025-05-13 00:21:55.424 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.444764 containerd[1571]: 2025-05-13 00:21:55.426 
[INFO][4435] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f2474-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b45f344-c84f-4551-94ba-2fb1ef195e11", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390", Pod:"coredns-7db6d8ff4d-f2474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali462595a3c04", MAC:"d6:af:ae:45:46:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:55.444764 containerd[1571]: 2025-05-13 00:21:55.439 [INFO][4435] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390" Namespace="kube-system" Pod="coredns-7db6d8ff4d-f2474" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:21:55.452566 systemd-networkd[1247]: cali48185b46e30: Link UP May 13 00:21:55.454034 systemd-networkd[1247]: cali48185b46e30: Gained carrier May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.345 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0 calico-apiserver-85bb956b5- calico-apiserver c9d41bbe-3554-41cc-8544-18a0891b3173 869 0 2025-05-13 00:21:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85bb956b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85bb956b5-b9csj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48185b46e30 [] []}} ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.345 [INFO][4430] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.383 [INFO][4456] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" HandleID="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.393 [INFO][4456] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" HandleID="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002891c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85bb956b5-b9csj", "timestamp":"2025-05-13 00:21:55.383870635 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.393 [INFO][4456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.414 [INFO][4456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.414 [INFO][4456] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.416 [INFO][4456] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.420 [INFO][4456] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.426 [INFO][4456] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.428 [INFO][4456] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.430 [INFO][4456] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.430 [INFO][4456] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.432 [INFO][4456] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.436 [INFO][4456] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.445 [INFO][4456] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.445 [INFO][4456] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" host="localhost" May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.445 [INFO][4456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
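The calico-apiserver sandbox runs the identical sequence against the same 192.168.88.128/26 block and simply receives the next free address, 192.168.88.130. Continuing the sketch above (handle IDs truncated for brevity):

```python
ipam = BlockIPAM()
print(ipam.auto_assign("k8s-pod-network.c7590709..."))  # 192.168.88.129 (coredns)
print(ipam.auto_assign("k8s-pod-network.3151e877..."))  # 192.168.88.130 (apiserver)
```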
May 13 00:21:55.468097 containerd[1571]: 2025-05-13 00:21:55.445 [INFO][4456] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" HandleID="k8s-pod-network.3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.468779 containerd[1571]: 2025-05-13 00:21:55.450 [INFO][4430] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9d41bbe-3554-41cc-8544-18a0891b3173", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85bb956b5-b9csj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48185b46e30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:55.468779 containerd[1571]: 2025-05-13 00:21:55.450 [INFO][4430] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.468779 containerd[1571]: 2025-05-13 00:21:55.450 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48185b46e30 ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.468779 containerd[1571]: 2025-05-13 00:21:55.453 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.468779 containerd[1571]: 2025-05-13 00:21:55.453 [INFO][4430] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" 
Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9d41bbe-3554-41cc-8544-18a0891b3173", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa", Pod:"calico-apiserver-85bb956b5-b9csj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48185b46e30", MAC:"1e:b7:2e:59:15:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:55.468779 containerd[1571]: 2025-05-13 00:21:55.463 [INFO][4430] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-b9csj" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:21:55.481033 containerd[1571]: time="2025-05-13T00:21:55.480632828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:55.481033 containerd[1571]: time="2025-05-13T00:21:55.480704734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:55.481033 containerd[1571]: time="2025-05-13T00:21:55.480727717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:55.481033 containerd[1571]: time="2025-05-13T00:21:55.480847972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:55.494562 containerd[1571]: time="2025-05-13T00:21:55.494267488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:55.494782 containerd[1571]: time="2025-05-13T00:21:55.494580436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:55.494782 containerd[1571]: time="2025-05-13T00:21:55.494629418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:55.494868 containerd[1571]: time="2025-05-13T00:21:55.494812813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:55.512942 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:55.520296 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:55.530918 containerd[1571]: time="2025-05-13T00:21:55.530629831Z" level=info msg="StopPodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\"" May 13 00:21:55.545894 containerd[1571]: time="2025-05-13T00:21:55.545684448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f2474,Uid:2b45f344-c84f-4551-94ba-2fb1ef195e11,Namespace:kube-system,Attempt:1,} returns sandbox id \"c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390\"" May 13 00:21:55.546576 kubelet[2751]: E0513 00:21:55.546545 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:55.549005 containerd[1571]: time="2025-05-13T00:21:55.548870673Z" level=info msg="CreateContainer within sandbox \"c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:21:55.566940 containerd[1571]: time="2025-05-13T00:21:55.566909606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-b9csj,Uid:c9d41bbe-3554-41cc-8544-18a0891b3173,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa\"" May 13 00:21:55.569023 containerd[1571]: time="2025-05-13T00:21:55.569006605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:21:55.582346 containerd[1571]: time="2025-05-13T00:21:55.582275238Z" level=info msg="CreateContainer within sandbox \"c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb81049c2899c251efb775ed1caff915fab3ed399ed60662947f14433ca92fe2\"" May 13 00:21:55.584617 containerd[1571]: time="2025-05-13T00:21:55.584209712Z" level=info msg="StartContainer for \"bb81049c2899c251efb775ed1caff915fab3ed399ed60662947f14433ca92fe2\"" May 13 00:21:55.652163 containerd[1571]: time="2025-05-13T00:21:55.652122987Z" level=info msg="StartContainer for \"bb81049c2899c251efb775ed1caff915fab3ed399ed60662947f14433ca92fe2\" returns successfully" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.604 [INFO][4620] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.604 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" iface="eth0" netns="/var/run/netns/cni-8418e641-0de2-e2c2-1ea4-87d62658c100" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.604 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" iface="eth0" netns="/var/run/netns/cni-8418e641-0de2-e2c2-1ea4-87d62658c100" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.604 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" iface="eth0" netns="/var/run/netns/cni-8418e641-0de2-e2c2-1ea4-87d62658c100" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.604 [INFO][4620] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.604 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.638 [INFO][4653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.639 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.639 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.646 [WARNING][4653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.647 [INFO][4653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.648 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:55.655784 containerd[1571]: 2025-05-13 00:21:55.652 [INFO][4620] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:21:55.656518 containerd[1571]: time="2025-05-13T00:21:55.655930479Z" level=info msg="TearDown network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" successfully" May 13 00:21:55.656518 containerd[1571]: time="2025-05-13T00:21:55.655950266Z" level=info msg="StopPodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" returns successfully" May 13 00:21:55.656834 containerd[1571]: time="2025-05-13T00:21:55.656791636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-c5nvv,Uid:6d0adc78-c5f4-4e3e-9481-9603992c8e2a,Namespace:calico-apiserver,Attempt:1,}" May 13 00:21:55.661501 kubelet[2751]: E0513 00:21:55.661460 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:55.672639 kubelet[2751]: I0513 00:21:55.672115 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f2474" podStartSLOduration=30.672095804 podStartE2EDuration="30.672095804s" podCreationTimestamp="2025-05-13 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:55.671545419 +0000 UTC m=+44.215007839" watchObservedRunningTime="2025-05-13 00:21:55.672095804 +0000 UTC m=+44.215558224" May 13 00:21:55.747323 systemd[1]: run-netns-cni\x2d8418e641\x2d0de2\x2de2c2\x2d1ea4\x2d87d62658c100.mount: Deactivated successfully. May 13 00:21:55.789001 systemd-networkd[1247]: caliad5755bd0fe: Link UP May 13 00:21:55.789226 systemd-networkd[1247]: caliad5755bd0fe: Gained carrier May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.721 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0 calico-apiserver-85bb956b5- calico-apiserver 6d0adc78-c5f4-4e3e-9481-9603992c8e2a 883 0 2025-05-13 00:21:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85bb956b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-85bb956b5-c5nvv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad5755bd0fe [] []}} ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.721 [INFO][4690] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.751 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" HandleID="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" 
Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.759 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" HandleID="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-85bb956b5-c5nvv", "timestamp":"2025-05-13 00:21:55.751862132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.760 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.760 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.760 [INFO][4703] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.761 [INFO][4703] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.764 [INFO][4703] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.767 [INFO][4703] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.768 [INFO][4703] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.770 [INFO][4703] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.770 [INFO][4703] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.771 [INFO][4703] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2 May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.777 [INFO][4703] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.782 [INFO][4703] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.782 [INFO][4703] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" host="localhost" May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.782 
[INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:55.801276 containerd[1571]: 2025-05-13 00:21:55.782 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" HandleID="k8s-pod-network.62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.801944 containerd[1571]: 2025-05-13 00:21:55.786 [INFO][4690] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d0adc78-c5f4-4e3e-9481-9603992c8e2a", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-85bb956b5-c5nvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad5755bd0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:55.801944 containerd[1571]: 2025-05-13 00:21:55.786 [INFO][4690] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.801944 containerd[1571]: 2025-05-13 00:21:55.786 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad5755bd0fe ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.801944 containerd[1571]: 2025-05-13 00:21:55.789 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.801944 containerd[1571]: 2025-05-13 00:21:55.790 [INFO][4690] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d0adc78-c5f4-4e3e-9481-9603992c8e2a", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2", Pod:"calico-apiserver-85bb956b5-c5nvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad5755bd0fe", MAC:"76:ba:50:8b:da:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:55.801944 containerd[1571]: 2025-05-13 00:21:55.798 [INFO][4690] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2" Namespace="calico-apiserver" Pod="calico-apiserver-85bb956b5-c5nvv" WorkloadEndpoint="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:21:55.834136 containerd[1571]: time="2025-05-13T00:21:55.833986862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:55.834136 containerd[1571]: time="2025-05-13T00:21:55.834097690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:55.834136 containerd[1571]: time="2025-05-13T00:21:55.834121114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:55.834471 containerd[1571]: time="2025-05-13T00:21:55.834231621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:55.853982 systemd[1]: run-containerd-runc-k8s.io-62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2-runc.HLkAnX.mount: Deactivated successfully. 
May 13 00:21:55.862994 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:55.893643 containerd[1571]: time="2025-05-13T00:21:55.893599715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85bb956b5-c5nvv,Uid:6d0adc78-c5f4-4e3e-9481-9603992c8e2a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2\"" May 13 00:21:56.519034 systemd-networkd[1247]: cali48185b46e30: Gained IPv6LL May 13 00:21:56.530148 containerd[1571]: time="2025-05-13T00:21:56.530114457Z" level=info msg="StopPodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\"" May 13 00:21:56.669628 kubelet[2751]: E0513 00:21:56.669589 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:57.157553 systemd-networkd[1247]: vxlan.calico: Gained IPv6LL May 13 00:21:57.413540 systemd-networkd[1247]: cali462595a3c04: Gained IPv6LL May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.731 [INFO][4787] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.732 [INFO][4787] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" iface="eth0" netns="/var/run/netns/cni-f4e564db-d267-1166-27fd-9fa04cef2398" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.732 [INFO][4787] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" iface="eth0" netns="/var/run/netns/cni-f4e564db-d267-1166-27fd-9fa04cef2398" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.732 [INFO][4787] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" iface="eth0" netns="/var/run/netns/cni-f4e564db-d267-1166-27fd-9fa04cef2398" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.732 [INFO][4787] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.732 [INFO][4787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.768 [INFO][4795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.768 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:56.768 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:57.009 [WARNING][4795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:57.010 [INFO][4795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:57.420 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:57.424946 containerd[1571]: 2025-05-13 00:21:57.422 [INFO][4787] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:21:57.425618 containerd[1571]: time="2025-05-13T00:21:57.425126864Z" level=info msg="TearDown network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" successfully" May 13 00:21:57.425618 containerd[1571]: time="2025-05-13T00:21:57.425153023Z" level=info msg="StopPodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" returns successfully" May 13 00:21:57.425741 containerd[1571]: time="2025-05-13T00:21:57.425704508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58fc868db6-xgp24,Uid:ada53867-29c0-47a2-9c8f-49c75067c4ab,Namespace:calico-system,Attempt:1,}" May 13 00:21:57.428095 systemd[1]: run-netns-cni\x2df4e564db\x2dd267\x2d1166\x2d27fd\x2d9fa04cef2398.mount: Deactivated successfully. May 13 00:21:57.477524 systemd-networkd[1247]: caliad5755bd0fe: Gained IPv6LL May 13 00:21:57.529496 containerd[1571]: time="2025-05-13T00:21:57.529466807Z" level=info msg="StopPodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\"" May 13 00:21:57.529860 containerd[1571]: time="2025-05-13T00:21:57.529639952Z" level=info msg="StopPodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\"" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.593 [INFO][4837] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.593 [INFO][4837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" iface="eth0" netns="/var/run/netns/cni-0b991083-aa0f-e198-be3d-e09900c25024" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.593 [INFO][4837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" iface="eth0" netns="/var/run/netns/cni-0b991083-aa0f-e198-be3d-e09900c25024" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.593 [INFO][4837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" iface="eth0" netns="/var/run/netns/cni-0b991083-aa0f-e198-be3d-e09900c25024" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.593 [INFO][4837] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.593 [INFO][4837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.623 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.623 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.623 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.629 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.629 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.630 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:57.635278 containerd[1571]: 2025-05-13 00:21:57.633 [INFO][4837] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:21:57.637736 containerd[1571]: time="2025-05-13T00:21:57.635539726Z" level=info msg="TearDown network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" successfully" May 13 00:21:57.637736 containerd[1571]: time="2025-05-13T00:21:57.635584821Z" level=info msg="StopPodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" returns successfully" May 13 00:21:57.637736 containerd[1571]: time="2025-05-13T00:21:57.637142225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cj4n,Uid:804eca6f-5c7d-4af6-88ad-51bb90cf494a,Namespace:kube-system,Attempt:1,}" May 13 00:21:57.637815 kubelet[2751]: E0513 00:21:57.635935 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:57.640742 systemd[1]: run-netns-cni\x2d0b991083\x2daa0f\x2de198\x2dbe3d\x2de09900c25024.mount: Deactivated successfully. 
May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.585 [INFO][4839] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.585 [INFO][4839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" iface="eth0" netns="/var/run/netns/cni-bc30d782-8e99-913a-409a-6a176c08f23d" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.585 [INFO][4839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" iface="eth0" netns="/var/run/netns/cni-bc30d782-8e99-913a-409a-6a176c08f23d" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.586 [INFO][4839] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" iface="eth0" netns="/var/run/netns/cni-bc30d782-8e99-913a-409a-6a176c08f23d" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.586 [INFO][4839] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.586 [INFO][4839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.624 [INFO][4869] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.624 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.630 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.637 [WARNING][4869] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.637 [INFO][4869] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.642 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:21:57.648373 containerd[1571]: 2025-05-13 00:21:57.646 [INFO][4839] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:21:57.649586 containerd[1571]: time="2025-05-13T00:21:57.648987581Z" level=info msg="TearDown network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" successfully" May 13 00:21:57.649586 containerd[1571]: time="2025-05-13T00:21:57.649024821Z" level=info msg="StopPodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" returns successfully" May 13 00:21:57.650157 containerd[1571]: time="2025-05-13T00:21:57.650133483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-595b2,Uid:89359810-8cb0-453a-816e-e1df193c8474,Namespace:calico-system,Attempt:1,}" May 13 00:21:57.651887 systemd[1]: run-netns-cni\x2dbc30d782\x2d8e99\x2d913a\x2d409a\x2d6a176c08f23d.mount: Deactivated successfully. May 13 00:21:57.677133 systemd-networkd[1247]: cali78aa4161ae6: Link UP May 13 00:21:57.677351 systemd-networkd[1247]: cali78aa4161ae6: Gained carrier May 13 00:21:57.680798 kubelet[2751]: E0513 00:21:57.679824 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.590 [INFO][4850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0 calico-kube-controllers-58fc868db6- calico-system ada53867-29c0-47a2-9c8f-49c75067c4ab 898 0 2025-05-13 00:21:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58fc868db6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58fc868db6-xgp24 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali78aa4161ae6 [] []}} ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.590 [INFO][4850] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.628 [INFO][4882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" HandleID="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.638 [INFO][4882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" HandleID="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003081e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58fc868db6-xgp24", "timestamp":"2025-05-13 00:21:57.628084204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.638 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.642 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.642 [INFO][4882] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.644 [INFO][4882] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.648 [INFO][4882] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.652 [INFO][4882] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.654 [INFO][4882] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.656 [INFO][4882] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.656 [INFO][4882] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.657 [INFO][4882] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.662 [INFO][4882] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.667 [INFO][4882] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.667 [INFO][4882] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" host="localhost" May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.667 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:21:57.706695 containerd[1571]: 2025-05-13 00:21:57.667 [INFO][4882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" HandleID="k8s-pod-network.4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.707252 containerd[1571]: 2025-05-13 00:21:57.671 [INFO][4850] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0", GenerateName:"calico-kube-controllers-58fc868db6-", Namespace:"calico-system", SelfLink:"", UID:"ada53867-29c0-47a2-9c8f-49c75067c4ab", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58fc868db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58fc868db6-xgp24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78aa4161ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:57.707252 containerd[1571]: 2025-05-13 00:21:57.671 [INFO][4850] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.707252 containerd[1571]: 2025-05-13 00:21:57.672 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78aa4161ae6 ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.707252 containerd[1571]: 2025-05-13 00:21:57.677 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.707252 containerd[1571]: 2025-05-13 00:21:57.677 [INFO][4850] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0", GenerateName:"calico-kube-controllers-58fc868db6-", Namespace:"calico-system", SelfLink:"", UID:"ada53867-29c0-47a2-9c8f-49c75067c4ab", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58fc868db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a", Pod:"calico-kube-controllers-58fc868db6-xgp24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78aa4161ae6", MAC:"86:90:3c:84:6f:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:57.707252 containerd[1571]: 2025-05-13 00:21:57.695 [INFO][4850] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a" Namespace="calico-system" Pod="calico-kube-controllers-58fc868db6-xgp24" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:21:57.742788 containerd[1571]: time="2025-05-13T00:21:57.742674368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:57.743380 containerd[1571]: time="2025-05-13T00:21:57.743256682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:57.745402 containerd[1571]: time="2025-05-13T00:21:57.745040953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:57.745402 containerd[1571]: time="2025-05-13T00:21:57.745249283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:57.774096 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:57.790759 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988). 
May 13 00:21:57.803761 containerd[1571]: time="2025-05-13T00:21:57.803723099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58fc868db6-xgp24,Uid:ada53867-29c0-47a2-9c8f-49c75067c4ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a\"" May 13 00:21:57.829280 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:21:57.831025 sshd[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:57.835158 systemd-logind[1552]: New session 14 of user core. May 13 00:21:57.840740 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:21:57.903081 systemd-networkd[1247]: cali23b65c710be: Link UP May 13 00:21:57.903328 systemd-networkd[1247]: cali23b65c710be: Gained carrier May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.707 [INFO][4894] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0 coredns-7db6d8ff4d- kube-system 804eca6f-5c7d-4af6-88ad-51bb90cf494a 909 0 2025-05-13 00:21:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8cj4n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali23b65c710be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.708 [INFO][4894] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.739 [INFO][4942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" HandleID="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.751 [INFO][4942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" HandleID="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042d690), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8cj4n", "timestamp":"2025-05-13 00:21:57.739360775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.751 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.751 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.751 [INFO][4942] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.756 [INFO][4942] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.762 [INFO][4942] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.767 [INFO][4942] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.769 [INFO][4942] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.771 [INFO][4942] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.771 [INFO][4942] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.773 [INFO][4942] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.837 [INFO][4942] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.889 [INFO][4942] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.889 [INFO][4942] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" host="localhost" May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.889 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:21:57.923822 containerd[1571]: 2025-05-13 00:21:57.889 [INFO][4942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" HandleID="k8s-pod-network.76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.924853 containerd[1571]: 2025-05-13 00:21:57.893 [INFO][4894] cni-plugin/k8s.go 386: Populated endpoint ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"804eca6f-5c7d-4af6-88ad-51bb90cf494a", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8cj4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23b65c710be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:57.924853 containerd[1571]: 2025-05-13 00:21:57.893 [INFO][4894] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.924853 containerd[1571]: 2025-05-13 00:21:57.894 [INFO][4894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23b65c710be ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.924853 containerd[1571]: 2025-05-13 00:21:57.898 [INFO][4894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.924853 containerd[1571]: 2025-05-13 00:21:57.898 
[INFO][4894] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"804eca6f-5c7d-4af6-88ad-51bb90cf494a", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd", Pod:"coredns-7db6d8ff4d-8cj4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23b65c710be", MAC:"a2:af:b3:49:c8:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:57.924853 containerd[1571]: 2025-05-13 00:21:57.915 [INFO][4894] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cj4n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:21:57.978076 containerd[1571]: time="2025-05-13T00:21:57.977770687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:57.978076 containerd[1571]: time="2025-05-13T00:21:57.977848583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:57.978076 containerd[1571]: time="2025-05-13T00:21:57.977881545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:57.978076 containerd[1571]: time="2025-05-13T00:21:57.978061282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:58.009839 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:58.011797 systemd-networkd[1247]: calia10c108eea6: Link UP May 13 00:21:58.012968 systemd-networkd[1247]: calia10c108eea6: Gained carrier May 13 00:21:58.035324 containerd[1571]: time="2025-05-13T00:21:58.035279466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cj4n,Uid:804eca6f-5c7d-4af6-88ad-51bb90cf494a,Namespace:kube-system,Attempt:1,} returns sandbox id \"76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd\"" May 13 00:21:58.036003 kubelet[2751]: E0513 00:21:58.035987 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:58.038285 containerd[1571]: time="2025-05-13T00:21:58.038249363Z" level=info msg="CreateContainer within sandbox \"76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.712 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--595b2-eth0 csi-node-driver- calico-system 89359810-8cb0-453a-816e-e1df193c8474 908 0 2025-05-13 00:21:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-595b2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia10c108eea6 [] []}} ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.712 [INFO][4906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.751 [INFO][4948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" HandleID="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.759 [INFO][4948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" HandleID="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Workload="localhost-k8s-csi--node--driver--595b2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcaa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-595b2", "timestamp":"2025-05-13 00:21:57.751272074 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.759 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.889 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.889 [INFO][4948] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.899 [INFO][4948] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.915 [INFO][4948] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.939 [INFO][4948] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.951 [INFO][4948] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.964 [INFO][4948] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.964 [INFO][4948] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.970 [INFO][4948] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492 May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:57.992 [INFO][4948] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:58.002 [INFO][4948] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:58.002 [INFO][4948] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" host="localhost" May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:58.002 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
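The run of ipam/ipam.go entries above is Calico's complete address-assignment path: acquire the host-wide IPAM lock, confirm the block affinity for 192.168.88.128/26 on host "localhost", claim one IPv4 address under a per-network handle, write the block back to the datastore, and release the lock. As a reading aid, here is a minimal Go sketch of the same AutoAssign call; the argument fields mirror the ipam.AutoAssignArgs dump logged above, but the client constructor, return shape, and libcalico-go import paths are assumptions, not verified against the version running here.

    // Minimal sketch of the Calico IPAM AutoAssign call traced in the log.
    // Field names mirror the ipam.AutoAssignArgs dump above; the client
    // construction and return type are ASSUMPTIONS about libcalico-go.
    package main

    import (
        "context"
        "fmt"
        "log"

        client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        c, err := client.NewFromEnv() // assumed: client configured via environment
        if err != nil {
            log.Fatal(err)
        }
        // Handle pattern as logged: "k8s-pod-network.<container-id>".
        handle := "k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492"
        args := ipam.AutoAssignArgs{
            Num4:     1, // one IPv4, no IPv6, exactly as requested in the log
            Num6:     0,
            HandleID: &handle,
            Attrs: map[string]string{ // subset of the Attrs map in the log
                "namespace": "calico-system",
                "node":      "localhost",
                "pod":       "csi-node-driver-595b2",
            },
            Hostname:    "localhost",
            IntendedUse: "Workload",
        }
        v4, _, err := c.IPAM().AutoAssign(context.Background(), args)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("assigned:", v4) // the log shows 192.168.88.134/26 claimed from block 192.168.88.128/26
    }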
May 13 00:21:58.090217 containerd[1571]: 2025-05-13 00:21:58.002 [INFO][4948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" HandleID="k8s-pod-network.1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.091037 containerd[1571]: 2025-05-13 00:21:58.008 [INFO][4906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--595b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89359810-8cb0-453a-816e-e1df193c8474", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-595b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia10c108eea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:58.091037 containerd[1571]: 2025-05-13 00:21:58.008 [INFO][4906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.091037 containerd[1571]: 2025-05-13 00:21:58.008 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia10c108eea6 ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.091037 containerd[1571]: 2025-05-13 00:21:58.013 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.091037 containerd[1571]: 2025-05-13 00:21:58.014 [INFO][4906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--595b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89359810-8cb0-453a-816e-e1df193c8474", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492", Pod:"csi-node-driver-595b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia10c108eea6", MAC:"96:8a:81:0f:29:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:21:58.091037 containerd[1571]: 2025-05-13 00:21:58.087 [INFO][4906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492" Namespace="calico-system" Pod="csi-node-driver-595b2" WorkloadEndpoint="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:21:58.612877 sshd[4991]: pam_unix(sshd:session): session closed for user core May 13 00:21:58.616970 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:43988.service: Deactivated successfully. May 13 00:21:58.619422 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. May 13 00:21:58.619663 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:21:58.620674 systemd-logind[1552]: Removed session 14. May 13 00:21:58.835865 containerd[1571]: time="2025-05-13T00:21:58.835757443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:21:58.835865 containerd[1571]: time="2025-05-13T00:21:58.835836241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:21:58.835865 containerd[1571]: time="2025-05-13T00:21:58.835848364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:58.836321 containerd[1571]: time="2025-05-13T00:21:58.835974492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:21:58.859822 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:21:58.873880 containerd[1571]: time="2025-05-13T00:21:58.873777321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-595b2,Uid:89359810-8cb0-453a-816e-e1df193c8474,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492\"" May 13 00:21:58.921441 containerd[1571]: time="2025-05-13T00:21:58.921374072Z" level=info msg="CreateContainer within sandbox \"76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef3b49a2231f941ed8381a366cff1120e07d35f4d0e672058887c3be79a0e68b\"" May 13 00:21:58.924711 containerd[1571]: time="2025-05-13T00:21:58.923935121Z" level=info msg="StartContainer for \"ef3b49a2231f941ed8381a366cff1120e07d35f4d0e672058887c3be79a0e68b\"" May 13 00:21:58.991417 containerd[1571]: time="2025-05-13T00:21:58.991364576Z" level=info msg="StartContainer for \"ef3b49a2231f941ed8381a366cff1120e07d35f4d0e672058887c3be79a0e68b\" returns successfully" May 13 00:21:59.141529 systemd-resolved[1463]: Under memory pressure, flushing caches. May 13 00:21:59.141561 systemd-resolved[1463]: Flushed all caches. May 13 00:21:59.143420 systemd-journald[1161]: Under memory pressure, flushing caches. May 13 00:21:59.333587 systemd-networkd[1247]: cali78aa4161ae6: Gained IPv6LL May 13 00:21:59.428836 containerd[1571]: time="2025-05-13T00:21:59.428759888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:59.429754 containerd[1571]: time="2025-05-13T00:21:59.429693572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 00:21:59.440473 containerd[1571]: time="2025-05-13T00:21:59.440432396Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:59.475342 containerd[1571]: time="2025-05-13T00:21:59.475248072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:59.476057 containerd[1571]: time="2025-05-13T00:21:59.475991938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.906908549s" May 13 00:21:59.476057 containerd[1571]: time="2025-05-13T00:21:59.476047433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:21:59.477523 containerd[1571]: time="2025-05-13T00:21:59.477473139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:21:59.479786 containerd[1571]: time="2025-05-13T00:21:59.479681847Z" level=info msg="CreateContainer within sandbox 
\"3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:21:59.495074 containerd[1571]: time="2025-05-13T00:21:59.495031650Z" level=info msg="CreateContainer within sandbox \"3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ce8beebff5f58a10eefeb8fc3fdb35ae28d14670d7560ceaaeb41bc47a85f4ca\"" May 13 00:21:59.495862 containerd[1571]: time="2025-05-13T00:21:59.495836291Z" level=info msg="StartContainer for \"ce8beebff5f58a10eefeb8fc3fdb35ae28d14670d7560ceaaeb41bc47a85f4ca\"" May 13 00:21:59.565602 containerd[1571]: time="2025-05-13T00:21:59.565553953Z" level=info msg="StartContainer for \"ce8beebff5f58a10eefeb8fc3fdb35ae28d14670d7560ceaaeb41bc47a85f4ca\" returns successfully" May 13 00:21:59.691446 kubelet[2751]: E0513 00:21:59.690664 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:21:59.710211 kubelet[2751]: I0513 00:21:59.710153 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85bb956b5-b9csj" podStartSLOduration=25.801043102 podStartE2EDuration="29.71013488s" podCreationTimestamp="2025-05-13 00:21:30 +0000 UTC" firstStartedPulling="2025-05-13 00:21:55.568170866 +0000 UTC m=+44.111633286" lastFinishedPulling="2025-05-13 00:21:59.477262644 +0000 UTC m=+48.020725064" observedRunningTime="2025-05-13 00:21:59.7011875 +0000 UTC m=+48.244649920" watchObservedRunningTime="2025-05-13 00:21:59.71013488 +0000 UTC m=+48.253597300" May 13 00:21:59.710414 kubelet[2751]: I0513 00:21:59.710304 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8cj4n" podStartSLOduration=34.710300201 podStartE2EDuration="34.710300201s" podCreationTimestamp="2025-05-13 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:21:59.70991608 +0000 UTC m=+48.253378500" watchObservedRunningTime="2025-05-13 00:21:59.710300201 +0000 UTC m=+48.253762621" May 13 00:21:59.783086 systemd-networkd[1247]: cali23b65c710be: Gained IPv6LL May 13 00:21:59.783956 systemd-networkd[1247]: calia10c108eea6: Gained IPv6LL May 13 00:22:00.691789 kubelet[2751]: I0513 00:22:00.691756 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:00.692351 kubelet[2751]: E0513 00:22:00.692291 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:00.931808 containerd[1571]: time="2025-05-13T00:22:00.931743702Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:00.978300 containerd[1571]: time="2025-05-13T00:22:00.978150086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 00:22:00.980185 containerd[1571]: time="2025-05-13T00:22:00.980157143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 1.502641614s" May 13 00:22:00.980250 containerd[1571]: time="2025-05-13T00:22:00.980189724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:22:00.980988 containerd[1571]: time="2025-05-13T00:22:00.980961684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:22:00.982033 containerd[1571]: time="2025-05-13T00:22:00.982007157Z" level=info msg="CreateContainer within sandbox \"62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:22:01.189511 systemd-resolved[1463]: Under memory pressure, flushing caches. May 13 00:22:01.189536 systemd-resolved[1463]: Flushed all caches. May 13 00:22:01.191413 systemd-journald[1161]: Under memory pressure, flushing caches. May 13 00:22:01.283013 containerd[1571]: time="2025-05-13T00:22:01.282838061Z" level=info msg="CreateContainer within sandbox \"62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7d4b99474406fd855852e4d6466704a6ccf462ed1ddedca1e41de8bb8e516b10\"" May 13 00:22:01.284012 containerd[1571]: time="2025-05-13T00:22:01.283978593Z" level=info msg="StartContainer for \"7d4b99474406fd855852e4d6466704a6ccf462ed1ddedca1e41de8bb8e516b10\"" May 13 00:22:01.365686 containerd[1571]: time="2025-05-13T00:22:01.365645393Z" level=info msg="StartContainer for \"7d4b99474406fd855852e4d6466704a6ccf462ed1ddedca1e41de8bb8e516b10\" returns successfully" May 13 00:22:01.698994 kubelet[2751]: E0513 00:22:01.697232 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:02.698412 kubelet[2751]: I0513 00:22:02.698357 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:03.342784 containerd[1571]: time="2025-05-13T00:22:03.342729483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:03.343726 containerd[1571]: time="2025-05-13T00:22:03.343680970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 00:22:03.344906 containerd[1571]: time="2025-05-13T00:22:03.344845646Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:03.347123 containerd[1571]: time="2025-05-13T00:22:03.347080091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:03.347690 containerd[1571]: time="2025-05-13T00:22:03.347663917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.366669601s" May 13 00:22:03.347747 containerd[1571]: time="2025-05-13T00:22:03.347691949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 00:22:03.349288 containerd[1571]: time="2025-05-13T00:22:03.349113197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:22:03.357098 containerd[1571]: time="2025-05-13T00:22:03.356998721Z" level=info msg="CreateContainer within sandbox \"4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:22:03.630791 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:50466.service - OpenSSH per-connection server daemon (10.0.0.1:50466). May 13 00:22:03.682442 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 50466 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:03.684167 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:03.690258 systemd-logind[1552]: New session 15 of user core. May 13 00:22:03.697653 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:22:03.700327 containerd[1571]: time="2025-05-13T00:22:03.700296490Z" level=info msg="CreateContainer within sandbox \"4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f2daa069f392a91072fb521e7b38f2ecb6f408eb3edfa03814069013b1816d98\"" May 13 00:22:03.700760 containerd[1571]: time="2025-05-13T00:22:03.700741565Z" level=info msg="StartContainer for \"f2daa069f392a91072fb521e7b38f2ecb6f408eb3edfa03814069013b1816d98\"" May 13 00:22:04.203788 sshd[5271]: pam_unix(sshd:session): session closed for user core May 13 00:22:04.207624 containerd[1571]: time="2025-05-13T00:22:04.207215924Z" level=info msg="StartContainer for \"f2daa069f392a91072fb521e7b38f2ecb6f408eb3edfa03814069013b1816d98\" returns successfully" May 13 00:22:04.208410 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. May 13 00:22:04.210210 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:50466.service: Deactivated successfully. May 13 00:22:04.212455 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:22:04.213435 systemd-logind[1552]: Removed session 15. 
May 13 00:22:04.754853 kubelet[2751]: I0513 00:22:04.754613 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85bb956b5-c5nvv" podStartSLOduration=29.669363141 podStartE2EDuration="34.75459514s" podCreationTimestamp="2025-05-13 00:21:30 +0000 UTC" firstStartedPulling="2025-05-13 00:21:55.895606855 +0000 UTC m=+44.439069275" lastFinishedPulling="2025-05-13 00:22:00.980838854 +0000 UTC m=+49.524301274" observedRunningTime="2025-05-13 00:22:01.758288302 +0000 UTC m=+50.301750722" watchObservedRunningTime="2025-05-13 00:22:04.75459514 +0000 UTC m=+53.298057560" May 13 00:22:04.755673 kubelet[2751]: I0513 00:22:04.755027 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58fc868db6-xgp24" podStartSLOduration=28.211624274 podStartE2EDuration="33.755019706s" podCreationTimestamp="2025-05-13 00:21:31 +0000 UTC" firstStartedPulling="2025-05-13 00:21:57.805400169 +0000 UTC m=+46.348862579" lastFinishedPulling="2025-05-13 00:22:03.348795591 +0000 UTC m=+51.892258011" observedRunningTime="2025-05-13 00:22:04.754343446 +0000 UTC m=+53.297805876" watchObservedRunningTime="2025-05-13 00:22:04.755019706 +0000 UTC m=+53.298482136" May 13 00:22:06.020072 containerd[1571]: time="2025-05-13T00:22:06.020019706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:06.020915 containerd[1571]: time="2025-05-13T00:22:06.020877967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 00:22:06.022529 containerd[1571]: time="2025-05-13T00:22:06.022502586Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:06.024787 containerd[1571]: time="2025-05-13T00:22:06.024726521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:06.025323 containerd[1571]: time="2025-05-13T00:22:06.025291612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.676152867s" May 13 00:22:06.025323 containerd[1571]: time="2025-05-13T00:22:06.025320005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 00:22:06.027230 containerd[1571]: time="2025-05-13T00:22:06.027196117Z" level=info msg="CreateContainer within sandbox \"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:22:06.042759 containerd[1571]: time="2025-05-13T00:22:06.042714655Z" level=info msg="CreateContainer within sandbox \"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ca370dbcba4fc891f6e60416d23d3462b045ef35ec20890842b221b60e523b23\"" May 13 00:22:06.043186 containerd[1571]: time="2025-05-13T00:22:06.043150082Z" 
level=info msg="StartContainer for \"ca370dbcba4fc891f6e60416d23d3462b045ef35ec20890842b221b60e523b23\"" May 13 00:22:06.101783 containerd[1571]: time="2025-05-13T00:22:06.101744504Z" level=info msg="StartContainer for \"ca370dbcba4fc891f6e60416d23d3462b045ef35ec20890842b221b60e523b23\" returns successfully" May 13 00:22:06.103051 containerd[1571]: time="2025-05-13T00:22:06.103020680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:22:08.082703 containerd[1571]: time="2025-05-13T00:22:08.082634043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:08.098095 containerd[1571]: time="2025-05-13T00:22:08.097605883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 00:22:08.141988 containerd[1571]: time="2025-05-13T00:22:08.141904775Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:08.285425 containerd[1571]: time="2025-05-13T00:22:08.285326437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:08.286165 containerd[1571]: time="2025-05-13T00:22:08.286130746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.183076794s" May 13 00:22:08.286211 containerd[1571]: time="2025-05-13T00:22:08.286165431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 00:22:08.288322 containerd[1571]: time="2025-05-13T00:22:08.288292063Z" level=info msg="CreateContainer within sandbox \"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:22:08.321805 containerd[1571]: time="2025-05-13T00:22:08.321752684Z" level=info msg="CreateContainer within sandbox \"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d8c150d47142fcc0213e0766d2e5f638e329dd56252cf5d7a1388d5395b73cec\"" May 13 00:22:08.322573 containerd[1571]: time="2025-05-13T00:22:08.322543578Z" level=info msg="StartContainer for \"d8c150d47142fcc0213e0766d2e5f638e329dd56252cf5d7a1388d5395b73cec\"" May 13 00:22:08.396949 containerd[1571]: time="2025-05-13T00:22:08.396898872Z" level=info msg="StartContainer for \"d8c150d47142fcc0213e0766d2e5f638e329dd56252cf5d7a1388d5395b73cec\" returns successfully" May 13 00:22:08.604381 kubelet[2751]: I0513 00:22:08.604325 2751 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:22:08.604381 kubelet[2751]: I0513 00:22:08.604363 2751 
csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:22:08.726908 kubelet[2751]: I0513 00:22:08.726710 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-595b2" podStartSLOduration=28.315149594 podStartE2EDuration="37.726691665s" podCreationTimestamp="2025-05-13 00:21:31 +0000 UTC" firstStartedPulling="2025-05-13 00:21:58.875493213 +0000 UTC m=+47.418955633" lastFinishedPulling="2025-05-13 00:22:08.287035284 +0000 UTC m=+56.830497704" observedRunningTime="2025-05-13 00:22:08.726589844 +0000 UTC m=+57.270052264" watchObservedRunningTime="2025-05-13 00:22:08.726691665 +0000 UTC m=+57.270154085" May 13 00:22:09.214592 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). May 13 00:22:09.257678 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:09.259263 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:09.263290 systemd-logind[1552]: New session 16 of user core. May 13 00:22:09.271748 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:22:09.398708 sshd[5444]: pam_unix(sshd:session): session closed for user core May 13 00:22:09.402583 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:56332.service: Deactivated successfully. May 13 00:22:09.405132 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:22:09.405266 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. May 13 00:22:09.406240 systemd-logind[1552]: Removed session 16. May 13 00:22:11.521511 containerd[1571]: time="2025-05-13T00:22:11.521475825Z" level=info msg="StopPodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\"" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.552 [WARNING][5475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"804eca6f-5c7d-4af6-88ad-51bb90cf494a", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd", Pod:"coredns-7db6d8ff4d-8cj4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23b65c710be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.552 [INFO][5475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.552 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" iface="eth0" netns="" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.552 [INFO][5475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.552 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.573 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.573 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.573 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.578 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.578 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.579 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:11.585198 containerd[1571]: 2025-05-13 00:22:11.582 [INFO][5475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.585746 containerd[1571]: time="2025-05-13T00:22:11.585223437Z" level=info msg="TearDown network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" successfully" May 13 00:22:11.585746 containerd[1571]: time="2025-05-13T00:22:11.585242452Z" level=info msg="StopPodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" returns successfully" May 13 00:22:11.591641 containerd[1571]: time="2025-05-13T00:22:11.591612958Z" level=info msg="RemovePodSandbox for \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\"" May 13 00:22:11.593689 containerd[1571]: time="2025-05-13T00:22:11.593666141Z" level=info msg="Forcibly stopping sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\"" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.624 [WARNING][5508] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"804eca6f-5c7d-4af6-88ad-51bb90cf494a", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76d29f358ad325303d17e08c252d09e7206f248021de1a2e13f0fe2a03339cfd", Pod:"coredns-7db6d8ff4d-8cj4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23b65c710be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.624 [INFO][5508] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.624 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" iface="eth0" netns="" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.624 [INFO][5508] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.624 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.649 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.649 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.649 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.654 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.654 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" HandleID="k8s-pod-network.9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" Workload="localhost-k8s-coredns--7db6d8ff4d--8cj4n-eth0" May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.656 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:11.660993 containerd[1571]: 2025-05-13 00:22:11.658 [INFO][5508] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd" May 13 00:22:11.661537 containerd[1571]: time="2025-05-13T00:22:11.661044507Z" level=info msg="TearDown network for sandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" successfully" May 13 00:22:11.667231 containerd[1571]: time="2025-05-13T00:22:11.667205349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:11.667322 containerd[1571]: time="2025-05-13T00:22:11.667260743Z" level=info msg="RemovePodSandbox \"9fd7b724ef60ad32879366394d95a23d7375ba19f9357f4ae0691ed74b9934cd\" returns successfully" May 13 00:22:11.667774 containerd[1571]: time="2025-05-13T00:22:11.667750852Z" level=info msg="StopPodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\"" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.700 [WARNING][5539] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9d41bbe-3554-41cc-8544-18a0891b3173", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa", Pod:"calico-apiserver-85bb956b5-b9csj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48185b46e30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.700 [INFO][5539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.700 [INFO][5539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" iface="eth0" netns="" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.700 [INFO][5539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.700 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.723 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.723 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.723 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.729 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.729 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.730 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:11.736536 containerd[1571]: 2025-05-13 00:22:11.733 [INFO][5539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.737611 containerd[1571]: time="2025-05-13T00:22:11.736562188Z" level=info msg="TearDown network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" successfully" May 13 00:22:11.737611 containerd[1571]: time="2025-05-13T00:22:11.736585151Z" level=info msg="StopPodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" returns successfully" May 13 00:22:11.737611 containerd[1571]: time="2025-05-13T00:22:11.737051365Z" level=info msg="RemovePodSandbox for \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\"" May 13 00:22:11.737611 containerd[1571]: time="2025-05-13T00:22:11.737078697Z" level=info msg="Forcibly stopping sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\"" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.772 [WARNING][5568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9d41bbe-3554-41cc-8544-18a0891b3173", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3151e877b4d12fc97eaa6f549711291a643a240216ff2c88cfb058dc5a669ffa", Pod:"calico-apiserver-85bb956b5-b9csj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48185b46e30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.772 [INFO][5568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.772 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" iface="eth0" netns="" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.772 [INFO][5568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.772 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.793 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.793 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.793 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.797 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.797 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" HandleID="k8s-pod-network.26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" Workload="localhost-k8s-calico--apiserver--85bb956b5--b9csj-eth0" May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.798 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:11.803865 containerd[1571]: 2025-05-13 00:22:11.801 [INFO][5568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc" May 13 00:22:11.803865 containerd[1571]: time="2025-05-13T00:22:11.803827271Z" level=info msg="TearDown network for sandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" successfully" May 13 00:22:11.919551 containerd[1571]: time="2025-05-13T00:22:11.919493436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:11.919642 containerd[1571]: time="2025-05-13T00:22:11.919559270Z" level=info msg="RemovePodSandbox \"26a7454a92c7ce9a0051735a4566ebfe8f9ae08e08cd6940469594537915d7cc\" returns successfully" May 13 00:22:11.919958 containerd[1571]: time="2025-05-13T00:22:11.919934284Z" level=info msg="StopPodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\"" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.955 [WARNING][5618] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d0adc78-c5f4-4e3e-9481-9603992c8e2a", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2", Pod:"calico-apiserver-85bb956b5-c5nvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad5755bd0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.955 [INFO][5618] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.955 [INFO][5618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" iface="eth0" netns="" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.955 [INFO][5618] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.955 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.977 [INFO][5627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.977 [INFO][5627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.977 [INFO][5627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.981 [WARNING][5627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.981 [INFO][5627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.983 [INFO][5627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:11.988504 containerd[1571]: 2025-05-13 00:22:11.985 [INFO][5618] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:11.988917 containerd[1571]: time="2025-05-13T00:22:11.988559159Z" level=info msg="TearDown network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" successfully" May 13 00:22:11.988917 containerd[1571]: time="2025-05-13T00:22:11.988583975Z" level=info msg="StopPodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" returns successfully" May 13 00:22:11.989111 containerd[1571]: time="2025-05-13T00:22:11.989072793Z" level=info msg="RemovePodSandbox for \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\"" May 13 00:22:11.989111 containerd[1571]: time="2025-05-13T00:22:11.989108069Z" level=info msg="Forcibly stopping sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\"" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.024 [WARNING][5649] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0", GenerateName:"calico-apiserver-85bb956b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d0adc78-c5f4-4e3e-9481-9603992c8e2a", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85bb956b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62672594793be24b3e17bea29765cf80be371d84c2fd81650c69a916cc0fead2", Pod:"calico-apiserver-85bb956b5-c5nvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad5755bd0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.024 [INFO][5649] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.024 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" iface="eth0" netns="" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.024 [INFO][5649] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.024 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.046 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.046 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.046 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.050 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.050 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" HandleID="k8s-pod-network.5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" Workload="localhost-k8s-calico--apiserver--85bb956b5--c5nvv-eth0" May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.052 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.058236 containerd[1571]: 2025-05-13 00:22:12.054 [INFO][5649] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4" May 13 00:22:12.058236 containerd[1571]: time="2025-05-13T00:22:12.058206980Z" level=info msg="TearDown network for sandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" successfully" May 13 00:22:12.075260 containerd[1571]: time="2025-05-13T00:22:12.075202854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:12.078948 containerd[1571]: time="2025-05-13T00:22:12.078881398Z" level=info msg="RemovePodSandbox \"5c98bcceb9cb0658760d77fa16ea4247c060330a6c2b34b891d280517eefead4\" returns successfully" May 13 00:22:12.079469 containerd[1571]: time="2025-05-13T00:22:12.079427032Z" level=info msg="StopPodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\"" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.118 [WARNING][5679] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--595b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89359810-8cb0-453a-816e-e1df193c8474", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492", Pod:"csi-node-driver-595b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia10c108eea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.119 [INFO][5679] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.119 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" iface="eth0" netns="" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.119 [INFO][5679] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.119 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.141 [INFO][5688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.141 [INFO][5688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.141 [INFO][5688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.145 [WARNING][5688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.145 [INFO][5688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.147 [INFO][5688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.152496 containerd[1571]: 2025-05-13 00:22:12.149 [INFO][5679] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.152891 containerd[1571]: time="2025-05-13T00:22:12.152542881Z" level=info msg="TearDown network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" successfully" May 13 00:22:12.152891 containerd[1571]: time="2025-05-13T00:22:12.152567758Z" level=info msg="StopPodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" returns successfully" May 13 00:22:12.153134 containerd[1571]: time="2025-05-13T00:22:12.153077944Z" level=info msg="RemovePodSandbox for \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\"" May 13 00:22:12.153134 containerd[1571]: time="2025-05-13T00:22:12.153113651Z" level=info msg="Forcibly stopping sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\"" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.186 [WARNING][5710] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--595b2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89359810-8cb0-453a-816e-e1df193c8474", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c60bd1eac1143be89d5b42e8c3bceecc464777d4cd9c4360c93b5f3b7aac492", Pod:"csi-node-driver-595b2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia10c108eea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.186 [INFO][5710] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.186 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" iface="eth0" netns="" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.186 [INFO][5710] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.186 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.208 [INFO][5718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.208 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.208 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.213 [WARNING][5718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.213 [INFO][5718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" HandleID="k8s-pod-network.e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" Workload="localhost-k8s-csi--node--driver--595b2-eth0" May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.214 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.219856 containerd[1571]: 2025-05-13 00:22:12.217 [INFO][5710] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8" May 13 00:22:12.220378 containerd[1571]: time="2025-05-13T00:22:12.220332374Z" level=info msg="TearDown network for sandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" successfully" May 13 00:22:12.224154 containerd[1571]: time="2025-05-13T00:22:12.224124050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:12.224206 containerd[1571]: time="2025-05-13T00:22:12.224173663Z" level=info msg="RemovePodSandbox \"e6501e9df36bfc98c93e41bd37e2e6fe07fd1505097fc44ab60aaab37204faf8\" returns successfully" May 13 00:22:12.224746 containerd[1571]: time="2025-05-13T00:22:12.224695101Z" level=info msg="StopPodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\"" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.258 [WARNING][5740] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f2474-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b45f344-c84f-4551-94ba-2fb1ef195e11", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390", Pod:"coredns-7db6d8ff4d-f2474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali462595a3c04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.259 [INFO][5740] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.259 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" iface="eth0" netns="" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.259 [INFO][5740] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.259 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.284 [INFO][5749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.285 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.285 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.289 [WARNING][5749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.289 [INFO][5749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.291 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.296110 containerd[1571]: 2025-05-13 00:22:12.293 [INFO][5740] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.296605 containerd[1571]: time="2025-05-13T00:22:12.296141667Z" level=info msg="TearDown network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" successfully" May 13 00:22:12.296605 containerd[1571]: time="2025-05-13T00:22:12.296164620Z" level=info msg="StopPodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" returns successfully" May 13 00:22:12.296649 containerd[1571]: time="2025-05-13T00:22:12.296604807Z" level=info msg="RemovePodSandbox for \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\"" May 13 00:22:12.296649 containerd[1571]: time="2025-05-13T00:22:12.296627058Z" level=info msg="Forcibly stopping sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\"" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.335 [WARNING][5772] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--f2474-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b45f344-c84f-4551-94ba-2fb1ef195e11", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c759070933308e1ba5ec9257bc7fd1685a4270a7505967410bf4924fc4944390", Pod:"coredns-7db6d8ff4d-f2474", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali462595a3c04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.335 [INFO][5772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.335 [INFO][5772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" iface="eth0" netns="" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.335 [INFO][5772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.335 [INFO][5772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.357 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.357 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.357 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.362 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.362 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" HandleID="k8s-pod-network.5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" Workload="localhost-k8s-coredns--7db6d8ff4d--f2474-eth0" May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.363 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.369267 containerd[1571]: 2025-05-13 00:22:12.366 [INFO][5772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d" May 13 00:22:12.369267 containerd[1571]: time="2025-05-13T00:22:12.369234383Z" level=info msg="TearDown network for sandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" successfully" May 13 00:22:12.373109 containerd[1571]: time="2025-05-13T00:22:12.373081823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:12.373170 containerd[1571]: time="2025-05-13T00:22:12.373129682Z" level=info msg="RemovePodSandbox \"5ea8705d6aa8d9287f2f8237e1d5cbfb15d84e39b249410a98b764c06458d38d\" returns successfully" May 13 00:22:12.373604 containerd[1571]: time="2025-05-13T00:22:12.373581481Z" level=info msg="StopPodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\"" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.407 [WARNING][5803] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0", GenerateName:"calico-kube-controllers-58fc868db6-", Namespace:"calico-system", SelfLink:"", UID:"ada53867-29c0-47a2-9c8f-49c75067c4ab", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58fc868db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a", Pod:"calico-kube-controllers-58fc868db6-xgp24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78aa4161ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.407 [INFO][5803] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.407 [INFO][5803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" iface="eth0" netns="" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.407 [INFO][5803] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.407 [INFO][5803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.428 [INFO][5811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.428 [INFO][5811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.428 [INFO][5811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.433 [WARNING][5811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.433 [INFO][5811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.434 [INFO][5811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.439620 containerd[1571]: 2025-05-13 00:22:12.436 [INFO][5803] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.440013 containerd[1571]: time="2025-05-13T00:22:12.439650125Z" level=info msg="TearDown network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" successfully" May 13 00:22:12.440013 containerd[1571]: time="2025-05-13T00:22:12.439676404Z" level=info msg="StopPodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" returns successfully" May 13 00:22:12.440210 containerd[1571]: time="2025-05-13T00:22:12.440181561Z" level=info msg="RemovePodSandbox for \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\"" May 13 00:22:12.440252 containerd[1571]: time="2025-05-13T00:22:12.440218011Z" level=info msg="Forcibly stopping sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\"" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.471 [WARNING][5834] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0", GenerateName:"calico-kube-controllers-58fc868db6-", Namespace:"calico-system", SelfLink:"", UID:"ada53867-29c0-47a2-9c8f-49c75067c4ab", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58fc868db6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f7070fd7d50ba5f3b27e62281b986078cd7296d1dbc67e8d14a839c2ebb8d1a", Pod:"calico-kube-controllers-58fc868db6-xgp24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78aa4161ae6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.472 [INFO][5834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.472 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" iface="eth0" netns="" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.472 [INFO][5834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.472 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.494 [INFO][5842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.494 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.494 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.499 [WARNING][5842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.499 [INFO][5842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" HandleID="k8s-pod-network.5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" Workload="localhost-k8s-calico--kube--controllers--58fc868db6--xgp24-eth0" May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.500 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:12.505276 containerd[1571]: 2025-05-13 00:22:12.502 [INFO][5834] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa" May 13 00:22:12.505676 containerd[1571]: time="2025-05-13T00:22:12.505327615Z" level=info msg="TearDown network for sandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" successfully" May 13 00:22:12.509034 containerd[1571]: time="2025-05-13T00:22:12.508992161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:22:12.509034 containerd[1571]: time="2025-05-13T00:22:12.509043738Z" level=info msg="RemovePodSandbox \"5d0bc5cd563b4839d8f062cbbced0297d129ea0d94e6535a68c02dd787cccafa\" returns successfully" May 13 00:22:14.414774 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:56346.service - OpenSSH per-connection server daemon (10.0.0.1:56346). May 13 00:22:14.450552 sshd[5849]: Accepted publickey for core from 10.0.0.1 port 56346 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:14.452048 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:14.455618 systemd-logind[1552]: New session 17 of user core. May 13 00:22:14.463631 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:22:14.578564 sshd[5849]: pam_unix(sshd:session): session closed for user core May 13 00:22:14.587656 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:56358.service - OpenSSH per-connection server daemon (10.0.0.1:56358). May 13 00:22:14.588738 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:56346.service: Deactivated successfully. May 13 00:22:14.592581 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:22:14.593593 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. May 13 00:22:14.594944 systemd-logind[1552]: Removed session 17. May 13 00:22:14.621761 sshd[5861]: Accepted publickey for core from 10.0.0.1 port 56358 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:14.623285 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:14.627628 systemd-logind[1552]: New session 18 of user core. May 13 00:22:14.638791 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:22:14.829193 sshd[5861]: pam_unix(sshd:session): session closed for user core May 13 00:22:14.837644 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:56370.service - OpenSSH per-connection server daemon (10.0.0.1:56370). 
May 13 00:22:14.838228 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:56358.service: Deactivated successfully. May 13 00:22:14.841205 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:22:14.841912 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. May 13 00:22:14.843194 systemd-logind[1552]: Removed session 18. May 13 00:22:14.873811 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 56370 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:14.875150 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:14.879509 systemd-logind[1552]: New session 19 of user core. May 13 00:22:14.885673 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:22:16.367284 sshd[5874]: pam_unix(sshd:session): session closed for user core May 13 00:22:16.377915 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:56382.service - OpenSSH per-connection server daemon (10.0.0.1:56382). May 13 00:22:16.379588 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:56370.service: Deactivated successfully. May 13 00:22:16.383446 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:22:16.385308 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. May 13 00:22:16.389510 systemd-logind[1552]: Removed session 19. May 13 00:22:16.418963 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 56382 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:16.420654 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:16.424761 systemd-logind[1552]: New session 20 of user core. May 13 00:22:16.433776 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:22:16.644602 sshd[5900]: pam_unix(sshd:session): session closed for user core May 13 00:22:16.653673 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:56388.service - OpenSSH per-connection server daemon (10.0.0.1:56388). May 13 00:22:16.654243 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:56382.service: Deactivated successfully. May 13 00:22:16.656551 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:22:16.658295 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. May 13 00:22:16.659955 systemd-logind[1552]: Removed session 20. May 13 00:22:16.684187 kubelet[2751]: I0513 00:22:16.684159 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:16.687951 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 56388 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:16.690117 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:16.695462 systemd-logind[1552]: New session 21 of user core. May 13 00:22:16.702909 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:22:16.821111 sshd[5916]: pam_unix(sshd:session): session closed for user core May 13 00:22:16.825427 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:56388.service: Deactivated successfully. May 13 00:22:16.828459 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:22:16.829541 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit. May 13 00:22:16.830969 systemd-logind[1552]: Removed session 21. 
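The SSH records show systemd spawning a dedicated sshd@N-....service and session-N.scope per connection, each opened and torn down independently of the others. As a rough illustration only (not how systemd or OpenSSH are implemented), the same one-handler-per-connection shape looks like this in Go; the listen address is taken from the journal, everything else is invented for the sketch:

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Listen where the journal's daemon does.
	ln, err := net.Listen("tcp", "10.0.0.52:22")
	if err != nil {
		log.Fatal(err)
	}
	for session := 17; ; session++ {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// One independent handler per accepted connection, analogous to the
		// per-connection service units and session-N.scope entries above.
		go func(id int, c net.Conn) {
			defer c.Close()
			log.Printf("session %d opened from %s", id, c.RemoteAddr())
			// ... authenticate and serve the session here ...
			log.Printf("session %d closed", id)
		}(session, conn)
	}
}
```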
May 13 00:22:20.649732 kubelet[2751]: I0513 00:22:20.649671 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:21.300464 kubelet[2751]: E0513 00:22:21.300431 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:21.844702 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:45446.service - OpenSSH per-connection server daemon (10.0.0.1:45446). May 13 00:22:21.879864 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 45446 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:21.881695 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:21.885857 systemd-logind[1552]: New session 22 of user core. May 13 00:22:21.896780 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:22:21.998372 sshd[5965]: pam_unix(sshd:session): session closed for user core May 13 00:22:22.002095 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:45446.service: Deactivated successfully. May 13 00:22:22.004188 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit. May 13 00:22:22.004274 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:22:22.005341 systemd-logind[1552]: Removed session 22. May 13 00:22:26.529528 kubelet[2751]: E0513 00:22:26.529490 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:27.018683 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:45462.service - OpenSSH per-connection server daemon (10.0.0.1:45462). May 13 00:22:27.062696 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 45462 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:27.064807 sshd[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:27.070359 systemd-logind[1552]: New session 23 of user core. May 13 00:22:27.075757 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 00:22:27.193833 sshd[5983]: pam_unix(sshd:session): session closed for user core May 13 00:22:27.197857 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:45462.service: Deactivated successfully. May 13 00:22:27.200017 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:22:27.200022 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit. May 13 00:22:27.201180 systemd-logind[1552]: Removed session 23. May 13 00:22:28.530335 kubelet[2751]: E0513 00:22:28.530284 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:32.202823 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:57990.service - OpenSSH per-connection server daemon (10.0.0.1:57990). May 13 00:22:32.238689 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 57990 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:32.240550 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:32.244931 systemd-logind[1552]: New session 24 of user core. May 13 00:22:32.252671 systemd[1]: Started session-24.scope - Session 24 of User core. 
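The recurring kubelet error "Nameserver limits exceeded" fires because the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. A rough sketch of that truncation rule, assuming a simplistic resolv.conf parser rather than kubelet's real dns package:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's historical MAXNS limit, which kubelet enforces

// applyNameserverLimit keeps the first maxNameservers entries and reports
// whether any were dropped, mirroring the warning in the kubelet log.
func applyNameserverLimit(resolvConf string) (kept []string, dropped bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		return kept[:maxNameservers], true
	}
	return kept, false
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := applyNameserverLimit(conf)
	fmt.Println(kept, dropped) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}
```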
May 13 00:22:32.381140 sshd[5998]: pam_unix(sshd:session): session closed for user core May 13 00:22:32.385657 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:57990.service: Deactivated successfully. May 13 00:22:32.388477 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:22:32.389304 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit. May 13 00:22:32.390582 systemd-logind[1552]: Removed session 24. May 13 00:22:37.389769 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:58000.service - OpenSSH per-connection server daemon (10.0.0.1:58000). May 13 00:22:37.431678 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 58000 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:22:37.433321 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:37.437648 systemd-logind[1552]: New session 25 of user core. May 13 00:22:37.447642 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 00:22:37.529956 kubelet[2751]: E0513 00:22:37.529907 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:37.530860 kubelet[2751]: E0513 00:22:37.530812 2751 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:37.569760 sshd[6019]: pam_unix(sshd:session): session closed for user core May 13 00:22:37.573246 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:58000.service: Deactivated successfully. May 13 00:22:37.576730 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit. May 13 00:22:37.577172 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:22:37.578231 systemd-logind[1552]: Removed session 25. May 13 00:22:38.201852 update_engine[1556]: I20250513 00:22:38.201766 1556 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 00:22:38.201852 update_engine[1556]: I20250513 00:22:38.201838 1556 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 00:22:38.202376 update_engine[1556]: I20250513 00:22:38.202327 1556 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 13 00:22:38.202979 update_engine[1556]: I20250513 00:22:38.202948 1556 omaha_request_params.cc:62] Current group set to lts May 13 00:22:38.203130 update_engine[1556]: I20250513 00:22:38.203101 1556 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 00:22:38.203130 update_engine[1556]: I20250513 00:22:38.203117 1556 update_attempter.cc:643] Scheduling an action processor start. 
May 13 00:22:38.203196 update_engine[1556]: I20250513 00:22:38.203140 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 00:22:38.203196 update_engine[1556]: I20250513 00:22:38.203188 1556 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 13 00:22:38.203292 update_engine[1556]: I20250513 00:22:38.203258 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 00:22:38.203292 update_engine[1556]: I20250513 00:22:38.203279 1556 omaha_request_action.cc:272] Request: May 13 00:22:38.203292 update_engine[1556]: [Omaha request XML elided from the capture] May 13 00:22:38.203292 update_engine[1556]: I20250513 00:22:38.203289 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 00:22:38.205871 update_engine[1556]: I20250513 00:22:38.205824 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 00:22:38.206179 update_engine[1556]: I20250513 00:22:38.206131 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 00:22:38.207010 locksmithd[1596]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 13 00:22:38.217633 update_engine[1556]: E20250513 00:22:38.217594 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 00:22:38.217721 update_engine[1556]: I20250513 00:22:38.217653 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
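The closing update_engine records show the Omaha request being posted to the literal host "disabled" (the configured update server), DNS resolution failing, and "retry 1" being scheduled against a one-second timer. A hedged Go sketch of that bounded retry-with-timeout loop; the real client is C++ built around libcurl, and the URL, body, and retry count here are illustrative only:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// postWithRetry mimics the fetcher's behavior in the log: attempt the POST,
// and on a transport error (such as "Could not resolve host: disabled") log
// the failure and retry a bounded number of times.
func postWithRetry(url, body string, maxRetries int) error {
	client := &http.Client{Timeout: 10 * time.Second}
	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		var resp *http.Response
		resp, err = client.Post(url, "text/xml", strings.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(1 * time.Second) // crude stand-in for the fetcher's 1-second timeout source
	}
	return err
}

func main() {
	// "disabled" is not a resolvable host, so every attempt fails, just as
	// in the journal above.
	_ = postWithRetry("https://disabled/update", "<request/>", 3)
}
```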