May 13 00:29:27.871907 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 00:29:27.871929 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:29:27.871940 kernel: BIOS-provided physical RAM map:
May 13 00:29:27.871946 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 00:29:27.871952 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 00:29:27.871958 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 00:29:27.871965 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 00:29:27.871971 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 00:29:27.871977 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:29:27.871986 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 00:29:27.871992 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 00:29:27.871998 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 00:29:27.872004 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 00:29:27.872010 kernel: NX (Execute Disable) protection: active
May 13 00:29:27.872018 kernel: APIC: Static calls initialized
May 13 00:29:27.872027 kernel: SMBIOS 2.8 present.
May 13 00:29:27.872034 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 00:29:27.872040 kernel: Hypervisor detected: KVM
May 13 00:29:27.872047 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:29:27.872053 kernel: kvm-clock: using sched offset of 2185993859 cycles
May 13 00:29:27.872060 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:29:27.872067 kernel: tsc: Detected 2794.748 MHz processor
May 13 00:29:27.872074 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:29:27.872081 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:29:27.872088 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 00:29:27.872098 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 00:29:27.872104 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:29:27.872111 kernel: Using GB pages for direct mapping
May 13 00:29:27.872118 kernel: ACPI: Early table checksum verification disabled
May 13 00:29:27.872125 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 00:29:27.872132 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872139 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872146 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872155 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 00:29:27.872162 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872190 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872197 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872204 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:29:27.872211 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 00:29:27.872218 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 00:29:27.872229 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 00:29:27.872238 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 00:29:27.872245 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 00:29:27.872253 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 00:29:27.872260 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 00:29:27.872267 kernel: No NUMA configuration found
May 13 00:29:27.872274 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 00:29:27.872281 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 13 00:29:27.872290 kernel: Zone ranges:
May 13 00:29:27.872297 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:29:27.872305 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 00:29:27.872312 kernel: Normal empty
May 13 00:29:27.872319 kernel: Movable zone start for each node
May 13 00:29:27.872326 kernel: Early memory node ranges
May 13 00:29:27.872333 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 00:29:27.872340 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 00:29:27.872347 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 00:29:27.872357 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:29:27.872364 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 00:29:27.872371 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 00:29:27.872378 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:29:27.872385 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:29:27.872392 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:29:27.872399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:29:27.872406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:29:27.872413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:29:27.872423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:29:27.872430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:29:27.872437 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:29:27.872444 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:29:27.872451 kernel: TSC deadline timer available
May 13 00:29:27.872458 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:29:27.872465 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 00:29:27.872472 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:29:27.872479 kernel: kvm-guest: setup PV sched yield
May 13 00:29:27.872486 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 00:29:27.872496 kernel: Booting paravirtualized kernel on KVM
May 13 00:29:27.872504 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:29:27.872511 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 00:29:27.872518 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 00:29:27.872525 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 00:29:27.872532 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:29:27.872539 kernel: kvm-guest: PV spinlocks enabled
May 13 00:29:27.872546 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:29:27.872555 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:29:27.872565 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:29:27.872572 kernel: random: crng init done
May 13 00:29:27.872579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:29:27.872586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:29:27.872593 kernel: Fallback order for Node 0: 0
May 13 00:29:27.872600 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 13 00:29:27.872607 kernel: Policy zone: DMA32
May 13 00:29:27.872614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:29:27.872624 kernel: Memory: 2434584K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 136908K reserved, 0K cma-reserved)
May 13 00:29:27.872631 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:29:27.872639 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 00:29:27.872646 kernel: ftrace: allocated 149 pages with 4 groups
May 13 00:29:27.872653 kernel: Dynamic Preempt: voluntary
May 13 00:29:27.872660 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:29:27.872672 kernel: rcu: RCU event tracing is enabled.
May 13 00:29:27.872681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:29:27.872689 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:29:27.872700 kernel: Rude variant of Tasks RCU enabled.
May 13 00:29:27.872708 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:29:27.872715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:29:27.872722 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:29:27.872729 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:29:27.872736 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:29:27.872743 kernel: Console: colour VGA+ 80x25
May 13 00:29:27.872750 kernel: printk: console [ttyS0] enabled
May 13 00:29:27.872757 kernel: ACPI: Core revision 20230628
May 13 00:29:27.872767 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:29:27.872774 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:29:27.872781 kernel: x2apic enabled
May 13 00:29:27.872788 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 00:29:27.872796 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 00:29:27.872803 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 00:29:27.872810 kernel: kvm-guest: setup PV IPIs
May 13 00:29:27.872827 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:29:27.872843 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:29:27.872850 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 00:29:27.872858 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:29:27.872865 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:29:27.872875 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:29:27.872883 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:29:27.872890 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:29:27.872898 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:29:27.872905 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:29:27.872915 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:29:27.872923 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:29:27.872930 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 00:29:27.872938 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 00:29:27.872946 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 00:29:27.872953 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 00:29:27.872961 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:29:27.872968 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:29:27.872979 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:29:27.872986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:29:27.872994 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 00:29:27.873001 kernel: Freeing SMP alternatives memory: 32K
May 13 00:29:27.873009 kernel: pid_max: default: 32768 minimum: 301
May 13 00:29:27.873016 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:29:27.873023 kernel: landlock: Up and running.
May 13 00:29:27.873031 kernel: SELinux: Initializing.
May 13 00:29:27.873038 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:29:27.873048 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:29:27.873056 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:29:27.873063 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:29:27.873071 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:29:27.873079 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:29:27.873086 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:29:27.873093 kernel: ... version: 0
May 13 00:29:27.873101 kernel: ... bit width: 48
May 13 00:29:27.873108 kernel: ... generic registers: 6
May 13 00:29:27.873118 kernel: ... value mask: 0000ffffffffffff
May 13 00:29:27.873125 kernel: ... max period: 00007fffffffffff
May 13 00:29:27.873133 kernel: ... fixed-purpose events: 0
May 13 00:29:27.873140 kernel: ... event mask: 000000000000003f
May 13 00:29:27.873147 kernel: signal: max sigframe size: 1776
May 13 00:29:27.873155 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:29:27.873163 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:29:27.873271 kernel: smp: Bringing up secondary CPUs ...
May 13 00:29:27.873279 kernel: smpboot: x86: Booting SMP configuration:
May 13 00:29:27.873289 kernel: .... node #0, CPUs: #1 #2 #3
May 13 00:29:27.873297 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:29:27.873304 kernel: smpboot: Max logical packages: 1
May 13 00:29:27.873312 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 00:29:27.873319 kernel: devtmpfs: initialized
May 13 00:29:27.873326 kernel: x86/mm: Memory block size: 128MB
May 13 00:29:27.873334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:29:27.873341 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:29:27.873349 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:29:27.873358 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:29:27.873366 kernel: audit: initializing netlink subsys (disabled)
May 13 00:29:27.873373 kernel: audit: type=2000 audit(1747096167.466:1): state=initialized audit_enabled=0 res=1
May 13 00:29:27.873380 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:29:27.873388 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:29:27.873395 kernel: cpuidle: using governor menu
May 13 00:29:27.873403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:29:27.873410 kernel: dca service started, version 1.12.1
May 13 00:29:27.873418 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:29:27.873428 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 13 00:29:27.873435 kernel: PCI: Using configuration type 1 for base access
May 13 00:29:27.873443 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:29:27.873450 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:29:27.873457 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:29:27.873465 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:29:27.873472 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:29:27.873480 kernel: ACPI: Added _OSI(Module Device)
May 13 00:29:27.873487 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:29:27.873497 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:29:27.873504 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:29:27.873512 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:29:27.873519 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 00:29:27.873526 kernel: ACPI: Interpreter enabled
May 13 00:29:27.873534 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:29:27.873541 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:29:27.873549 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:29:27.873556 kernel: PCI: Using E820 reservations for host bridge windows
May 13 00:29:27.873566 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:29:27.873573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:29:27.873749 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:29:27.873907 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:29:27.874029 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:29:27.874040 kernel: PCI host bridge to bus 0000:00
May 13 00:29:27.874181 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:29:27.874300 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:29:27.874409 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:29:27.874518 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:29:27.874629 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:29:27.874737 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 00:29:27.874854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:29:27.874991 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:29:27.875126 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:29:27.875274 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 13 00:29:27.875395 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 13 00:29:27.875513 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 13 00:29:27.875632 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:29:27.875765 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:29:27.875902 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 13 00:29:27.876022 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 13 00:29:27.876142 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 00:29:27.876288 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:29:27.876407 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 13 00:29:27.876527 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 13 00:29:27.876646 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 00:29:27.876779 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:29:27.876908 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 13 00:29:27.877027 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 13 00:29:27.877180 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 00:29:27.877376 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 13 00:29:27.877508 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:29:27.877629 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:29:27.877760 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:29:27.877888 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 13 00:29:27.878007 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 13 00:29:27.878137 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:29:27.878269 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 13 00:29:27.878280 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:29:27.878288 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:29:27.878300 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:29:27.878307 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:29:27.878315 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:29:27.878322 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:29:27.878330 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:29:27.878338 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:29:27.878345 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:29:27.878353 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:29:27.878360 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:29:27.878370 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:29:27.878378 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:29:27.878385 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:29:27.878392 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:29:27.878400 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:29:27.878407 kernel: iommu: Default domain type: Translated
May 13 00:29:27.878415 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:29:27.878422 kernel: PCI: Using ACPI for IRQ routing
May 13 00:29:27.878430 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:29:27.878440 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 00:29:27.878447 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 00:29:27.878641 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:29:27.878811 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:29:27.878967 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:29:27.878979 kernel: vgaarb: loaded
May 13 00:29:27.878994 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:29:27.879009 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:29:27.879022 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:29:27.879029 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:29:27.879037 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:29:27.879045 kernel: pnp: PnP ACPI init
May 13 00:29:27.879194 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:29:27.879205 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:29:27.879213 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:29:27.879221 kernel: NET: Registered PF_INET protocol family
May 13 00:29:27.879232 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:29:27.879240 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:29:27.879247 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:29:27.879255 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:29:27.879263 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:29:27.879270 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:29:27.879278 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:29:27.879286 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:29:27.879293 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:29:27.879304 kernel: NET: Registered PF_XDP protocol family
May 13 00:29:27.879415 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:29:27.879525 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:29:27.879634 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:29:27.879744 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:29:27.879860 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:29:27.879970 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 00:29:27.879980 kernel: PCI: CLS 0 bytes, default 64
May 13 00:29:27.879991 kernel: Initialise system trusted keyrings
May 13 00:29:27.879999 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:29:27.880006 kernel: Key type asymmetric registered
May 13 00:29:27.880014 kernel: Asymmetric key parser 'x509' registered
May 13 00:29:27.880021 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 00:29:27.880029 kernel: io scheduler mq-deadline registered
May 13 00:29:27.880036 kernel: io scheduler kyber registered
May 13 00:29:27.880044 kernel: io scheduler bfq registered
May 13 00:29:27.880051 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:29:27.880062 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:29:27.880070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:29:27.880077 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:29:27.880085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:29:27.880092 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:29:27.880100 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:29:27.880108 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:29:27.880115 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:29:27.880308 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:29:27.880325 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:29:27.880438 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:29:27.880553 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:29:27 UTC (1747096167)
May 13 00:29:27.880666 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:29:27.880676 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 00:29:27.880683 kernel: NET: Registered PF_INET6 protocol family
May 13 00:29:27.880691 kernel: Segment Routing with IPv6
May 13 00:29:27.880698 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:29:27.880709 kernel: NET: Registered PF_PACKET protocol family
May 13 00:29:27.880717 kernel: Key type dns_resolver registered
May 13 00:29:27.880724 kernel: IPI shorthand broadcast: enabled
May 13 00:29:27.880732 kernel: sched_clock: Marking stable (607002830, 104234328)->(725087727, -13850569)
May 13 00:29:27.880739 kernel: registered taskstats version 1
May 13 00:29:27.880747 kernel: Loading compiled-in X.509 certificates
May 13 00:29:27.880755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 00:29:27.880762 kernel: Key type .fscrypt registered
May 13 00:29:27.880769 kernel: Key type fscrypt-provisioning registered
May 13 00:29:27.880780 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:29:27.880787 kernel: ima: Allocated hash algorithm: sha1
May 13 00:29:27.880795 kernel: ima: No architecture policies found
May 13 00:29:27.880802 kernel: clk: Disabling unused clocks
May 13 00:29:27.880810 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 00:29:27.880818 kernel: Write protecting the kernel read-only data: 36864k
May 13 00:29:27.880825 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 00:29:27.880842 kernel: Run /init as init process
May 13 00:29:27.880850 kernel: with arguments:
May 13 00:29:27.880860 kernel: /init
May 13 00:29:27.880868 kernel: with environment:
May 13 00:29:27.880875 kernel: HOME=/
May 13 00:29:27.880882 kernel: TERM=linux
May 13 00:29:27.880890 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:29:27.880899 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:29:27.880909 systemd[1]: Detected virtualization kvm.
May 13 00:29:27.880918 systemd[1]: Detected architecture x86-64.
May 13 00:29:27.880928 systemd[1]: Running in initrd.
May 13 00:29:27.880936 systemd[1]: No hostname configured, using default hostname.
May 13 00:29:27.880943 systemd[1]: Hostname set to .
May 13 00:29:27.880952 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:29:27.880960 systemd[1]: Queued start job for default target initrd.target.
May 13 00:29:27.880968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:29:27.880976 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:29:27.880985 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:29:27.880996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:29:27.881015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:29:27.881026 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:29:27.881036 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:29:27.881047 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:29:27.881055 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:29:27.881063 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:29:27.881072 systemd[1]: Reached target paths.target - Path Units.
May 13 00:29:27.881080 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:29:27.881098 systemd[1]: Reached target swap.target - Swaps.
May 13 00:29:27.881114 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:29:27.881130 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:29:27.881139 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:29:27.881151 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:29:27.881159 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:29:27.881222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:29:27.881230 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:29:27.881239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:29:27.881247 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:29:27.881255 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:29:27.881264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:29:27.881272 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:29:27.881283 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:29:27.881292 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:29:27.881300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:29:27.881311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:29:27.881319 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:29:27.881327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:29:27.881336 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:29:27.881365 systemd-journald[192]: Collecting audit messages is disabled.
May 13 00:29:27.881387 systemd-journald[192]: Journal started
May 13 00:29:27.881407 systemd-journald[192]: Runtime Journal (/run/log/journal/e5ca80d5f50746ce94a3f9534be261ab) is 6.0M, max 48.4M, 42.3M free.
May 13 00:29:27.882429 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:29:27.879045 systemd-modules-load[194]: Inserted module 'overlay'
May 13 00:29:27.915576 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:29:27.915599 kernel: Bridge firewalling registered
May 13 00:29:27.908191 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 13 00:29:27.917331 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:29:27.918728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:29:27.921111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:29:27.923645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:29:27.938319 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:29:27.941755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:29:27.944764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:29:27.947900 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:29:27.955232 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:29:27.958020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:29:27.960325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:29:27.961702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:29:27.976308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:29:27.979522 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:29:27.986399 dracut-cmdline[228]: dracut-dracut-053
May 13 00:29:27.989613 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:29:28.012624 systemd-resolved[231]: Positive Trust Anchors:
May 13 00:29:28.012646 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:29:28.012678 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:29:28.015212 systemd-resolved[231]: Defaulting to hostname 'linux'.
May 13 00:29:28.016313 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:29:28.022792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:29:28.074207 kernel: SCSI subsystem initialized
May 13 00:29:28.084189 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:29:28.095205 kernel: iscsi: registered transport (tcp)
May 13 00:29:28.115522 kernel: iscsi: registered transport (qla4xxx)
May 13 00:29:28.115569 kernel: QLogic iSCSI HBA Driver
May 13 00:29:28.171116 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:29:28.185391 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:29:28.211462 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:29:28.211548 kernel: device-mapper: uevent: version 1.0.3
May 13 00:29:28.212578 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:29:28.253199 kernel: raid6: avx2x4 gen() 30093 MB/s
May 13 00:29:28.270195 kernel: raid6: avx2x2 gen() 30832 MB/s
May 13 00:29:28.287282 kernel: raid6: avx2x1 gen() 25898 MB/s
May 13 00:29:28.287317 kernel: raid6: using algorithm avx2x2 gen() 30832 MB/s
May 13 00:29:28.305301 kernel: raid6: .... xor() 19833 MB/s, rmw enabled
May 13 00:29:28.305345 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:29:28.325206 kernel: xor: automatically using best checksumming function avx
May 13 00:29:28.478204 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:29:28.489357 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:29:28.502345 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:29:28.513899 systemd-udevd[413]: Using default interface naming scheme 'v255'.
May 13 00:29:28.518657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:29:28.532300 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:29:28.545719 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
May 13 00:29:28.573536 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:29:28.581357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:29:28.643887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:29:28.657515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:29:28.668088 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:29:28.670156 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:29:28.675033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:29:28.676374 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:29:28.683194 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 13 00:29:28.687226 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:29:28.688595 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:29:28.697250 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:29:28.697268 kernel: GPT:9289727 != 19775487
May 13 00:29:28.697286 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:29:28.697298 kernel: GPT:9289727 != 19775487
May 13 00:29:28.697309 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:29:28.697321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:29:28.699186 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:29:28.709194 kernel: libata version 3.00 loaded.
May 13 00:29:28.702323 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:29:28.712206 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:29:28.712233 kernel: AES CTR mode by8 optimization enabled
May 13 00:29:28.715185 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:29:28.715391 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:29:28.716000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:29:28.721682 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:29:28.721898 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:29:28.718144 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:29:28.723960 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:29:28.727872 kernel: scsi host0: ahci
May 13 00:29:28.727215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:29:28.727372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:29:28.735701 kernel: scsi host1: ahci
May 13 00:29:28.735920 kernel: scsi host2: ahci
May 13 00:29:28.736102 kernel: scsi host3: ahci
May 13 00:29:28.736319 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (472)
May 13 00:29:28.736336 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (459)
May 13 00:29:28.736350 kernel: scsi host4: ahci
May 13 00:29:28.734240 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:29:28.745289 kernel: scsi host5: ahci
May 13 00:29:28.745464 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 13 00:29:28.745476 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 13 00:29:28.745486 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 13 00:29:28.745495 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 13 00:29:28.745509 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 13 00:29:28.745519 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 13 00:29:28.752407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:29:28.761090 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:29:28.776117 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:29:28.800392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:29:28.800636 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:29:28.803608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:29:28.811661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:29:28.823310 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:29:28.824205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:29:28.837811 disk-uuid[555]: Primary Header is updated.
May 13 00:29:28.837811 disk-uuid[555]: Secondary Entries is updated.
May 13 00:29:28.837811 disk-uuid[555]: Secondary Header is updated.
May 13 00:29:28.841196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:29:28.845027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:29:28.847188 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:29:28.850202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:29:29.056213 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:29:29.056259 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:29:29.056275 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:29:29.057193 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:29:29.058193 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:29:29.059193 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:29:29.059219 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:29:29.060367 kernel: ata3.00: applying bridge limits
May 13 00:29:29.061194 kernel: ata3.00: configured for UDMA/100
May 13 00:29:29.061220 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:29:29.109728 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:29:29.109959 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:29:29.122203 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:29:29.850191 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:29:29.850594 disk-uuid[564]: The operation has completed successfully.
May 13 00:29:29.876566 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:29:29.876684 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:29:29.908318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:29:29.912049 sh[593]: Success
May 13 00:29:29.926194 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:29:29.957624 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:29:29.969695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:29:29.972740 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:29:29.983195 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 00:29:29.983225 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 00:29:29.983237 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:29:29.984209 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:29:29.985546 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:29:29.989457 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:29:29.990350 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:29:30.001287 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:29:30.002824 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:29:30.012546 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:29:30.012579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:29:30.012590 kernel: BTRFS info (device vda6): using free space tree
May 13 00:29:30.015269 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:29:30.023637 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:29:30.025327 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:29:30.033699 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:29:30.043346 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:29:30.097229 ignition[693]: Ignition 2.19.0
May 13 00:29:30.097756 ignition[693]: Stage: fetch-offline
May 13 00:29:30.097803 ignition[693]: no configs at "/usr/lib/ignition/base.d"
May 13 00:29:30.097813 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:29:30.097899 ignition[693]: parsed url from cmdline: ""
May 13 00:29:30.097903 ignition[693]: no config URL provided
May 13 00:29:30.097908 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:29:30.097916 ignition[693]: no config at "/usr/lib/ignition/user.ign"
May 13 00:29:30.097940 ignition[693]: op(1): [started] loading QEMU firmware config module
May 13 00:29:30.097945 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:29:30.106785 ignition[693]: op(1): [finished] loading QEMU firmware config module
May 13 00:29:30.116570 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:29:30.126333 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:29:30.147763 systemd-networkd[782]: lo: Link UP
May 13 00:29:30.147777 systemd-networkd[782]: lo: Gained carrier
May 13 00:29:30.148613 ignition[693]: parsing config with SHA512: e8b75fccad3392a7658d93168589815d8504a3200d899a380a2b161609e6ae6c798690220262a25f6e947ff11804c7cdf519ddb00579667351dba5033737d270
May 13 00:29:30.150777 systemd-networkd[782]: Enumeration completed
May 13 00:29:30.151612 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:29:30.151680 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:29:30.151684 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:29:30.152195 ignition[693]: fetch-offline: fetch-offline passed
May 13 00:29:30.151857 unknown[693]: fetched base config from "system"
May 13 00:29:30.152254 ignition[693]: Ignition finished successfully
May 13 00:29:30.151864 unknown[693]: fetched user config from "qemu"
May 13 00:29:30.152851 systemd-networkd[782]: eth0: Link UP
May 13 00:29:30.152855 systemd-networkd[782]: eth0: Gained carrier
May 13 00:29:30.152861 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:29:30.163852 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:29:30.167187 systemd[1]: Reached target network.target - Network.
May 13 00:29:30.169034 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:29:30.177210 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:29:30.180305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:29:30.193891 ignition[785]: Ignition 2.19.0
May 13 00:29:30.193902 ignition[785]: Stage: kargs
May 13 00:29:30.194051 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 13 00:29:30.194062 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:29:30.194794 ignition[785]: kargs: kargs passed
May 13 00:29:30.194835 ignition[785]: Ignition finished successfully
May 13 00:29:30.201595 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:29:30.215275 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:29:30.227218 ignition[794]: Ignition 2.19.0
May 13 00:29:30.227226 ignition[794]: Stage: disks
May 13 00:29:30.227387 ignition[794]: no configs at "/usr/lib/ignition/base.d"
May 13 00:29:30.227400 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:29:30.228135 ignition[794]: disks: disks passed
May 13 00:29:30.228190 ignition[794]: Ignition finished successfully
May 13 00:29:30.234036 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:29:30.236153 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:29:30.238327 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:29:30.238557 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:29:30.241053 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:29:30.243499 systemd[1]: Reached target basic.target - Basic System.
May 13 00:29:30.257286 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:29:30.270990 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:29:30.276774 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:29:30.287286 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:29:30.368195 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 00:29:30.368608 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:29:30.369687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:29:30.379236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:29:30.380550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:29:30.382492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:29:30.382528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:29:30.394089 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
May 13 00:29:30.394114 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:29:30.394129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:29:30.394142 kernel: BTRFS info (device vda6): using free space tree
May 13 00:29:30.382547 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:29:30.397516 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:29:30.388499 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:29:30.394777 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:29:30.399201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:29:30.429554 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:29:30.434199 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 13 00:29:30.437485 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:29:30.440869 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:29:30.514343 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:29:30.524295 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:29:30.525864 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:29:30.533202 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:29:30.548449 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:29:30.552372 ignition[929]: INFO : Ignition 2.19.0
May 13 00:29:30.552372 ignition[929]: INFO : Stage: mount
May 13 00:29:30.553974 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:29:30.553974 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:29:30.557098 ignition[929]: INFO : mount: mount passed
May 13 00:29:30.557856 ignition[929]: INFO : Ignition finished successfully
May 13 00:29:30.560407 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:29:30.569282 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:29:30.982405 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:29:30.996305 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:29:31.002196 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
May 13 00:29:31.004262 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:29:31.004284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:29:31.004303 kernel: BTRFS info (device vda6): using free space tree
May 13 00:29:31.007193 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:29:31.008425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:29:31.032245 ignition[959]: INFO : Ignition 2.19.0
May 13 00:29:31.032245 ignition[959]: INFO : Stage: files
May 13 00:29:31.033974 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:29:31.033974 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:29:31.033974 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:29:31.033974 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:29:31.033974 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:29:31.040242 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:29:31.040242 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:29:31.040242 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:29:31.040242 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:29:31.040242 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 00:29:31.036310 unknown[959]: wrote ssh authorized keys file for user: core
May 13 00:29:31.103450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:29:31.243299 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:29:31.243299 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:29:31.247074 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:29:31.248782 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:29:31.250520 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:29:31.252203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:29:31.254361 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:29:31.254361 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:29:31.257860 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:29:31.259686 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:29:31.261617 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:29:31.263403 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:29:31.265975 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:29:31.268395 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:29:31.270499 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 00:29:31.651632 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 00:29:32.011354 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:29:32.011354 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 00:29:32.015163 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 00:29:32.017041 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:29:32.047836 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:29:32.054837 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:29:32.056418 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:29:32.056418 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:29:32.056418 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:29:32.056418 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:29:32.056418 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:29:32.056418 ignition[959]: INFO : files: files passed
May 13 00:29:32.056418 ignition[959]: INFO : Ignition finished successfully
May 13 00:29:32.063507 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:29:32.074311 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:29:32.076996 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:29:32.078877 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:29:32.078982 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:29:32.086753 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:29:32.089365 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:29:32.091006 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:29:32.093765 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:29:32.091950 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:29:32.094337 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:29:32.107341 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:29:32.130799 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:29:32.130919 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:29:32.133292 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:29:32.135364 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:29:32.137398 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:29:32.150350 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:29:32.155064 systemd-networkd[782]: eth0: Gained IPv6LL
May 13 00:29:32.163895 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:29:32.166528 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:29:32.179088 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:29:32.180403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:29:32.182719 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:29:32.184759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:29:32.184875 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:29:32.187133 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:29:32.189028 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:29:32.191437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:29:32.193923 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
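
Every op(N) above is Ignition executing one entry from the machine's provider-supplied config. The config itself is not reproduced in the log; the fragment below is a hypothetical Ignition v3 config, built as a Python dict for illustration, whose storage and systemd sections would produce operations like op(3) (fetch a file over HTTPS), op(9) (write a symlink), and op(b)/op(11) (install and enable prepare-helm.service). The SSH key and unit contents are placeholders.

import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [{
            "name": "core",
            # placeholder key; the real key is not visible in the log
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],
        }]
    },
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
        }],
    },
    "systemd": {
        "units": [
            # contents elided; a real unit carries its full [Unit]/[Service] text
            {"name": "prepare-helm.service", "enabled": True, "contents": "..."},
            {"name": "coreos-metadata.service", "enabled": False},
        ]
    },
}

print(json.dumps(config, indent=2))
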
May 13 00:29:32.196359 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:29:32.199056 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:29:32.201745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:29:32.204636 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:29:32.207183 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:29:32.209958 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:29:32.212230 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:29:32.212337 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:29:32.214528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:29:32.216178 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:29:32.218273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:29:32.218409 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:29:32.220550 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:29:32.220655 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:29:32.222871 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:29:32.222978 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:29:32.225024 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:29:32.226923 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:29:32.230314 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:29:32.232299 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:29:32.234518 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:29:32.237024 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:29:32.237142 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:29:32.239210 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:29:32.239319 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:29:32.241518 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:29:32.241643 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:29:32.244247 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:29:32.244356 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:29:32.253353 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:29:32.255543 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:29:32.256555 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:29:32.256680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:29:32.259025 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:29:32.259212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:29:32.266921 ignition[1014]: INFO : Ignition 2.19.0
May 13 00:29:32.266921 ignition[1014]: INFO : Stage: umount
May 13 00:29:32.266921 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:29:32.266921 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:29:32.266921 ignition[1014]: INFO : umount: umount passed
May 13 00:29:32.266921 ignition[1014]: INFO : Ignition finished successfully
May 13 00:29:32.264547 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:29:32.264788 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 00:29:32.268104 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:29:32.268274 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:29:32.271512 systemd[1]: Stopped target network.target - Network.
May 13 00:29:32.273303 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:29:32.273362 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:29:32.275113 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:29:32.275162 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:29:32.277035 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:29:32.277083 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:29:32.279030 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:29:32.279082 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:29:32.281436 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:29:32.283759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:29:32.286625 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:29:32.290550 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:29:32.290687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:29:32.293218 systemd-networkd[782]: eth0: DHCPv6 lease lost
May 13 00:29:32.294779 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:29:32.294851 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:29:32.297295 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:29:32.297429 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:29:32.300007 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:29:32.300087 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:29:32.306305 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:29:32.308308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:29:32.308367 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:29:32.310595 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:29:32.310644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:29:32.312771 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:29:32.312819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:29:32.314037 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:29:32.327255 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:29:32.327385 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:29:32.331933 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:29:32.332132 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:29:32.334354 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:29:32.334406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 00:29:32.336446 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:29:32.336485 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:29:32.338441 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:29:32.338489 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:29:32.340588 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:29:32.340636 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 00:29:32.342745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:29:32.342791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:29:32.364340 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 00:29:32.365486 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 00:29:32.365547 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:29:32.367890 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 00:29:32.367938 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:29:32.370149 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:29:32.370220 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:29:32.372617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:29:32.372666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:29:32.375218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:29:32.375328 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 00:29:32.479793 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:29:32.479934 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 00:29:32.482238 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 00:29:32.484006 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:29:32.484059 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 00:29:32.492308 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 00:29:32.500128 systemd[1]: Switching root.
May 13 00:29:32.529915 systemd-journald[192]: Journal stopped
May 13 00:29:33.626480 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 13 00:29:33.626551 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:29:33.626572 kernel: SELinux: policy capability open_perms=1
May 13 00:29:33.626584 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:29:33.626595 kernel: SELinux: policy capability always_check_network=0
May 13 00:29:33.626606 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:29:33.626623 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:29:33.626637 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:29:33.626649 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:29:33.626660 kernel: audit: type=1403 audit(1747096172.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:29:33.626685 systemd[1]: Successfully loaded SELinux policy in 43.648ms.
May 13 00:29:33.626710 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.466ms.
May 13 00:29:33.626723 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:29:33.626735 systemd[1]: Detected virtualization kvm.
May 13 00:29:33.626747 systemd[1]: Detected architecture x86-64.
May 13 00:29:33.626762 systemd[1]: Detected first boot.
May 13 00:29:33.626774 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:29:33.626785 zram_generator::config[1058]: No configuration found.
May 13 00:29:33.626798 systemd[1]: Populated /etc with preset unit settings.
May 13 00:29:33.626810 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 00:29:33.626822 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 00:29:33.626834 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 00:29:33.626846 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 00:29:33.626865 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 00:29:33.626880 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 00:29:33.626892 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 00:29:33.626903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 00:29:33.626916 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 00:29:33.626928 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 00:29:33.626939 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 00:29:33.626951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:29:33.626963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:29:33.626977 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 00:29:33.626990 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 00:29:33.627001 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 00:29:33.627013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:29:33.627025 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 00:29:33.627036 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:29:33.627048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 00:29:33.627060 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 00:29:33.627071 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 00:29:33.627086 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 00:29:33.627098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:29:33.627110 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:29:33.627123 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:29:33.627135 systemd[1]: Reached target swap.target - Swaps.
May 13 00:29:33.627146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 00:29:33.627158 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 00:29:33.627182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:29:33.627198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:29:33.627211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:29:33.627222 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 00:29:33.627234 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 00:29:33.627245 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 00:29:33.627257 systemd[1]: Mounting media.mount - External Media Directory...
May 13 00:29:33.627271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:33.627287 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 00:29:33.627300 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 00:29:33.627315 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 00:29:33.627328 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:29:33.627340 systemd[1]: Reached target machines.target - Containers.
May 13 00:29:33.627352 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 00:29:33.627364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:29:33.627376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:29:33.627387 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 00:29:33.627401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:29:33.627416 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:29:33.627428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:29:33.627440 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 00:29:33.627452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:29:33.627464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:29:33.627476 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 00:29:33.627487 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 00:29:33.627499 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 00:29:33.627511 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 00:29:33.627525 kernel: fuse: init (API version 7.39)
May 13 00:29:33.627536 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:29:33.627548 kernel: loop: module loaded
May 13 00:29:33.627560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:29:33.627572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 00:29:33.627583 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 00:29:33.627595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:29:33.627606 kernel: ACPI: bus type drm_connector registered
May 13 00:29:33.627618 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 00:29:33.627633 systemd[1]: Stopped verity-setup.service.
May 13 00:29:33.627646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:33.627684 systemd-journald[1128]: Collecting audit messages is disabled.
May 13 00:29:33.627711 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 00:29:33.627724 systemd-journald[1128]: Journal started
May 13 00:29:33.627745 systemd-journald[1128]: Runtime Journal (/run/log/journal/e5ca80d5f50746ce94a3f9534be261ab) is 6.0M, max 48.4M, 42.3M free.
May 13 00:29:33.409845 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:29:33.428630 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 00:29:33.429052 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 00:29:33.629780 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:29:33.630564 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 00:29:33.631929 systemd[1]: Mounted media.mount - External Media Directory.
May 13 00:29:33.633041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 00:29:33.634257 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 00:29:33.635491 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 00:29:33.636759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 00:29:33.638229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:29:33.639840 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:29:33.640014 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 00:29:33.641501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:29:33.641677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:29:33.643147 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:29:33.643336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:29:33.644728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:29:33.644893 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:29:33.646430 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:29:33.646598 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 00:29:33.647980 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:29:33.648146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:29:33.649541 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:29:33.650946 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 00:29:33.652683 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 00:29:33.668432 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 00:29:33.675246 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 00:29:33.677501 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 00:29:33.678650 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:29:33.678685 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:29:33.680692 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 00:29:33.682993 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 00:29:33.685144 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 00:29:33.686303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:29:33.689315 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 00:29:33.692284 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 00:29:33.693471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:29:33.696057 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 00:29:33.697385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:29:33.699044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:29:33.704433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 00:29:33.705989 systemd-journald[1128]: Time spent on flushing to /var/log/journal/e5ca80d5f50746ce94a3f9534be261ab is 21.270ms for 951 entries.
May 13 00:29:33.705989 systemd-journald[1128]: System Journal (/var/log/journal/e5ca80d5f50746ce94a3f9534be261ab) is 8.0M, max 195.6M, 187.6M free.
May 13 00:29:33.745186 systemd-journald[1128]: Received client request to flush runtime journal.
May 13 00:29:33.745226 kernel: loop0: detected capacity change from 0 to 210664
May 13 00:29:33.711998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:29:33.716842 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 00:29:33.718721 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 00:29:33.720302 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 00:29:33.721924 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 00:29:33.728147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:29:33.733239 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 00:29:33.744344 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 00:29:33.751278 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 00:29:33.753150 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 00:29:33.755210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:29:33.760354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:29:33.765730 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 00:29:33.765746 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
May 13 00:29:33.765759 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
May 13 00:29:33.772341 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:29:33.773001 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:29:33.775040 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 00:29:33.785832 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 00:29:33.790293 kernel: loop1: detected capacity change from 0 to 142488
May 13 00:29:33.812155 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 00:29:33.819354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:29:33.825185 kernel: loop2: detected capacity change from 0 to 140768
May 13 00:29:33.842773 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 13 00:29:33.843096 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
May 13 00:29:33.849044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:29:33.868191 kernel: loop3: detected capacity change from 0 to 210664
May 13 00:29:33.878186 kernel: loop4: detected capacity change from 0 to 142488
May 13 00:29:33.889192 kernel: loop5: detected capacity change from 0 to 140768
May 13 00:29:33.896219 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 00:29:33.896782 (sd-merge)[1199]: Merged extensions into '/usr'.
May 13 00:29:33.901818 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 00:29:33.901834 systemd[1]: Reloading...
May 13 00:29:33.961198 zram_generator::config[1228]: No configuration found.
May 13 00:29:34.023750 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
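
The loop0–loop5 capacity changes and the (sd-merge) lines record systemd-sysext attaching the three extension images and overlaying them onto /usr; 'kubernetes' is the kubernetes-v1.30.1 image that Ignition linked into /etc/extensions earlier. systemd-sysext only merges an image whose extension-release metadata is compatible with the host's os-release. A rough Python sketch of that compatibility rule; the release-file path follows the systemd-sysext documentation, while the unpacked tree path is hypothetical:

from pathlib import Path

def parse_kv(path):
    """Parse an os-release-style KEY=value file into a dict."""
    pairs = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, value = line.split("=", 1)
            pairs[key.strip()] = value.strip().strip('"')
    return pairs

def is_mergeable(tree, name):
    """Check the extension-release file an image must ship to be merged."""
    release = Path(tree) / "usr/lib/extension-release.d" / f"extension-release.{name}"
    if not release.exists():
        return False
    ext, host = parse_kv(release), parse_kv("/etc/os-release")
    # ID must match the host (or be the wildcard "_any"); a full check
    # would also compare SYSEXT_LEVEL / VERSION_ID.
    return ext.get("ID") in ("_any", host.get("ID"))

print(is_mergeable("/tmp/kubernetes-tree", "kubernetes"))  # hypothetical path
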
May 13 00:29:34.079632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:29:34.126984 systemd[1]: Reloading finished in 224 ms.
May 13 00:29:34.164930 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 00:29:34.166500 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 00:29:34.186510 systemd[1]: Starting ensure-sysext.service...
May 13 00:29:34.188976 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:29:34.196595 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
May 13 00:29:34.196614 systemd[1]: Reloading...
May 13 00:29:34.219788 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:29:34.220161 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 00:29:34.221151 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:29:34.221488 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 13 00:29:34.221570 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 13 00:29:34.224851 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:29:34.224864 systemd-tmpfiles[1263]: Skipping /boot
May 13 00:29:34.238457 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:29:34.238522 systemd-tmpfiles[1263]: Skipping /boot
May 13 00:29:34.256210 zram_generator::config[1290]: No configuration found.
May 13 00:29:34.369804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:29:34.416464 systemd[1]: Reloading finished in 219 ms.
May 13 00:29:34.436733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 00:29:34.449628 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:29:34.458050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:29:34.460524 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 00:29:34.462781 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 00:29:34.468241 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:29:34.471756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:29:34.479231 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 00:29:34.482586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:34.482763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:29:34.483955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:29:34.489245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:29:34.491664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:29:34.492797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:29:34.494928 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 00:29:34.497228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:34.500766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:29:34.500945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:29:34.504340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:29:34.504518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:29:34.509998 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 00:29:34.510660 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
May 13 00:29:34.513265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:34.513527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:29:34.521458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:29:34.527419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:29:34.528798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:29:34.528923 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:34.529805 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 00:29:34.531665 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:29:34.532716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:29:34.534754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:29:34.534923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:29:34.538880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:29:34.539085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:29:34.539835 augenrules[1359]: No rules
May 13 00:29:34.541696 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:29:34.548642 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 00:29:34.559673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:29:34.561944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:34.562165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:29:34.569250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:29:34.572040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:29:34.585570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:29:34.588059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:29:34.589261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:29:34.591881 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:29:34.598589 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 00:29:34.599698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:29:34.601201 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 00:29:34.603241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:29:34.603419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:29:34.604984 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:29:34.605159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:29:34.606937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:29:34.607107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:29:34.609026 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:29:34.609214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:29:34.612705 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 00:29:34.614941 systemd[1]: Finished ensure-sysext.service.
May 13 00:29:34.640682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 00:29:34.645198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1390)
May 13 00:29:34.645332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:29:34.645420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:29:34.651000 systemd-resolved[1333]: Positive Trust Anchors:
May 13 00:29:34.651220 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:29:34.651251 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:29:34.653343 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 00:29:34.656038 systemd-resolved[1333]: Defaulting to hostname 'linux'.
May 13 00:29:34.656109 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:29:34.672302 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:29:34.674496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:29:34.680193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:29:34.688347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 00:29:34.689902 systemd-networkd[1397]: lo: Link UP
May 13 00:29:34.689916 systemd-networkd[1397]: lo: Gained carrier
May 13 00:29:34.691606 systemd-networkd[1397]: Enumeration completed
May 13 00:29:34.691735 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:29:34.693002 systemd[1]: Reached target network.target - Network.
May 13 00:29:34.694731 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:29:34.694742 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:29:34.695454 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:29:34.695491 systemd-networkd[1397]: eth0: Link UP
May 13 00:29:34.695495 systemd-networkd[1397]: eth0: Gained carrier
May 13 00:29:34.695505 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:29:34.701238 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 00:29:34.701856 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 00:29:34.709237 kernel: ACPI: button: Power Button [PWRF]
May 13 00:29:34.707232 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:29:34.714104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 00:29:34.727185 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 13 00:29:34.736489 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 00:29:34.736860 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 00:29:34.737034 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 00:29:34.748420 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 00:29:35.325816 systemd-resolved[1333]: Clock change detected. Flushing caches.
May 13 00:29:35.325843 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:29:35.325883 systemd-timesyncd[1406]: Initial clock synchronization to Tue 2025-05-13 00:29:35.325760 UTC.
May 13 00:29:35.327501 systemd[1]: Reached target time-set.target - System Time Set.
May 13 00:29:35.405772 kernel: mousedev: PS/2 mouse device common for all mice
May 13 00:29:35.405961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:29:35.418165 kernel: kvm_amd: TSC scaling supported
May 13 00:29:35.418219 kernel: kvm_amd: Nested Virtualization enabled
May 13 00:29:35.418236 kernel: kvm_amd: Nested Paging enabled
May 13 00:29:35.418251 kernel: kvm_amd: LBR virtualization supported
May 13 00:29:35.419573 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 00:29:35.419596 kernel: kvm_amd: Virtual GIF supported
May 13 00:29:35.440753 kernel: EDAC MC: Ver: 3.0.0
May 13 00:29:35.481393 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 00:29:35.509975 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 00:29:35.511769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:29:35.518972 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:29:35.554872 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 00:29:35.556487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:29:35.557637 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:29:35.558822 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 00:29:35.560093 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 00:29:35.561587 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 00:29:35.562845 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 00:29:35.564107 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 00:29:35.565354 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:29:35.565380 systemd[1]: Reached target paths.target - Path Units.
May 13 00:29:35.566294 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:29:35.567798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 00:29:35.570459 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 00:29:35.586400 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 00:29:35.589004 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 00:29:35.590612 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 00:29:35.591791 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:29:35.592770 systemd[1]: Reached target basic.target - Basic System.
May 13 00:29:35.593763 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 00:29:35.593791 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 00:29:35.594760 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 00:29:35.596801 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 00:29:35.600723 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:29:35.601041 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 00:29:35.607861 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 00:29:35.609790 jq[1439]: false
May 13 00:29:35.609127 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 00:29:35.610661 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 00:29:35.615794 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 00:29:35.618083 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 00:29:35.622449 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 00:29:35.628058 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 00:29:35.631223 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:29:35.631780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:29:35.632953 systemd[1]: Starting update-engine.service - Update Engine...
May 13 00:29:35.637479 dbus-daemon[1438]: [system] SELinux support is enabled
May 13 00:29:35.638789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 00:29:35.640731 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 00:29:35.641590 extend-filesystems[1440]: Found loop3
May 13 00:29:35.642567 extend-filesystems[1440]: Found loop4
May 13 00:29:35.642567 extend-filesystems[1440]: Found loop5
May 13 00:29:35.642567 extend-filesystems[1440]: Found sr0
May 13 00:29:35.642567 extend-filesystems[1440]: Found vda
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda1
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda2
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda3
May 13 00:29:35.648261 extend-filesystems[1440]: Found usr
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda4
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda6
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda7
May 13 00:29:35.648261 extend-filesystems[1440]: Found vda9
May 13 00:29:35.648261 extend-filesystems[1440]: Checking size of /dev/vda9
May 13 00:29:35.644222 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 00:29:35.659131 update_engine[1452]: I20250513 00:29:35.654197 1452 main.cc:92] Flatcar Update Engine starting
May 13 00:29:35.659131 update_engine[1452]: I20250513 00:29:35.655858 1452 update_check_scheduler.cc:74] Next update check in 11m26s
May 13 00:29:35.662481 jq[1453]: true
May 13 00:29:35.662723 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:29:35.662956 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 00:29:35.663386 extend-filesystems[1440]: Resized partition /dev/vda9
May 13 00:29:35.663404 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:29:35.663600 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 00:29:35.667140 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:29:35.669367 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
May 13 00:29:35.667358 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 00:29:35.674282 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:29:35.674325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1395)
May 13 00:29:35.683039 jq[1464]: true
May 13 00:29:35.690892 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 00:29:35.690920 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 00:29:35.691222 systemd-logind[1449]: New seat seat0.
May 13 00:29:35.701527 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:29:35.705265 tar[1462]: linux-amd64/helm
May 13 00:29:35.711479 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 00:29:35.713236 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 00:29:35.718609 systemd[1]: Started update-engine.service - Update Engine.
May 13 00:29:35.721649 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:29:35.721649 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:29:35.721649 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:29:35.721154 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:29:35.729481 extend-filesystems[1440]: Resized filesystem in /dev/vda9
May 13 00:29:35.721325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 00:29:35.722049 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:29:35.722151 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 00:29:35.731906 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 00:29:35.736898 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:29:35.737116 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 00:29:35.762180 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:29:35.767599 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:29:35.770044 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 00:29:35.772458 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 00:29:35.813279 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:29:35.836887 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 00:29:35.846918 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 00:29:35.854813 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:29:35.855061 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 00:29:35.858369 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 00:29:35.873560 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 00:29:35.882183 systemd[1]: Started getty@tty1.service - Getty on tty1.
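
The extend-filesystems lines above show the root filesystem being grown online: resize2fs takes /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each, i.e. from about 2.1 GiB to about 7.1 GiB. A quick check of that arithmetic:

BLOCK = 4096      # ext4 block size from the log, in bytes
GIB = 1024 ** 3

for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    print(f"{label}: {blocks * BLOCK / GIB:.2f} GiB")

# before: 2.11 GiB
# after: 7.11 GiB
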
May 13 00:29:35.884656 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 00:29:35.886077 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:29:35.915225 containerd[1465]: time="2025-05-13T00:29:35.915121194Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:29:35.938603 containerd[1465]: time="2025-05-13T00:29:35.938266889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:29:35.941195 containerd[1465]: time="2025-05-13T00:29:35.941125180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:29:35.941195 containerd[1465]: time="2025-05-13T00:29:35.941183700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:29:35.941195 containerd[1465]: time="2025-05-13T00:29:35.941205751Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941402029Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941424171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941494403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941507337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941737639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941752346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941769969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941781631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.941871420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.942133742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 13 00:29:35.942587 containerd[1465]: time="2025-05-13T00:29:35.942262974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:29:35.943352 containerd[1465]: time="2025-05-13T00:29:35.942276409Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:29:35.943352 containerd[1465]: time="2025-05-13T00:29:35.942377509Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:29:35.943352 containerd[1465]: time="2025-05-13T00:29:35.942437722Z" level=info msg="metadata content store policy set" policy=shared May 13 00:29:35.948454 containerd[1465]: time="2025-05-13T00:29:35.948407499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:29:35.948491 containerd[1465]: time="2025-05-13T00:29:35.948470448Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:29:35.948512 containerd[1465]: time="2025-05-13T00:29:35.948489223Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:29:35.948512 containerd[1465]: time="2025-05-13T00:29:35.948506816Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:29:35.948561 containerd[1465]: time="2025-05-13T00:29:35.948522666Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:29:35.948715 containerd[1465]: time="2025-05-13T00:29:35.948676895Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:29:35.949021 containerd[1465]: time="2025-05-13T00:29:35.948991635Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949223480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949245551Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949261381Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949278283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949292930Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949306706Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949322436Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949338756Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949356299Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949371187Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949383951Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949406373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949420259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:29:35.949732 containerd[1465]: time="2025-05-13T00:29:35.949433965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949447891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949461256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949475092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949489659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949503195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949517401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949535285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949547398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949559641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949572565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949589346Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949611237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949622949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950067 containerd[1465]: time="2025-05-13T00:29:35.949634261Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949685246Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949719230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949730872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949752202Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949762852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949775125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949785945Z" level=info msg="NRI interface is disabled by configuration." May 13 00:29:35.950324 containerd[1465]: time="2025-05-13T00:29:35.949797998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:29:35.950469 containerd[1465]: time="2025-05-13T00:29:35.950052585Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:29:35.950469 containerd[1465]: time="2025-05-13T00:29:35.950111135Z" level=info msg="Connect containerd service" May 13 00:29:35.950469 containerd[1465]: time="2025-05-13T00:29:35.950163493Z" level=info msg="using legacy CRI server" May 13 00:29:35.950469 containerd[1465]: time="2025-05-13T00:29:35.950171027Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:29:35.950469 containerd[1465]: time="2025-05-13T00:29:35.950258612Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:29:35.950852 containerd[1465]: time="2025-05-13T00:29:35.950826216Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:29:35.951031 
containerd[1465]: time="2025-05-13T00:29:35.951002146Z" level=info msg="Start subscribing containerd event" May 13 00:29:35.951220 containerd[1465]: time="2025-05-13T00:29:35.951141087Z" level=info msg="Start recovering state" May 13 00:29:35.951220 containerd[1465]: time="2025-05-13T00:29:35.951147960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:29:35.951267 containerd[1465]: time="2025-05-13T00:29:35.951237197Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:29:35.951395 containerd[1465]: time="2025-05-13T00:29:35.951372251Z" level=info msg="Start event monitor" May 13 00:29:35.951631 containerd[1465]: time="2025-05-13T00:29:35.951448463Z" level=info msg="Start snapshots syncer" May 13 00:29:35.951631 containerd[1465]: time="2025-05-13T00:29:35.951462660Z" level=info msg="Start cni network conf syncer for default" May 13 00:29:35.951631 containerd[1465]: time="2025-05-13T00:29:35.951470345Z" level=info msg="Start streaming server" May 13 00:29:35.951631 containerd[1465]: time="2025-05-13T00:29:35.951545856Z" level=info msg="containerd successfully booted in 0.038054s" May 13 00:29:35.951737 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:29:36.092466 tar[1462]: linux-amd64/LICENSE May 13 00:29:36.092653 tar[1462]: linux-amd64/README.md May 13 00:29:36.107004 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:29:37.337932 systemd-networkd[1397]: eth0: Gained IPv6LL May 13 00:29:37.341665 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:29:37.343769 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:29:37.357005 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:29:37.359833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:29:37.362008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:29:37.386650 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:29:37.388376 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:29:37.388585 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:29:37.391051 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:29:37.989547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:29:37.991193 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:29:37.992789 systemd[1]: Startup finished in 735ms (kernel) + 5.225s (initrd) + 4.540s (userspace) = 10.502s. May 13 00:29:38.004816 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:29:38.422689 kubelet[1551]: E0513 00:29:38.422632 1551 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:29:38.427014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:29:38.427246 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
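Two details in the block above matter for what follows: containerd starts the runc runtime with SystemdCgroup:true but warns that /etc/cni/net.d holds no network config yet, and the kubelet exits because /var/lib/kubelet/config.yaml does not exist. Both are the expected state of a node before kubeadm bootstraps it; a small probe of the same paths (paths and error text taken from the log, the rest a sketch):

    # Probe the two files whose absence produces the errors logged above.
    # kubeadm is expected to provision both during cluster bootstrap;
    # this is only an illustration, not part of the boot sequence.
    from pathlib import Path

    for path in (Path("/etc/cni/net.d"), Path("/var/lib/kubelet/config.yaml")):
        if not path.exists():
            print(f"open {path}: no such file or directory")  # matches the log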
May 13 00:29:41.328555 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:29:41.329667 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884). May 13 00:29:41.598823 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:41.600862 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.609554 systemd-logind[1449]: New session 1 of user core. May 13 00:29:41.610866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:29:41.619935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:29:41.632095 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:29:41.642911 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:29:41.645684 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:29:41.746163 systemd[1569]: Queued start job for default target default.target. May 13 00:29:41.756946 systemd[1569]: Created slice app.slice - User Application Slice. May 13 00:29:41.756971 systemd[1569]: Reached target paths.target - Paths. May 13 00:29:41.756984 systemd[1569]: Reached target timers.target - Timers. May 13 00:29:41.758468 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:29:41.769478 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:29:41.769610 systemd[1569]: Reached target sockets.target - Sockets. May 13 00:29:41.769630 systemd[1569]: Reached target basic.target - Basic System. May 13 00:29:41.769670 systemd[1569]: Reached target default.target - Main User Target. May 13 00:29:41.769723 systemd[1569]: Startup finished in 116ms. May 13 00:29:41.770124 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:29:41.771603 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:29:41.833656 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:34890.service - OpenSSH per-connection server daemon (10.0.0.1:34890). May 13 00:29:41.865290 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:41.866799 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.870633 systemd-logind[1449]: New session 2 of user core. May 13 00:29:41.879862 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:29:41.932746 sshd[1580]: pam_unix(sshd:session): session closed for user core May 13 00:29:41.943413 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:34890.service: Deactivated successfully. May 13 00:29:41.945103 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:29:41.946655 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. May 13 00:29:41.947841 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:34902.service - OpenSSH per-connection server daemon (10.0.0.1:34902). May 13 00:29:41.948538 systemd-logind[1449]: Removed session 2. 
May 13 00:29:41.978778 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 34902 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:41.980653 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.984333 systemd-logind[1449]: New session 3 of user core. May 13 00:29:41.998818 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:29:42.048003 sshd[1587]: pam_unix(sshd:session): session closed for user core May 13 00:29:42.067150 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:34902.service: Deactivated successfully. May 13 00:29:42.068527 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:29:42.070163 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. May 13 00:29:42.082932 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:34906.service - OpenSSH per-connection server daemon (10.0.0.1:34906). May 13 00:29:42.083831 systemd-logind[1449]: Removed session 3. May 13 00:29:42.111684 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:42.113248 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:42.117232 systemd-logind[1449]: New session 4 of user core. May 13 00:29:42.127833 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:29:42.181376 sshd[1594]: pam_unix(sshd:session): session closed for user core May 13 00:29:42.189201 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:34906.service: Deactivated successfully. May 13 00:29:42.190689 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:29:42.192236 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. May 13 00:29:42.193389 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:34912.service - OpenSSH per-connection server daemon (10.0.0.1:34912). May 13 00:29:42.194231 systemd-logind[1449]: Removed session 4. May 13 00:29:42.224136 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 34912 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:42.225486 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:42.229099 systemd-logind[1449]: New session 5 of user core. May 13 00:29:42.238801 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:29:42.297992 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:29:42.298327 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:29:42.312792 sudo[1604]: pam_unix(sudo:session): session closed for user root May 13 00:29:42.314844 sshd[1601]: pam_unix(sshd:session): session closed for user core May 13 00:29:42.328185 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:34912.service: Deactivated successfully. May 13 00:29:42.329617 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:29:42.331015 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. May 13 00:29:42.341912 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:34914.service - OpenSSH per-connection server daemon (10.0.0.1:34914). May 13 00:29:42.342737 systemd-logind[1449]: Removed session 5. 
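The stretch above is one pattern repeated per connection: sshd accepts a publickey login for core, logind opens a numbered session, the session closes, and a new per-connection service starts on the next source port. Pulling the login events out of lines in this format is a one-regex job, e.g.:

    # Extract (user, address, port) from sshd lines shaped like the ones above.
    import re

    LINE = re.compile(r"Accepted publickey for (\w+) from ([\d.]+) port (\d+)")
    sample = ("May 13 00:29:42.111684 sshd[1594]: Accepted publickey for core "
              "from 10.0.0.1 port 34906 ssh2: RSA SHA256:C8EB...")  # truncated key
    m = LINE.search(sample)
    if m:
        user, addr, port = m.groups()
        print(user, addr, port)  # core 10.0.0.1 34906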
May 13 00:29:42.368437 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 34914 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:42.370007 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:42.373690 systemd-logind[1449]: New session 6 of user core. May 13 00:29:42.383814 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:29:42.436143 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:29:42.436460 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:29:42.439950 sudo[1613]: pam_unix(sudo:session): session closed for user root May 13 00:29:42.446047 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:29:42.446418 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:29:42.463901 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:29:42.465551 auditctl[1616]: No rules May 13 00:29:42.466821 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:29:42.467069 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:29:42.468736 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:29:42.498329 augenrules[1634]: No rules May 13 00:29:42.500385 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:29:42.501621 sudo[1612]: pam_unix(sudo:session): session closed for user root May 13 00:29:42.503438 sshd[1609]: pam_unix(sshd:session): session closed for user core May 13 00:29:42.514432 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:34914.service: Deactivated successfully. May 13 00:29:42.516185 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:29:42.517769 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. May 13 00:29:42.527926 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:34926.service - OpenSSH per-connection server daemon (10.0.0.1:34926). May 13 00:29:42.528744 systemd-logind[1449]: Removed session 6. May 13 00:29:42.554318 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 34926 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:29:42.555842 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:42.559446 systemd-logind[1449]: New session 7 of user core. May 13 00:29:42.569827 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:29:42.622359 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:29:42.622696 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:29:42.902939 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 00:29:42.903077 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:29:43.167045 dockerd[1665]: time="2025-05-13T00:29:43.166135043Z" level=info msg="Starting up" May 13 00:29:43.721347 dockerd[1665]: time="2025-05-13T00:29:43.721290426Z" level=info msg="Loading containers: start." 
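The sudo'd install.sh above launches dockerd, which (as the next block shows) ends up serving its API on /run/docker.sock. A minimal liveness check against that Unix socket using only the standard library; /_ping is the Docker Engine API health endpoint, and the socket path is the one reported in the log:

    # Ping dockerd over its Unix socket. Sketch, not production code.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            # Connect to the daemon's AF_UNIX socket instead of TCP.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' when the daemon is up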
May 13 00:29:43.828742 kernel: Initializing XFRM netlink socket May 13 00:29:43.910074 systemd-networkd[1397]: docker0: Link UP May 13 00:29:43.933658 dockerd[1665]: time="2025-05-13T00:29:43.933600800Z" level=info msg="Loading containers: done." May 13 00:29:43.947546 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck637026154-merged.mount: Deactivated successfully. May 13 00:29:43.952848 dockerd[1665]: time="2025-05-13T00:29:43.952795994Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:29:43.952968 dockerd[1665]: time="2025-05-13T00:29:43.952905079Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:29:43.953038 dockerd[1665]: time="2025-05-13T00:29:43.953017770Z" level=info msg="Daemon has completed initialization" May 13 00:29:44.507482 dockerd[1665]: time="2025-05-13T00:29:44.507386448Z" level=info msg="API listen on /run/docker.sock" May 13 00:29:44.507726 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:29:45.276954 containerd[1465]: time="2025-05-13T00:29:45.276910755Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:29:46.065331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519114899.mount: Deactivated successfully. May 13 00:29:47.059580 containerd[1465]: time="2025-05-13T00:29:47.059519249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:47.060373 containerd[1465]: time="2025-05-13T00:29:47.060306105Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 13 00:29:47.061716 containerd[1465]: time="2025-05-13T00:29:47.061658702Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:47.064325 containerd[1465]: time="2025-05-13T00:29:47.064287724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:47.065346 containerd[1465]: time="2025-05-13T00:29:47.065310412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.788356706s" May 13 00:29:47.065387 containerd[1465]: time="2025-05-13T00:29:47.065349515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:29:47.087894 containerd[1465]: time="2025-05-13T00:29:47.087859377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:29:48.589980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:29:48.599890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 00:29:48.741338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:29:48.745619 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:29:48.802202 kubelet[1893]: E0513 00:29:48.802132 1893 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:29:48.809930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:29:48.810182 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:29:49.153718 containerd[1465]: time="2025-05-13T00:29:49.153634631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:49.154773 containerd[1465]: time="2025-05-13T00:29:49.154726890Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 13 00:29:49.156215 containerd[1465]: time="2025-05-13T00:29:49.156173704Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:49.159546 containerd[1465]: time="2025-05-13T00:29:49.159513188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:49.160473 containerd[1465]: time="2025-05-13T00:29:49.160444364Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.07254904s" May 13 00:29:49.160545 containerd[1465]: time="2025-05-13T00:29:49.160474571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 00:29:49.186173 containerd[1465]: time="2025-05-13T00:29:49.186113222Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:29:50.163770 containerd[1465]: time="2025-05-13T00:29:50.163714152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:50.164652 containerd[1465]: time="2025-05-13T00:29:50.164612577Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 13 00:29:50.165807 containerd[1465]: time="2025-05-13T00:29:50.165780938Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:50.168541 containerd[1465]: time="2025-05-13T00:29:50.168498125Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:50.169365 containerd[1465]: time="2025-05-13T00:29:50.169334874Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 983.18275ms" May 13 00:29:50.169418 containerd[1465]: time="2025-05-13T00:29:50.169366524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:29:50.191288 containerd[1465]: time="2025-05-13T00:29:50.191257465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:29:51.879413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932562622.mount: Deactivated successfully. May 13 00:29:52.126613 containerd[1465]: time="2025-05-13T00:29:52.126546023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:52.127492 containerd[1465]: time="2025-05-13T00:29:52.127447213Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 13 00:29:52.128680 containerd[1465]: time="2025-05-13T00:29:52.128634240Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:52.130800 containerd[1465]: time="2025-05-13T00:29:52.130662464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:52.131477 containerd[1465]: time="2025-05-13T00:29:52.131426638Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.940140059s" May 13 00:29:52.131477 containerd[1465]: time="2025-05-13T00:29:52.131461483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:29:52.155970 containerd[1465]: time="2025-05-13T00:29:52.155872120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:29:52.818165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455914042.mount: Deactivated successfully. 
May 13 00:29:53.918991 containerd[1465]: time="2025-05-13T00:29:53.918911508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:53.919872 containerd[1465]: time="2025-05-13T00:29:53.919810435Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 00:29:53.921137 containerd[1465]: time="2025-05-13T00:29:53.921101366Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:53.924007 containerd[1465]: time="2025-05-13T00:29:53.923971349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:53.925281 containerd[1465]: time="2025-05-13T00:29:53.925241672Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.769333554s" May 13 00:29:53.925327 containerd[1465]: time="2025-05-13T00:29:53.925279944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:29:53.947459 containerd[1465]: time="2025-05-13T00:29:53.947424000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:29:54.511526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193275241.mount: Deactivated successfully. 
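Every entry in this log carries the same microsecond-resolution prefix (month, day, time; no year, so 2025 is assumed from the message bodies), which makes computing gaps between events trivial. For example, the 1.769333554s coredns pull above can be recovered from the prefixes of its start and end lines:

    # Parse the journal's timestamp prefix and diff two entries.
    from datetime import datetime

    FMT = "%Y %b %d %H:%M:%S.%f"
    t0 = datetime.strptime("2025 May 13 00:29:52.155970", FMT)  # PullImage coredns
    t1 = datetime.strptime("2025 May 13 00:29:53.925281", FMT)  # Pulled coredns
    print((t1 - t0).total_seconds())  # ~1.769 s, matching the reported pull time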
May 13 00:29:54.517995 containerd[1465]: time="2025-05-13T00:29:54.517937450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:54.518860 containerd[1465]: time="2025-05-13T00:29:54.518780932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 13 00:29:54.520079 containerd[1465]: time="2025-05-13T00:29:54.520034022Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:54.522823 containerd[1465]: time="2025-05-13T00:29:54.522780334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:54.523817 containerd[1465]: time="2025-05-13T00:29:54.523760192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 576.184768ms" May 13 00:29:54.523817 containerd[1465]: time="2025-05-13T00:29:54.523810195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:29:54.547883 containerd[1465]: time="2025-05-13T00:29:54.547837414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:29:55.078531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987831548.mount: Deactivated successfully. May 13 00:29:56.785970 containerd[1465]: time="2025-05-13T00:29:56.785915689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:56.786760 containerd[1465]: time="2025-05-13T00:29:56.786720599Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 13 00:29:56.787825 containerd[1465]: time="2025-05-13T00:29:56.787797819Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:56.791635 containerd[1465]: time="2025-05-13T00:29:56.791602496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:56.792855 containerd[1465]: time="2025-05-13T00:29:56.792820520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.244949153s" May 13 00:29:56.792904 containerd[1465]: time="2025-05-13T00:29:56.792853672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:29:58.840090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 13 00:29:58.856990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:29:59.012691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:29:59.019469 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:29:59.071450 kubelet[2129]: E0513 00:29:59.071380 2129 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:29:59.076632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:29:59.076897 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:30:00.153465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:00.167931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:00.183776 systemd[1]: Reloading requested from client PID 2144 ('systemctl') (unit session-7.scope)... May 13 00:30:00.183791 systemd[1]: Reloading... May 13 00:30:00.286736 zram_generator::config[2186]: No configuration found. May 13 00:30:00.732364 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:30:00.811112 systemd[1]: Reloading finished in 626 ms. May 13 00:30:00.874066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:00.878757 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:30:00.878992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:00.880614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:01.030116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:01.035377 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:30:01.074571 kubelet[2233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:30:01.074571 kubelet[2233]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:30:01.074571 kubelet[2233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
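The three deprecation warnings above are the kubelet steering flag-based settings into the config file it is now able to load. The flag-to-field correspondence below uses the upstream KubeletConfiguration v1beta1 names as best recalled, so treat it as a sketch rather than a reference:

    # Where the deprecated kubelet flags above move to (KubeletConfiguration
    # v1beta1 field names; --pod-infra-container-image has no config field,
    # since the sandbox image is owned by the CRI runtime, e.g. containerd's
    # sandbox_image setting).
    DEPRECATED_FLAG_TO_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        "--pod-infra-container-image": None,  # handled by the CRI runtime
    }
    for flag, field in DEPRECATED_FLAG_TO_FIELD.items():
        print(f"{flag} -> {field or 'no config equivalent'}")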
May 13 00:30:01.075666 kubelet[2233]: I0513 00:30:01.075609 2233 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:30:01.308051 kubelet[2233]: I0513 00:30:01.307936 2233 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:30:01.308051 kubelet[2233]: I0513 00:30:01.307971 2233 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:30:01.308191 kubelet[2233]: I0513 00:30:01.308177 2233 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:30:01.321499 kubelet[2233]: I0513 00:30:01.321458 2233 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:30:01.324505 kubelet[2233]: E0513 00:30:01.324476 2233 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.334617 kubelet[2233]: I0513 00:30:01.334587 2233 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:30:01.335753 kubelet[2233]: I0513 00:30:01.335698 2233 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:30:01.335901 kubelet[2233]: I0513 00:30:01.335745 2233 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:30:01.336314 kubelet[2233]: I0513 00:30:01.336288 2233 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:30:01.336314 kubelet[2233]: I0513 00:30:01.336305 2233 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:30:01.336465 kubelet[2233]: I0513 00:30:01.336432 2233 state_mem.go:36] "Initialized new in-memory state store" May 13 
00:30:01.337082 kubelet[2233]: I0513 00:30:01.337043 2233 kubelet.go:400] "Attempting to sync node with API server" May 13 00:30:01.337082 kubelet[2233]: I0513 00:30:01.337069 2233 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:30:01.337133 kubelet[2233]: I0513 00:30:01.337096 2233 kubelet.go:312] "Adding apiserver pod source" May 13 00:30:01.337133 kubelet[2233]: I0513 00:30:01.337119 2233 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:30:01.337585 kubelet[2233]: W0513 00:30:01.337520 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.337627 kubelet[2233]: E0513 00:30:01.337608 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.338393 kubelet[2233]: W0513 00:30:01.337862 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.338393 kubelet[2233]: E0513 00:30:01.337907 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.340579 kubelet[2233]: I0513 00:30:01.340558 2233 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:30:01.341826 kubelet[2233]: I0513 00:30:01.341803 2233 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:30:01.341876 kubelet[2233]: W0513 00:30:01.341866 2233 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:30:01.342728 kubelet[2233]: I0513 00:30:01.342592 2233 server.go:1264] "Started kubelet" May 13 00:30:01.343491 kubelet[2233]: I0513 00:30:01.343284 2233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:30:01.344098 kubelet[2233]: I0513 00:30:01.343616 2233 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:30:01.344098 kubelet[2233]: I0513 00:30:01.343912 2233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:30:01.344098 kubelet[2233]: I0513 00:30:01.344066 2233 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:30:01.346669 kubelet[2233]: I0513 00:30:01.345030 2233 server.go:455] "Adding debug handlers to kubelet server" May 13 00:30:01.349145 kubelet[2233]: E0513 00:30:01.348543 2233 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:30:01.349145 kubelet[2233]: E0513 00:30:01.348583 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:01.349145 kubelet[2233]: I0513 00:30:01.348604 2233 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:30:01.349145 kubelet[2233]: I0513 00:30:01.348697 2233 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:30:01.349145 kubelet[2233]: I0513 00:30:01.348763 2233 reconciler.go:26] "Reconciler: start to sync state" May 13 00:30:01.349145 kubelet[2233]: E0513 00:30:01.348901 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" May 13 00:30:01.349145 kubelet[2233]: W0513 00:30:01.349089 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.349145 kubelet[2233]: E0513 00:30:01.349131 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.349595 kubelet[2233]: I0513 00:30:01.349566 2233 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:30:01.350071 kubelet[2233]: E0513 00:30:01.349967 2233 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eeebd15753bbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:30:01.342565311 +0000 UTC m=+0.303221297,LastTimestamp:2025-05-13 00:30:01.342565311 +0000 UTC m=+0.303221297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:30:01.350794 kubelet[2233]: I0513 00:30:01.350727 2233 factory.go:221] Registration of the containerd container factory successfully May 13 00:30:01.350794 kubelet[2233]: I0513 00:30:01.350747 2233 factory.go:221] Registration of the systemd container factory successfully May 13 00:30:01.364553 kubelet[2233]: I0513 00:30:01.364490 2233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:30:01.365931 kubelet[2233]: I0513 00:30:01.365911 2233 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:30:01.365977 kubelet[2233]: I0513 00:30:01.365939 2233 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:30:01.365977 kubelet[2233]: I0513 00:30:01.365957 2233 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:30:01.366021 kubelet[2233]: E0513 00:30:01.365994 2233 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:30:01.366613 kubelet[2233]: W0513 00:30:01.366569 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.366654 kubelet[2233]: E0513 00:30:01.366619 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:01.367630 kubelet[2233]: I0513 00:30:01.367505 2233 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:30:01.367630 kubelet[2233]: I0513 00:30:01.367520 2233 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:30:01.367630 kubelet[2233]: I0513 00:30:01.367544 2233 state_mem.go:36] "Initialized new in-memory state store" May 13 00:30:01.450336 kubelet[2233]: I0513 00:30:01.450309 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:30:01.450747 kubelet[2233]: E0513 00:30:01.450694 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 13 00:30:01.466873 kubelet[2233]: E0513 00:30:01.466845 2233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:30:01.549518 kubelet[2233]: E0513 00:30:01.549491 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" May 13 00:30:01.652210 kubelet[2233]: I0513 00:30:01.652156 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:30:01.652608 kubelet[2233]: E0513 00:30:01.652566 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 13 00:30:01.667683 kubelet[2233]: E0513 00:30:01.667632 2233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:30:01.951010 kubelet[2233]: E0513 00:30:01.950868 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" May 13 00:30:02.054936 kubelet[2233]: I0513 00:30:02.054889 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:30:02.055273 kubelet[2233]: E0513 00:30:02.055240 2233 kubelet_node_status.go:96] "Unable to register 
node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 13 00:30:02.068474 kubelet[2233]: E0513 00:30:02.068401 2233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:30:02.219828 kubelet[2233]: W0513 00:30:02.219616 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.219828 kubelet[2233]: E0513 00:30:02.219663 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.446405 kubelet[2233]: I0513 00:30:02.446342 2233 policy_none.go:49] "None policy: Start" May 13 00:30:02.447422 kubelet[2233]: I0513 00:30:02.447376 2233 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:30:02.447473 kubelet[2233]: I0513 00:30:02.447430 2233 state_mem.go:35] "Initializing new in-memory state store" May 13 00:30:02.492534 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:30:02.497694 kubelet[2233]: W0513 00:30:02.497638 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.497762 kubelet[2233]: E0513 00:30:02.497697 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.507087 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:30:02.509888 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 00:30:02.515137 kubelet[2233]: W0513 00:30:02.515078 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.515177 kubelet[2233]: E0513 00:30:02.515142 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.522893 kubelet[2233]: I0513 00:30:02.522855 2233 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:30:02.523140 kubelet[2233]: I0513 00:30:02.523097 2233 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:30:02.523311 kubelet[2233]: I0513 00:30:02.523234 2233 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:30:02.524287 kubelet[2233]: E0513 00:30:02.524267 2233 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:30:02.752086 kubelet[2233]: E0513 00:30:02.751946 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" May 13 00:30:02.856974 kubelet[2233]: I0513 00:30:02.856944 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:30:02.857372 kubelet[2233]: E0513 00:30:02.857321 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 13 00:30:02.869473 kubelet[2233]: I0513 00:30:02.869416 2233 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:30:02.870721 kubelet[2233]: I0513 00:30:02.870678 2233 topology_manager.go:215] "Topology Admit Handler" podUID="0aa3307148eab2da27045a270be6aa54" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:30:02.871509 kubelet[2233]: I0513 00:30:02.871462 2233 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:30:02.877426 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 13 00:30:02.891669 systemd[1]: Created slice kubepods-burstable-pod0aa3307148eab2da27045a270be6aa54.slice - libcontainer container kubepods-burstable-pod0aa3307148eab2da27045a270be6aa54.slice. May 13 00:30:02.902452 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. 
May 13 00:30:02.906391 kubelet[2233]: W0513 00:30:02.906336 2233 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.906457 kubelet[2233]: E0513 00:30:02.906405 2233 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:02.956768 kubelet[2233]: I0513 00:30:02.956718 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:02.956805 kubelet[2233]: I0513 00:30:02.956767 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:02.956805 kubelet[2233]: I0513 00:30:02.956787 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:02.956855 kubelet[2233]: I0513 00:30:02.956805 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:02.956855 kubelet[2233]: I0513 00:30:02.956824 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:30:02.956855 kubelet[2233]: I0513 00:30:02.956844 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aa3307148eab2da27045a270be6aa54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa3307148eab2da27045a270be6aa54\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:02.956929 kubelet[2233]: I0513 00:30:02.956889 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aa3307148eab2da27045a270be6aa54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0aa3307148eab2da27045a270be6aa54\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:02.956929 kubelet[2233]: I0513 00:30:02.956919 2233 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aa3307148eab2da27045a270be6aa54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa3307148eab2da27045a270be6aa54\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:02.956969 kubelet[2233]: I0513 00:30:02.956943 2233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:03.189911 kubelet[2233]: E0513 00:30:03.189858 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:03.190630 containerd[1465]: time="2025-05-13T00:30:03.190575790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:30:03.200783 kubelet[2233]: E0513 00:30:03.200757 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:03.201206 containerd[1465]: time="2025-05-13T00:30:03.201160665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0aa3307148eab2da27045a270be6aa54,Namespace:kube-system,Attempt:0,}" May 13 00:30:03.204480 kubelet[2233]: E0513 00:30:03.204457 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:03.205000 containerd[1465]: time="2025-05-13T00:30:03.204957246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:30:03.374969 kubelet[2233]: E0513 00:30:03.374933 2233 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.136:6443: connect: connection refused May 13 00:30:03.688231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476856047.mount: Deactivated successfully. 
May 13 00:30:03.698295 containerd[1465]: time="2025-05-13T00:30:03.698238431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:03.699515 containerd[1465]: time="2025-05-13T00:30:03.699468668Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:03.700233 containerd[1465]: time="2025-05-13T00:30:03.700181565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:30:03.701420 containerd[1465]: time="2025-05-13T00:30:03.701351480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:03.702514 containerd[1465]: time="2025-05-13T00:30:03.702475728Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:30:03.703438 containerd[1465]: time="2025-05-13T00:30:03.703398469Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:03.704339 containerd[1465]: time="2025-05-13T00:30:03.704296684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 13 00:30:03.707076 containerd[1465]: time="2025-05-13T00:30:03.707032586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:30:03.709348 containerd[1465]: time="2025-05-13T00:30:03.709301742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.625263ms" May 13 00:30:03.710182 containerd[1465]: time="2025-05-13T00:30:03.710152618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.122906ms" May 13 00:30:03.711004 containerd[1465]: time="2025-05-13T00:30:03.710969100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.687267ms" May 13 00:30:03.850596 containerd[1465]: time="2025-05-13T00:30:03.850484290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:03.850596 containerd[1465]: time="2025-05-13T00:30:03.850545014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:03.850596 containerd[1465]: time="2025-05-13T00:30:03.850556305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:03.850823 containerd[1465]: time="2025-05-13T00:30:03.850642367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:03.851518 containerd[1465]: time="2025-05-13T00:30:03.851407131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:03.852343 containerd[1465]: time="2025-05-13T00:30:03.852250483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:03.852420 containerd[1465]: time="2025-05-13T00:30:03.852315445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:03.852420 containerd[1465]: time="2025-05-13T00:30:03.852342315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:03.852550 containerd[1465]: time="2025-05-13T00:30:03.852459786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:03.852550 containerd[1465]: time="2025-05-13T00:30:03.852418188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:03.852550 containerd[1465]: time="2025-05-13T00:30:03.852434779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:03.852550 containerd[1465]: time="2025-05-13T00:30:03.852502856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:03.876855 systemd[1]: Started cri-containerd-2d7d88d628760b9524040d42f07b193250d99b0b6a41f5e425a4c95bbe946b73.scope - libcontainer container 2d7d88d628760b9524040d42f07b193250d99b0b6a41f5e425a4c95bbe946b73. May 13 00:30:03.880374 systemd[1]: Started cri-containerd-be90e0cb810e3ae4b0ea4b7a3c7edcfe9e1cebcfae2e4d745b69fe976bc8d6a2.scope - libcontainer container be90e0cb810e3ae4b0ea4b7a3c7edcfe9e1cebcfae2e4d745b69fe976bc8d6a2. May 13 00:30:03.882981 systemd[1]: Started cri-containerd-c5c04881a6529a4732ed39632636d00371b3e89882003b1a458ee88755fbccb3.scope - libcontainer container c5c04881a6529a4732ed39632636d00371b3e89882003b1a458ee88755fbccb3. 
May 13 00:30:03.915112 containerd[1465]: time="2025-05-13T00:30:03.915067861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0aa3307148eab2da27045a270be6aa54,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d7d88d628760b9524040d42f07b193250d99b0b6a41f5e425a4c95bbe946b73\"" May 13 00:30:03.917440 kubelet[2233]: E0513 00:30:03.917327 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:03.921489 containerd[1465]: time="2025-05-13T00:30:03.921301413Z" level=info msg="CreateContainer within sandbox \"2d7d88d628760b9524040d42f07b193250d99b0b6a41f5e425a4c95bbe946b73\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:30:03.927972 containerd[1465]: time="2025-05-13T00:30:03.927877017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5c04881a6529a4732ed39632636d00371b3e89882003b1a458ee88755fbccb3\"" May 13 00:30:03.928295 containerd[1465]: time="2025-05-13T00:30:03.928259665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"be90e0cb810e3ae4b0ea4b7a3c7edcfe9e1cebcfae2e4d745b69fe976bc8d6a2\"" May 13 00:30:03.928763 kubelet[2233]: E0513 00:30:03.928649 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:03.929482 kubelet[2233]: E0513 00:30:03.929453 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:03.930879 containerd[1465]: time="2025-05-13T00:30:03.930853921Z" level=info msg="CreateContainer within sandbox \"c5c04881a6529a4732ed39632636d00371b3e89882003b1a458ee88755fbccb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:30:03.932058 containerd[1465]: time="2025-05-13T00:30:03.932028754Z" level=info msg="CreateContainer within sandbox \"be90e0cb810e3ae4b0ea4b7a3c7edcfe9e1cebcfae2e4d745b69fe976bc8d6a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:30:03.947318 containerd[1465]: time="2025-05-13T00:30:03.947213757Z" level=info msg="CreateContainer within sandbox \"2d7d88d628760b9524040d42f07b193250d99b0b6a41f5e425a4c95bbe946b73\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03504754b9ef2e6d826a15eee0f744314c335815374ac7d1258c0bed0a81c06f\"" May 13 00:30:03.948055 containerd[1465]: time="2025-05-13T00:30:03.947974584Z" level=info msg="StartContainer for \"03504754b9ef2e6d826a15eee0f744314c335815374ac7d1258c0bed0a81c06f\"" May 13 00:30:03.965011 containerd[1465]: time="2025-05-13T00:30:03.964933975Z" level=info msg="CreateContainer within sandbox \"be90e0cb810e3ae4b0ea4b7a3c7edcfe9e1cebcfae2e4d745b69fe976bc8d6a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"64d304221188f2f8a8a6d746fedf3616956e4043680d55bf5bd617dba61767d7\"" May 13 00:30:03.965751 containerd[1465]: time="2025-05-13T00:30:03.965717425Z" level=info msg="StartContainer for \"64d304221188f2f8a8a6d746fedf3616956e4043680d55bf5bd617dba61767d7\"" May 13 00:30:03.965879 
containerd[1465]: time="2025-05-13T00:30:03.965846717Z" level=info msg="CreateContainer within sandbox \"c5c04881a6529a4732ed39632636d00371b3e89882003b1a458ee88755fbccb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a2599ff6c0168c2401b36202b5c88a88a959d87aae12ce8c3cf5a81015fff74\"" May 13 00:30:03.966513 containerd[1465]: time="2025-05-13T00:30:03.966281763Z" level=info msg="StartContainer for \"8a2599ff6c0168c2401b36202b5c88a88a959d87aae12ce8c3cf5a81015fff74\"" May 13 00:30:03.981981 systemd[1]: Started cri-containerd-03504754b9ef2e6d826a15eee0f744314c335815374ac7d1258c0bed0a81c06f.scope - libcontainer container 03504754b9ef2e6d826a15eee0f744314c335815374ac7d1258c0bed0a81c06f. May 13 00:30:04.000855 systemd[1]: Started cri-containerd-64d304221188f2f8a8a6d746fedf3616956e4043680d55bf5bd617dba61767d7.scope - libcontainer container 64d304221188f2f8a8a6d746fedf3616956e4043680d55bf5bd617dba61767d7. May 13 00:30:04.004675 systemd[1]: Started cri-containerd-8a2599ff6c0168c2401b36202b5c88a88a959d87aae12ce8c3cf5a81015fff74.scope - libcontainer container 8a2599ff6c0168c2401b36202b5c88a88a959d87aae12ce8c3cf5a81015fff74. May 13 00:30:04.044925 containerd[1465]: time="2025-05-13T00:30:04.044633328Z" level=info msg="StartContainer for \"03504754b9ef2e6d826a15eee0f744314c335815374ac7d1258c0bed0a81c06f\" returns successfully" May 13 00:30:04.053097 containerd[1465]: time="2025-05-13T00:30:04.053057610Z" level=info msg="StartContainer for \"8a2599ff6c0168c2401b36202b5c88a88a959d87aae12ce8c3cf5a81015fff74\" returns successfully" May 13 00:30:04.062059 containerd[1465]: time="2025-05-13T00:30:04.062000815Z" level=info msg="StartContainer for \"64d304221188f2f8a8a6d746fedf3616956e4043680d55bf5bd617dba61767d7\" returns successfully" May 13 00:30:04.376453 kubelet[2233]: E0513 00:30:04.376384 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:04.379522 kubelet[2233]: E0513 00:30:04.379482 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:04.381778 kubelet[2233]: E0513 00:30:04.381749 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:04.460200 kubelet[2233]: I0513 00:30:04.460133 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:30:05.245511 kubelet[2233]: E0513 00:30:05.245469 2233 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:30:05.339725 kubelet[2233]: I0513 00:30:05.339654 2233 apiserver.go:52] "Watching apiserver" May 13 00:30:05.349478 kubelet[2233]: I0513 00:30:05.349460 2233 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:30:05.377058 kubelet[2233]: I0513 00:30:05.377015 2233 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:30:05.399909 kubelet[2233]: E0513 00:30:05.399804 2233 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183eeebd15753bbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:30:01.342565311 +0000 UTC m=+0.303221297,LastTimestamp:2025-05-13 00:30:01.342565311 +0000 UTC m=+0.303221297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:30:05.437422 kubelet[2233]: E0513 00:30:05.437366 2233 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:30:05.438189 kubelet[2233]: E0513 00:30:05.438160 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:06.203251 kubelet[2233]: E0513 00:30:06.203208 2233 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 00:30:06.203584 kubelet[2233]: E0513 00:30:06.203554 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:07.281395 kubelet[2233]: E0513 00:30:07.280251 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:07.385495 kubelet[2233]: E0513 00:30:07.385454 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:07.467788 systemd[1]: Reloading requested from client PID 2517 ('systemctl') (unit session-7.scope)... May 13 00:30:07.467805 systemd[1]: Reloading... May 13 00:30:07.546741 zram_generator::config[2556]: No configuration found. May 13 00:30:07.658806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:30:07.749621 systemd[1]: Reloading finished in 281 ms. May 13 00:30:07.798684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:07.818269 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:30:07.818562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:07.833015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:30:07.984848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:30:07.990820 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:30:08.038398 kubelet[2601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:30:08.038398 kubelet[2601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:30:08.038398 kubelet[2601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:30:08.038869 kubelet[2601]: I0513 00:30:08.038440 2601 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:30:08.043045 kubelet[2601]: I0513 00:30:08.043011 2601 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:30:08.043045 kubelet[2601]: I0513 00:30:08.043039 2601 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:30:08.043328 kubelet[2601]: I0513 00:30:08.043287 2601 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:30:08.044575 kubelet[2601]: I0513 00:30:08.044553 2601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:30:08.045771 kubelet[2601]: I0513 00:30:08.045742 2601 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:30:08.055039 kubelet[2601]: I0513 00:30:08.054912 2601 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:30:08.055218 kubelet[2601]: I0513 00:30:08.055177 2601 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:30:08.055470 kubelet[2601]: I0513 00:30:08.055215 2601 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:30:08.055605 kubelet[2601]: I0513 00:30:08.055475 2601 topology_manager.go:138] "Creating topology manager with none policy" May 13 
00:30:08.055605 kubelet[2601]: I0513 00:30:08.055489 2601 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:30:08.055605 kubelet[2601]: I0513 00:30:08.055556 2601 state_mem.go:36] "Initialized new in-memory state store" May 13 00:30:08.055696 kubelet[2601]: I0513 00:30:08.055678 2601 kubelet.go:400] "Attempting to sync node with API server" May 13 00:30:08.055757 kubelet[2601]: I0513 00:30:08.055719 2601 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:30:08.055757 kubelet[2601]: I0513 00:30:08.055748 2601 kubelet.go:312] "Adding apiserver pod source" May 13 00:30:08.055827 kubelet[2601]: I0513 00:30:08.055774 2601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:30:08.057363 kubelet[2601]: I0513 00:30:08.057001 2601 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:30:08.057363 kubelet[2601]: I0513 00:30:08.057225 2601 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:30:08.057828 kubelet[2601]: I0513 00:30:08.057761 2601 server.go:1264] "Started kubelet" May 13 00:30:08.058132 kubelet[2601]: I0513 00:30:08.058086 2601 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:30:08.059144 kubelet[2601]: I0513 00:30:08.059126 2601 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:30:08.059535 kubelet[2601]: I0513 00:30:08.059196 2601 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:30:08.060210 kubelet[2601]: I0513 00:30:08.060175 2601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:30:08.060875 kubelet[2601]: I0513 00:30:08.060860 2601 server.go:455] "Adding debug handlers to kubelet server" May 13 00:30:08.066431 kubelet[2601]: E0513 00:30:08.066056 2601 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:30:08.066431 kubelet[2601]: I0513 00:30:08.066156 2601 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:30:08.066431 kubelet[2601]: I0513 00:30:08.066662 2601 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:30:08.066431 kubelet[2601]: I0513 00:30:08.066832 2601 reconciler.go:26] "Reconciler: start to sync state" May 13 00:30:08.069886 kubelet[2601]: E0513 00:30:08.069854 2601 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:30:08.070550 kubelet[2601]: I0513 00:30:08.070533 2601 factory.go:221] Registration of the systemd container factory successfully May 13 00:30:08.070690 kubelet[2601]: I0513 00:30:08.070673 2601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:30:08.072849 kubelet[2601]: I0513 00:30:08.072810 2601 factory.go:221] Registration of the containerd container factory successfully May 13 00:30:08.075912 kubelet[2601]: I0513 00:30:08.075875 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:30:08.077306 kubelet[2601]: I0513 00:30:08.077280 2601 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:30:08.077360 kubelet[2601]: I0513 00:30:08.077320 2601 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:30:08.077360 kubelet[2601]: I0513 00:30:08.077346 2601 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:30:08.077437 kubelet[2601]: E0513 00:30:08.077398 2601 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:30:08.103401 kubelet[2601]: I0513 00:30:08.103350 2601 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:30:08.103401 kubelet[2601]: I0513 00:30:08.103368 2601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:30:08.103401 kubelet[2601]: I0513 00:30:08.103388 2601 state_mem.go:36] "Initialized new in-memory state store" May 13 00:30:08.103576 kubelet[2601]: I0513 00:30:08.103534 2601 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:30:08.103576 kubelet[2601]: I0513 00:30:08.103543 2601 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:30:08.103576 kubelet[2601]: I0513 00:30:08.103561 2601 policy_none.go:49] "None policy: Start" May 13 00:30:08.104110 kubelet[2601]: I0513 00:30:08.104082 2601 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:30:08.104110 kubelet[2601]: I0513 00:30:08.104111 2601 state_mem.go:35] "Initializing new in-memory state store" May 13 00:30:08.104272 kubelet[2601]: I0513 00:30:08.104251 2601 state_mem.go:75] "Updated machine memory state" May 13 00:30:08.108298 kubelet[2601]: I0513 00:30:08.108264 2601 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:30:08.108481 kubelet[2601]: I0513 00:30:08.108445 2601 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:30:08.108587 kubelet[2601]: I0513 00:30:08.108567 2601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:30:08.173102 kubelet[2601]: I0513 00:30:08.173068 2601 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:30:08.177742 kubelet[2601]: I0513 00:30:08.177670 2601 topology_manager.go:215] "Topology Admit Handler" podUID="0aa3307148eab2da27045a270be6aa54" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:30:08.177911 kubelet[2601]: I0513 00:30:08.177768 2601 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:30:08.177911 kubelet[2601]: I0513 00:30:08.177837 2601 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:30:08.179767 kubelet[2601]: I0513 00:30:08.179329 2601 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:30:08.179767 kubelet[2601]: I0513 00:30:08.179395 2601 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:30:08.184426 kubelet[2601]: E0513 00:30:08.184379 2601 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:30:08.367472 kubelet[2601]: I0513 00:30:08.367413 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:08.367472 kubelet[2601]: I0513 00:30:08.367461 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:08.367472 kubelet[2601]: I0513 00:30:08.367483 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:30:08.367691 kubelet[2601]: I0513 00:30:08.367499 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aa3307148eab2da27045a270be6aa54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa3307148eab2da27045a270be6aa54\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:08.367691 kubelet[2601]: I0513 00:30:08.367516 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:08.367691 kubelet[2601]: I0513 00:30:08.367532 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:08.367691 kubelet[2601]: I0513 00:30:08.367548 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aa3307148eab2da27045a270be6aa54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0aa3307148eab2da27045a270be6aa54\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:08.367691 kubelet[2601]: I0513 00:30:08.367564 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aa3307148eab2da27045a270be6aa54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0aa3307148eab2da27045a270be6aa54\") " pod="kube-system/kube-apiserver-localhost" May 13 00:30:08.367832 kubelet[2601]: I0513 00:30:08.367578 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:30:08.486001 kubelet[2601]: E0513 00:30:08.485966 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:08.486228 kubelet[2601]: E0513 00:30:08.486032 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:08.486228 kubelet[2601]: E0513 00:30:08.486039 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:09.056309 kubelet[2601]: I0513 00:30:09.056254 2601 apiserver.go:52] "Watching apiserver" May 13 00:30:09.067820 kubelet[2601]: I0513 00:30:09.067780 2601 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:30:09.087747 kubelet[2601]: E0513 00:30:09.087154 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:09.089651 kubelet[2601]: E0513 00:30:09.089621 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:09.144267 kubelet[2601]: E0513 00:30:09.144221 2601 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:30:09.144680 kubelet[2601]: E0513 00:30:09.144658 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:09.358202 kubelet[2601]: I0513 00:30:09.357493 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.357478376 podStartE2EDuration="1.357478376s" podCreationTimestamp="2025-05-13 00:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:09.357269073 +0000 UTC m=+1.360913194" watchObservedRunningTime="2025-05-13 00:30:09.357478376 +0000 UTC m=+1.361122497" May 13 00:30:09.416588 kubelet[2601]: I0513 00:30:09.416199 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.416180994 podStartE2EDuration="1.416180994s" podCreationTimestamp="2025-05-13 00:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:09.406272757 +0000 UTC m=+1.409916878" watchObservedRunningTime="2025-05-13 00:30:09.416180994 +0000 UTC m=+1.419825115" May 13 00:30:09.424439 kubelet[2601]: I0513 00:30:09.424379 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.42431599 podStartE2EDuration="2.42431599s" podCreationTimestamp="2025-05-13 00:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:09.417105502 +0000 UTC m=+1.420749653" watchObservedRunningTime="2025-05-13 00:30:09.42431599 +0000 UTC m=+1.427960111" May 13 00:30:10.088435 kubelet[2601]: E0513 00:30:10.088387 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:12.715895 sudo[1645]: pam_unix(sudo:session): session closed for user root May 13 00:30:12.717934 sshd[1642]: pam_unix(sshd:session): session closed for user core May 13 00:30:12.722676 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:34926.service: Deactivated successfully. May 13 00:30:12.724611 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:30:12.724820 systemd[1]: session-7.scope: Consumed 5.197s CPU time, 191.9M memory peak, 0B memory swap peak. May 13 00:30:12.725302 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. May 13 00:30:12.726118 systemd-logind[1449]: Removed session 7. May 13 00:30:13.769495 kubelet[2601]: E0513 00:30:13.769424 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:14.550048 kubelet[2601]: E0513 00:30:14.549999 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:15.095503 kubelet[2601]: E0513 00:30:15.095467 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:17.540729 kubelet[2601]: E0513 00:30:17.540613 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:18.099124 kubelet[2601]: E0513 00:30:18.099096 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:20.498820 update_engine[1452]: I20250513 00:30:20.498735 1452 update_attempter.cc:509] Updating boot flags... May 13 00:30:20.523851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2696) May 13 00:30:20.570063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2698) May 13 00:30:20.595736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2698) May 13 00:30:21.995806 kubelet[2601]: I0513 00:30:21.995769 2601 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:30:21.996284 containerd[1465]: time="2025-05-13T00:30:21.996244751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 00:30:21.996557 kubelet[2601]: I0513 00:30:21.996454 2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:30:22.851865 kubelet[2601]: I0513 00:30:22.851815 2601 topology_manager.go:215] "Topology Admit Handler" podUID="27958ae8-65a7-455b-b800-b8931bdac7ad" podNamespace="kube-system" podName="kube-proxy-lcrz6" May 13 00:30:22.853687 kubelet[2601]: I0513 00:30:22.853607 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zwp\" (UniqueName: \"kubernetes.io/projected/27958ae8-65a7-455b-b800-b8931bdac7ad-kube-api-access-q9zwp\") pod \"kube-proxy-lcrz6\" (UID: \"27958ae8-65a7-455b-b800-b8931bdac7ad\") " pod="kube-system/kube-proxy-lcrz6" May 13 00:30:22.853687 kubelet[2601]: I0513 00:30:22.853650 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27958ae8-65a7-455b-b800-b8931bdac7ad-xtables-lock\") pod \"kube-proxy-lcrz6\" (UID: \"27958ae8-65a7-455b-b800-b8931bdac7ad\") " pod="kube-system/kube-proxy-lcrz6" May 13 00:30:22.853803 kubelet[2601]: I0513 00:30:22.853714 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27958ae8-65a7-455b-b800-b8931bdac7ad-kube-proxy\") pod \"kube-proxy-lcrz6\" (UID: \"27958ae8-65a7-455b-b800-b8931bdac7ad\") " pod="kube-system/kube-proxy-lcrz6" May 13 00:30:22.853803 kubelet[2601]: I0513 00:30:22.853733 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27958ae8-65a7-455b-b800-b8931bdac7ad-lib-modules\") pod \"kube-proxy-lcrz6\" (UID: \"27958ae8-65a7-455b-b800-b8931bdac7ad\") " pod="kube-system/kube-proxy-lcrz6" May 13 00:30:22.858377 systemd[1]: Created slice kubepods-besteffort-pod27958ae8_65a7_455b_b800_b8931bdac7ad.slice - libcontainer container kubepods-besteffort-pod27958ae8_65a7_455b_b800_b8931bdac7ad.slice. May 13 00:30:23.076242 kubelet[2601]: I0513 00:30:23.076195 2601 topology_manager.go:215] "Topology Admit Handler" podUID="3d574ef5-2b1b-4415-8edf-27933fd3ca37" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-trl7t" May 13 00:30:23.085148 systemd[1]: Created slice kubepods-besteffort-pod3d574ef5_2b1b_4415_8edf_27933fd3ca37.slice - libcontainer container kubepods-besteffort-pod3d574ef5_2b1b_4415_8edf_27933fd3ca37.slice. 
May 13 00:30:23.155888 kubelet[2601]: I0513 00:30:23.155753 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gct\" (UniqueName: \"kubernetes.io/projected/3d574ef5-2b1b-4415-8edf-27933fd3ca37-kube-api-access-88gct\") pod \"tigera-operator-797db67f8-trl7t\" (UID: \"3d574ef5-2b1b-4415-8edf-27933fd3ca37\") " pod="tigera-operator/tigera-operator-797db67f8-trl7t" May 13 00:30:23.155888 kubelet[2601]: I0513 00:30:23.155790 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d574ef5-2b1b-4415-8edf-27933fd3ca37-var-lib-calico\") pod \"tigera-operator-797db67f8-trl7t\" (UID: \"3d574ef5-2b1b-4415-8edf-27933fd3ca37\") " pod="tigera-operator/tigera-operator-797db67f8-trl7t" May 13 00:30:23.165910 kubelet[2601]: E0513 00:30:23.165875 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:23.166348 containerd[1465]: time="2025-05-13T00:30:23.166300349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcrz6,Uid:27958ae8-65a7-455b-b800-b8931bdac7ad,Namespace:kube-system,Attempt:0,}" May 13 00:30:23.191466 containerd[1465]: time="2025-05-13T00:30:23.191386628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:23.191466 containerd[1465]: time="2025-05-13T00:30:23.191429449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:23.191466 containerd[1465]: time="2025-05-13T00:30:23.191439358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:23.191676 containerd[1465]: time="2025-05-13T00:30:23.191508458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:23.216849 systemd[1]: Started cri-containerd-ad62e3f69dddbc10aa659ac5ed8112af26fc1b461a21c7a8ae35f51a9b88f013.scope - libcontainer container ad62e3f69dddbc10aa659ac5ed8112af26fc1b461a21c7a8ae35f51a9b88f013. 
May 13 00:30:23.240195 containerd[1465]: time="2025-05-13T00:30:23.240117674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcrz6,Uid:27958ae8-65a7-455b-b800-b8931bdac7ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad62e3f69dddbc10aa659ac5ed8112af26fc1b461a21c7a8ae35f51a9b88f013\"" May 13 00:30:23.241046 kubelet[2601]: E0513 00:30:23.241017 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:23.243275 containerd[1465]: time="2025-05-13T00:30:23.243234691Z" level=info msg="CreateContainer within sandbox \"ad62e3f69dddbc10aa659ac5ed8112af26fc1b461a21c7a8ae35f51a9b88f013\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:30:23.289162 containerd[1465]: time="2025-05-13T00:30:23.289111619Z" level=info msg="CreateContainer within sandbox \"ad62e3f69dddbc10aa659ac5ed8112af26fc1b461a21c7a8ae35f51a9b88f013\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89f897a9fc8cdbfa067e9f6d790f1a3b74ce36fb8d17faa557e610c2a4cebf37\"" May 13 00:30:23.289790 containerd[1465]: time="2025-05-13T00:30:23.289696397Z" level=info msg="StartContainer for \"89f897a9fc8cdbfa067e9f6d790f1a3b74ce36fb8d17faa557e610c2a4cebf37\"" May 13 00:30:23.322850 systemd[1]: Started cri-containerd-89f897a9fc8cdbfa067e9f6d790f1a3b74ce36fb8d17faa557e610c2a4cebf37.scope - libcontainer container 89f897a9fc8cdbfa067e9f6d790f1a3b74ce36fb8d17faa557e610c2a4cebf37. May 13 00:30:23.354400 containerd[1465]: time="2025-05-13T00:30:23.354351450Z" level=info msg="StartContainer for \"89f897a9fc8cdbfa067e9f6d790f1a3b74ce36fb8d17faa557e610c2a4cebf37\" returns successfully" May 13 00:30:23.389413 containerd[1465]: time="2025-05-13T00:30:23.389317262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-trl7t,Uid:3d574ef5-2b1b-4415-8edf-27933fd3ca37,Namespace:tigera-operator,Attempt:0,}" May 13 00:30:23.421449 containerd[1465]: time="2025-05-13T00:30:23.420502608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:23.421449 containerd[1465]: time="2025-05-13T00:30:23.420603991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:23.421449 containerd[1465]: time="2025-05-13T00:30:23.420621014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:23.421449 containerd[1465]: time="2025-05-13T00:30:23.420778211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:23.443842 systemd[1]: Started cri-containerd-273581ddfaa3769582acf94e76a80118cc83087d59e57e0905cf68ba54f7604b.scope - libcontainer container 273581ddfaa3769582acf94e76a80118cc83087d59e57e0905cf68ba54f7604b. 
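
[Editor's note] The sandbox-id and container-id round trips above are the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer launches the returned container id. A hedged sketch of the same three calls against containerd's CRI socket; the configs are elided and the socket path is containerd's usual default, not a value from this log:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.TODO()

        // 1. RunPodSandbox -> sandbox id ("ad62e3f6..." in the log above).
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{ /* metadata, DNS, ports elided */ },
        })
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer within that sandbox -> container id.
        cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        &runtimeapi.ContainerConfig{ /* image, mounts elided */ },
            SandboxConfig: &runtimeapi.PodSandboxConfig{},
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer, matching the "StartContainer ... returns
        // successfully" entries in the log.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: cr.ContainerId,
        }); err != nil {
            panic(err)
        }
    }
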
May 13 00:30:23.477892 containerd[1465]: time="2025-05-13T00:30:23.477852088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-trl7t,Uid:3d574ef5-2b1b-4415-8edf-27933fd3ca37,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"273581ddfaa3769582acf94e76a80118cc83087d59e57e0905cf68ba54f7604b\"" May 13 00:30:23.480307 containerd[1465]: time="2025-05-13T00:30:23.480266463Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:30:23.773301 kubelet[2601]: E0513 00:30:23.773185 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:24.109261 kubelet[2601]: E0513 00:30:24.109212 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:25.414305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119994182.mount: Deactivated successfully. May 13 00:30:26.890769 containerd[1465]: time="2025-05-13T00:30:26.890684509Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:26.891781 containerd[1465]: time="2025-05-13T00:30:26.891730908Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 00:30:26.895505 containerd[1465]: time="2025-05-13T00:30:26.895449904Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:26.897806 containerd[1465]: time="2025-05-13T00:30:26.897770664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:26.898523 containerd[1465]: time="2025-05-13T00:30:26.898458124Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.418125606s" May 13 00:30:26.898523 containerd[1465]: time="2025-05-13T00:30:26.898506455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 00:30:26.900539 containerd[1465]: time="2025-05-13T00:30:26.900502931Z" level=info msg="CreateContainer within sandbox \"273581ddfaa3769582acf94e76a80118cc83087d59e57e0905cf68ba54f7604b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:30:26.914363 containerd[1465]: time="2025-05-13T00:30:26.914290753Z" level=info msg="CreateContainer within sandbox \"273581ddfaa3769582acf94e76a80118cc83087d59e57e0905cf68ba54f7604b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"218c41b60476991b348f42f9e7fb67aa6beca191fc30eb3f4f101d488c138c43\"" May 13 00:30:26.915478 containerd[1465]: time="2025-05-13T00:30:26.914897431Z" level=info msg="StartContainer for \"218c41b60476991b348f42f9e7fb67aa6beca191fc30eb3f4f101d488c138c43\"" May 13 00:30:26.946845 systemd[1]: Started 
cri-containerd-218c41b60476991b348f42f9e7fb67aa6beca191fc30eb3f4f101d488c138c43.scope - libcontainer container 218c41b60476991b348f42f9e7fb67aa6beca191fc30eb3f4f101d488c138c43. May 13 00:30:27.152686 containerd[1465]: time="2025-05-13T00:30:27.152540583Z" level=info msg="StartContainer for \"218c41b60476991b348f42f9e7fb67aa6beca191fc30eb3f4f101d488c138c43\" returns successfully" May 13 00:30:27.203429 kubelet[2601]: I0513 00:30:27.203336 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcrz6" podStartSLOduration=5.203309758 podStartE2EDuration="5.203309758s" podCreationTimestamp="2025-05-13 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:24.116605625 +0000 UTC m=+16.120249746" watchObservedRunningTime="2025-05-13 00:30:27.203309758 +0000 UTC m=+19.206954009" May 13 00:30:29.829834 kubelet[2601]: I0513 00:30:29.829772 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-trl7t" podStartSLOduration=3.409453127 podStartE2EDuration="6.829752369s" podCreationTimestamp="2025-05-13 00:30:23 +0000 UTC" firstStartedPulling="2025-05-13 00:30:23.479026634 +0000 UTC m=+15.482670755" lastFinishedPulling="2025-05-13 00:30:26.899325886 +0000 UTC m=+18.902969997" observedRunningTime="2025-05-13 00:30:27.203958324 +0000 UTC m=+19.207602445" watchObservedRunningTime="2025-05-13 00:30:29.829752369 +0000 UTC m=+21.833396490" May 13 00:30:29.830316 kubelet[2601]: I0513 00:30:29.829908 2601 topology_manager.go:215] "Topology Admit Handler" podUID="82132b3c-fba1-4944-a5de-128051683a06" podNamespace="calico-system" podName="calico-typha-5bd4ccff46-rtfkm" May 13 00:30:29.841688 systemd[1]: Created slice kubepods-besteffort-pod82132b3c_fba1_4944_a5de_128051683a06.slice - libcontainer container kubepods-besteffort-pod82132b3c_fba1_4944_a5de_128051683a06.slice. May 13 00:30:29.878229 kubelet[2601]: I0513 00:30:29.878184 2601 topology_manager.go:215] "Topology Admit Handler" podUID="7a20e7d4-a5b6-4364-a912-40ae8b8a93f4" podNamespace="calico-system" podName="calico-node-8rlsp" May 13 00:30:29.885554 systemd[1]: Created slice kubepods-besteffort-pod7a20e7d4_a5b6_4364_a912_40ae8b8a93f4.slice - libcontainer container kubepods-besteffort-pod7a20e7d4_a5b6_4364_a912_40ae8b8a93f4.slice. 
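
[Editor's note] The two pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window. For kube-proxy the pull timestamps are zero (image already present), so SLO and E2E agree at 5.203309758s; for tigera-operator, 6.829752369s minus the pull window (00:30:26.899325886 - 00:30:23.479026634 ≈ 3.420299252s) gives ≈ 3.409453117s, matching the logged 3.409453127 up to clock rounding. The same arithmetic as a check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the tigera-operator tracker entry above.
        created, _ := time.Parse(time.RFC3339Nano, "2025-05-13T00:30:23Z")
        firstPull, _ := time.Parse(time.RFC3339Nano, "2025-05-13T00:30:23.479026634Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2025-05-13T00:30:26.899325886Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-05-13T00:30:29.829752369Z")

        e2e := observed.Sub(created)         // 6.829752369s
        slo := e2e - lastPull.Sub(firstPull) // ≈ 3.409453117s (logged: 3.409453127)
        fmt.Println(e2e, slo)
    }
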
May 13 00:30:29.900225 kubelet[2601]: I0513 00:30:29.900177 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-flexvol-driver-host\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900225 kubelet[2601]: I0513 00:30:29.900224 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-lib-modules\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900388 kubelet[2601]: I0513 00:30:29.900243 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-var-run-calico\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900388 kubelet[2601]: I0513 00:30:29.900265 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbqk\" (UniqueName: \"kubernetes.io/projected/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-kube-api-access-8vbqk\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900388 kubelet[2601]: I0513 00:30:29.900280 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-cni-bin-dir\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900388 kubelet[2601]: I0513 00:30:29.900296 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-cni-log-dir\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900388 kubelet[2601]: I0513 00:30:29.900312 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-node-certs\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900506 kubelet[2601]: I0513 00:30:29.900326 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-var-lib-calico\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900506 kubelet[2601]: I0513 00:30:29.900339 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-cni-net-dir\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900506 kubelet[2601]: I0513 00:30:29.900354 2601 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-xtables-lock\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900506 kubelet[2601]: I0513 00:30:29.900367 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-policysync\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.900506 kubelet[2601]: I0513 00:30:29.900382 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82132b3c-fba1-4944-a5de-128051683a06-tigera-ca-bundle\") pod \"calico-typha-5bd4ccff46-rtfkm\" (UID: \"82132b3c-fba1-4944-a5de-128051683a06\") " pod="calico-system/calico-typha-5bd4ccff46-rtfkm" May 13 00:30:29.900634 kubelet[2601]: I0513 00:30:29.900397 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96q6b\" (UniqueName: \"kubernetes.io/projected/82132b3c-fba1-4944-a5de-128051683a06-kube-api-access-96q6b\") pod \"calico-typha-5bd4ccff46-rtfkm\" (UID: \"82132b3c-fba1-4944-a5de-128051683a06\") " pod="calico-system/calico-typha-5bd4ccff46-rtfkm" May 13 00:30:29.900634 kubelet[2601]: I0513 00:30:29.900412 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/82132b3c-fba1-4944-a5de-128051683a06-typha-certs\") pod \"calico-typha-5bd4ccff46-rtfkm\" (UID: \"82132b3c-fba1-4944-a5de-128051683a06\") " pod="calico-system/calico-typha-5bd4ccff46-rtfkm" May 13 00:30:29.900634 kubelet[2601]: I0513 00:30:29.900425 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a20e7d4-a5b6-4364-a912-40ae8b8a93f4-tigera-ca-bundle\") pod \"calico-node-8rlsp\" (UID: \"7a20e7d4-a5b6-4364-a912-40ae8b8a93f4\") " pod="calico-system/calico-node-8rlsp" May 13 00:30:29.986370 kubelet[2601]: I0513 00:30:29.986151 2601 topology_manager.go:215] "Topology Admit Handler" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" podNamespace="calico-system" podName="csi-node-driver-b9vdg" May 13 00:30:29.986986 kubelet[2601]: E0513 00:30:29.986956 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:30.001452 kubelet[2601]: I0513 00:30:30.001385 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae4c6d9d-b177-405f-84c6-f30031c5dd17-kubelet-dir\") pod \"csi-node-driver-b9vdg\" (UID: \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\") " pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:30.001452 kubelet[2601]: I0513 00:30:30.001450 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/ae4c6d9d-b177-405f-84c6-f30031c5dd17-socket-dir\") pod \"csi-node-driver-b9vdg\" (UID: \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\") " pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:30.001589 kubelet[2601]: I0513 00:30:30.001511 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae4c6d9d-b177-405f-84c6-f30031c5dd17-varrun\") pod \"csi-node-driver-b9vdg\" (UID: \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\") " pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:30.001589 kubelet[2601]: I0513 00:30:30.001543 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae4c6d9d-b177-405f-84c6-f30031c5dd17-registration-dir\") pod \"csi-node-driver-b9vdg\" (UID: \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\") " pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:30.001589 kubelet[2601]: I0513 00:30:30.001580 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9b4\" (UniqueName: \"kubernetes.io/projected/ae4c6d9d-b177-405f-84c6-f30031c5dd17-kube-api-access-tk9b4\") pod \"csi-node-driver-b9vdg\" (UID: \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\") " pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:30.004694 kubelet[2601]: E0513 00:30:30.004662 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.004694 kubelet[2601]: W0513 00:30:30.004695 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.004987 kubelet[2601]: E0513 00:30:30.004733 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.005031 kubelet[2601]: E0513 00:30:30.005013 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.005031 kubelet[2601]: W0513 00:30:30.005024 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.005077 kubelet[2601]: E0513 00:30:30.005039 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.005288 kubelet[2601]: E0513 00:30:30.005273 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.005288 kubelet[2601]: W0513 00:30:30.005285 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.005373 kubelet[2601]: E0513 00:30:30.005304 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.010816 kubelet[2601]: E0513 00:30:30.010778 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.010816 kubelet[2601]: W0513 00:30:30.010803 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.011078 kubelet[2601]: E0513 00:30:30.010826 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.011120 kubelet[2601]: E0513 00:30:30.011107 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.011147 kubelet[2601]: W0513 00:30:30.011119 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.011147 kubelet[2601]: E0513 00:30:30.011138 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.011398 kubelet[2601]: E0513 00:30:30.011375 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.011398 kubelet[2601]: W0513 00:30:30.011391 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.011458 kubelet[2601]: E0513 00:30:30.011402 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.011748 kubelet[2601]: E0513 00:30:30.011726 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.011748 kubelet[2601]: W0513 00:30:30.011741 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.011821 kubelet[2601]: E0513 00:30:30.011756 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.014502 kubelet[2601]: E0513 00:30:30.013907 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.014502 kubelet[2601]: W0513 00:30:30.013926 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.014502 kubelet[2601]: E0513 00:30:30.013939 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.020431 kubelet[2601]: E0513 00:30:30.020386 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.020431 kubelet[2601]: W0513 00:30:30.020412 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.020431 kubelet[2601]: E0513 00:30:30.020432 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.020731 kubelet[2601]: E0513 00:30:30.020690 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.020731 kubelet[2601]: W0513 00:30:30.020726 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.020791 kubelet[2601]: E0513 00:30:30.020736 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.102473 kubelet[2601]: E0513 00:30:30.102443 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.102473 kubelet[2601]: W0513 00:30:30.102462 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.102622 kubelet[2601]: E0513 00:30:30.102479 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.102776 kubelet[2601]: E0513 00:30:30.102759 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.102776 kubelet[2601]: W0513 00:30:30.102770 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.102831 kubelet[2601]: E0513 00:30:30.102783 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.103080 kubelet[2601]: E0513 00:30:30.103046 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.103080 kubelet[2601]: W0513 00:30:30.103071 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.103135 kubelet[2601]: E0513 00:30:30.103096 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.103367 kubelet[2601]: E0513 00:30:30.103340 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.103367 kubelet[2601]: W0513 00:30:30.103352 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.103367 kubelet[2601]: E0513 00:30:30.103365 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.103592 kubelet[2601]: E0513 00:30:30.103575 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.103592 kubelet[2601]: W0513 00:30:30.103585 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.103659 kubelet[2601]: E0513 00:30:30.103598 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.103941 kubelet[2601]: E0513 00:30:30.103908 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.103941 kubelet[2601]: W0513 00:30:30.103930 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.104005 kubelet[2601]: E0513 00:30:30.103957 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.104188 kubelet[2601]: E0513 00:30:30.104174 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.104188 kubelet[2601]: W0513 00:30:30.104185 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.104300 kubelet[2601]: E0513 00:30:30.104275 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.104439 kubelet[2601]: E0513 00:30:30.104422 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.104439 kubelet[2601]: W0513 00:30:30.104434 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.104536 kubelet[2601]: E0513 00:30:30.104517 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.104681 kubelet[2601]: E0513 00:30:30.104661 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.104681 kubelet[2601]: W0513 00:30:30.104673 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.104835 kubelet[2601]: E0513 00:30:30.104689 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.104944 kubelet[2601]: E0513 00:30:30.104925 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.104974 kubelet[2601]: W0513 00:30:30.104942 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.104974 kubelet[2601]: E0513 00:30:30.104960 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.105201 kubelet[2601]: E0513 00:30:30.105185 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.105201 kubelet[2601]: W0513 00:30:30.105199 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.105272 kubelet[2601]: E0513 00:30:30.105214 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.105464 kubelet[2601]: E0513 00:30:30.105438 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.105464 kubelet[2601]: W0513 00:30:30.105461 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.105541 kubelet[2601]: E0513 00:30:30.105504 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.105742 kubelet[2601]: E0513 00:30:30.105723 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.105742 kubelet[2601]: W0513 00:30:30.105736 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.105813 kubelet[2601]: E0513 00:30:30.105776 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.106031 kubelet[2601]: E0513 00:30:30.106012 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.106031 kubelet[2601]: W0513 00:30:30.106024 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.106099 kubelet[2601]: E0513 00:30:30.106057 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.106294 kubelet[2601]: E0513 00:30:30.106276 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.106294 kubelet[2601]: W0513 00:30:30.106288 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.106361 kubelet[2601]: E0513 00:30:30.106321 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.106543 kubelet[2601]: E0513 00:30:30.106525 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.106543 kubelet[2601]: W0513 00:30:30.106537 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.106618 kubelet[2601]: E0513 00:30:30.106551 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.106818 kubelet[2601]: E0513 00:30:30.106792 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.106818 kubelet[2601]: W0513 00:30:30.106805 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.106818 kubelet[2601]: E0513 00:30:30.106823 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.107048 kubelet[2601]: E0513 00:30:30.106994 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.107048 kubelet[2601]: W0513 00:30:30.107001 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.107048 kubelet[2601]: E0513 00:30:30.107009 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.107175 kubelet[2601]: E0513 00:30:30.107158 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.107175 kubelet[2601]: W0513 00:30:30.107169 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.107233 kubelet[2601]: E0513 00:30:30.107178 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.107749 kubelet[2601]: E0513 00:30:30.107729 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.107749 kubelet[2601]: W0513 00:30:30.107746 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.107827 kubelet[2601]: E0513 00:30:30.107773 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.107998 kubelet[2601]: E0513 00:30:30.107986 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.107998 kubelet[2601]: W0513 00:30:30.107996 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.108075 kubelet[2601]: E0513 00:30:30.108029 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.108210 kubelet[2601]: E0513 00:30:30.108198 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.108210 kubelet[2601]: W0513 00:30:30.108208 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.108275 kubelet[2601]: E0513 00:30:30.108238 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.108448 kubelet[2601]: E0513 00:30:30.108428 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.108448 kubelet[2601]: W0513 00:30:30.108441 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.108554 kubelet[2601]: E0513 00:30:30.108456 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:30:30.108695 kubelet[2601]: E0513 00:30:30.108676 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.108695 kubelet[2601]: W0513 00:30:30.108688 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.108695 kubelet[2601]: E0513 00:30:30.108732 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.109020 kubelet[2601]: E0513 00:30:30.109004 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.109020 kubelet[2601]: W0513 00:30:30.109019 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.109075 kubelet[2601]: E0513 00:30:30.109029 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.114282 kubelet[2601]: E0513 00:30:30.114244 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:30.114282 kubelet[2601]: W0513 00:30:30.114270 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:30.114282 kubelet[2601]: E0513 00:30:30.114281 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:30:30.145598 kubelet[2601]: E0513 00:30:30.145575 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:30.146130 containerd[1465]: time="2025-05-13T00:30:30.146072173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bd4ccff46-rtfkm,Uid:82132b3c-fba1-4944-a5de-128051683a06,Namespace:calico-system,Attempt:0,}" May 13 00:30:30.169507 containerd[1465]: time="2025-05-13T00:30:30.169374598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:30.169507 containerd[1465]: time="2025-05-13T00:30:30.169435273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:30.169507 containerd[1465]: time="2025-05-13T00:30:30.169453578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:30.169947 containerd[1465]: time="2025-05-13T00:30:30.169575317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:30.189831 systemd[1]: Started cri-containerd-8bc1c0d46bbebc6c92cb95c435af017a68ad622ba350b31779d69c84352d770b.scope - libcontainer container 8bc1c0d46bbebc6c92cb95c435af017a68ad622ba350b31779d69c84352d770b. May 13 00:30:30.192590 kubelet[2601]: E0513 00:30:30.192539 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:30.193097 containerd[1465]: time="2025-05-13T00:30:30.193008179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rlsp,Uid:7a20e7d4-a5b6-4364-a912-40ae8b8a93f4,Namespace:calico-system,Attempt:0,}" May 13 00:30:30.215861 containerd[1465]: time="2025-05-13T00:30:30.215117351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:30.215861 containerd[1465]: time="2025-05-13T00:30:30.215669083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:30.215861 containerd[1465]: time="2025-05-13T00:30:30.215681426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:30.215861 containerd[1465]: time="2025-05-13T00:30:30.215775363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:30.235843 systemd[1]: Started cri-containerd-79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700.scope - libcontainer container 79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700. 
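
[Editor's note] The long bursts of driver-call.go / plugins.go errors above all share one mechanism: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, and Calico's flexvol-driver-host mount points at the nodeagent~uds directory before the uds binary has been installed there (per Calico's design, the pod2daemon-flexvol image pulled below is what eventually populates it). The exec therefore fails ("executable file not found in $PATH"), the captured output is empty, and unmarshalling an empty string yields exactly "unexpected end of JSON input"; the bursts repeat because the prober re-runs the driver's init call on each probe of the plugin directory. The error string can be reproduced directly; the struct below is a stand-in, not kubelet's full DriverStatus type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus stands in for the JSON a FlexVolume driver prints on
    // stdout (e.g. {"status":"Success"}); the field set is illustrative.
    type driverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        var st driverStatus
        // An absent driver yields empty output, so this is what kubelet's
        // driver-call.go ends up unmarshalling:
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input
    }
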
May 13 00:30:30.236581 containerd[1465]: time="2025-05-13T00:30:30.236538655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bd4ccff46-rtfkm,Uid:82132b3c-fba1-4944-a5de-128051683a06,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bc1c0d46bbebc6c92cb95c435af017a68ad622ba350b31779d69c84352d770b\"" May 13 00:30:30.237601 kubelet[2601]: E0513 00:30:30.237424 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:30.238515 containerd[1465]: time="2025-05-13T00:30:30.238371756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:30:30.261335 containerd[1465]: time="2025-05-13T00:30:30.261282712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8rlsp,Uid:7a20e7d4-a5b6-4364-a912-40ae8b8a93f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\"" May 13 00:30:30.262004 kubelet[2601]: E0513 00:30:30.261981 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:32.078565 kubelet[2601]: E0513 00:30:32.078502 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:32.737384 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:60392.service - OpenSSH per-connection server daemon (10.0.0.1:60392). May 13 00:30:32.781032 sshd[3147]: Accepted publickey for core from 10.0.0.1 port 60392 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:32.783106 sshd[3147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:32.789266 systemd-logind[1449]: New session 8 of user core. May 13 00:30:32.793906 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:30:32.915888 sshd[3147]: pam_unix(sshd:session): session closed for user core May 13 00:30:32.919929 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:60392.service: Deactivated successfully. May 13 00:30:32.922956 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:30:32.923613 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. May 13 00:30:32.924907 systemd-logind[1449]: Removed session 8. 
May 13 00:30:33.111259 containerd[1465]: time="2025-05-13T00:30:33.111215350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:33.112012 containerd[1465]: time="2025-05-13T00:30:33.111968800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 00:30:33.113260 containerd[1465]: time="2025-05-13T00:30:33.113183039Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:33.116161 containerd[1465]: time="2025-05-13T00:30:33.116122082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:33.116800 containerd[1465]: time="2025-05-13T00:30:33.116741531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.878338856s" May 13 00:30:33.116800 containerd[1465]: time="2025-05-13T00:30:33.116775034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 00:30:33.117692 containerd[1465]: time="2025-05-13T00:30:33.117565865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:30:33.126130 containerd[1465]: time="2025-05-13T00:30:33.125827195Z" level=info msg="CreateContainer within sandbox \"8bc1c0d46bbebc6c92cb95c435af017a68ad622ba350b31779d69c84352d770b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:30:33.142154 containerd[1465]: time="2025-05-13T00:30:33.142114770Z" level=info msg="CreateContainer within sandbox \"8bc1c0d46bbebc6c92cb95c435af017a68ad622ba350b31779d69c84352d770b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"741eac84c9478b75c72229ee5754b2e8417a2358a417a8a63062d5070e1d7b89\"" May 13 00:30:33.142755 containerd[1465]: time="2025-05-13T00:30:33.142477074Z" level=info msg="StartContainer for \"741eac84c9478b75c72229ee5754b2e8417a2358a417a8a63062d5070e1d7b89\"" May 13 00:30:33.172834 systemd[1]: Started cri-containerd-741eac84c9478b75c72229ee5754b2e8417a2358a417a8a63062d5070e1d7b89.scope - libcontainer container 741eac84c9478b75c72229ee5754b2e8417a2358a417a8a63062d5070e1d7b89. 
May 13 00:30:33.214127 containerd[1465]: time="2025-05-13T00:30:33.214082831Z" level=info msg="StartContainer for \"741eac84c9478b75c72229ee5754b2e8417a2358a417a8a63062d5070e1d7b89\" returns successfully" May 13 00:30:34.078566 kubelet[2601]: E0513 00:30:34.078498 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:34.175523 kubelet[2601]: E0513 00:30:34.175491 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:34.185464 kubelet[2601]: I0513 00:30:34.185394 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bd4ccff46-rtfkm" podStartSLOduration=2.306056253 podStartE2EDuration="5.185376822s" podCreationTimestamp="2025-05-13 00:30:29 +0000 UTC" firstStartedPulling="2025-05-13 00:30:30.238071679 +0000 UTC m=+22.241715800" lastFinishedPulling="2025-05-13 00:30:33.117392238 +0000 UTC m=+25.121036369" observedRunningTime="2025-05-13 00:30:34.184989582 +0000 UTC m=+26.188633723" watchObservedRunningTime="2025-05-13 00:30:34.185376822 +0000 UTC m=+26.189020943" May 13 00:30:34.218686 kubelet[2601]: E0513 00:30:34.218629 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:30:34.218686 kubelet[2601]: W0513 00:30:34.218655 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:30:34.218686 kubelet[2601]: E0513 00:30:34.218674 2601 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the identical driver-call.go:262 / driver-call.go:149 / plugins.go:730 FlexVolume error triplet repeats ~30 more times, 00:30:34.218922 through 00:30:34.239644; duplicate entries omitted]
May 13 00:30:34.914026 containerd[1465]: time="2025-05-13T00:30:34.913965575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:34.914864 containerd[1465]: time="2025-05-13T00:30:34.914821168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 00:30:34.916003 containerd[1465]: time="2025-05-13T00:30:34.915974503Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:34.919726 containerd[1465]: time="2025-05-13T00:30:34.919593034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.80198019s" May 13 00:30:34.919726 containerd[1465]: time="2025-05-13T00:30:34.919681090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 00:30:34.920220 containerd[1465]: time="2025-05-13T00:30:34.920177376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:34.924075 containerd[1465]: time="2025-05-13T00:30:34.924045007Z" level=info msg="CreateContainer within sandbox \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:30:34.938768 containerd[1465]: time="2025-05-13T00:30:34.938724471Z" level=info msg="CreateContainer within sandbox \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9\"" May 13 00:30:34.939271 containerd[1465]: time="2025-05-13T00:30:34.939217510Z" level=info msg="StartContainer for \"9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9\"" May 13 00:30:34.972837 systemd[1]: Started cri-containerd-9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9.scope - libcontainer container 9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9. May 13 00:30:35.002335 containerd[1465]: time="2025-05-13T00:30:35.002294282Z" level=info msg="StartContainer for \"9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9\" returns successfully" May 13 00:30:35.016697 systemd[1]: cri-containerd-9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9.scope: Deactivated successfully.
May 13 00:30:35.085865 containerd[1465]: time="2025-05-13T00:30:35.085797684Z" level=info msg="shim disconnected" id=9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9 namespace=k8s.io May 13 00:30:35.085865 containerd[1465]: time="2025-05-13T00:30:35.085847419Z" level=warning msg="cleaning up after shim disconnected" id=9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9 namespace=k8s.io May 13 00:30:35.085865 containerd[1465]: time="2025-05-13T00:30:35.085856706Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:35.122636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9612241b776f1800a7de6f4cfcf47a7dbe83fc95acb3ffec4e7bd49c83fcdaa9-rootfs.mount: Deactivated successfully. May 13 00:30:35.177423 kubelet[2601]: I0513 00:30:35.177308 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:30:35.178148 kubelet[2601]: E0513 00:30:35.178117 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:35.179086 kubelet[2601]: E0513 00:30:35.178399 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:35.179174 containerd[1465]: time="2025-05-13T00:30:35.179051152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:30:36.078209 kubelet[2601]: E0513 00:30:36.078122 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:37.932120 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:60402.service - OpenSSH per-connection server daemon (10.0.0.1:60402). May 13 00:30:37.981288 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 60402 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:37.982760 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:37.986525 systemd-logind[1449]: New session 9 of user core. May 13 00:30:37.991881 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:30:38.086354 kubelet[2601]: E0513 00:30:38.086308 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:38.098463 sshd[3332]: pam_unix(sshd:session): session closed for user core May 13 00:30:38.103007 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:60402.service: Deactivated successfully. May 13 00:30:38.104906 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:30:38.105472 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. May 13 00:30:38.106422 systemd-logind[1449]: Removed session 9. 
May 13 00:30:40.077941 kubelet[2601]: E0513 00:30:40.077880 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:40.851571 containerd[1465]: time="2025-05-13T00:30:40.851507042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:40.852353 containerd[1465]: time="2025-05-13T00:30:40.852299865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 00:30:40.853571 containerd[1465]: time="2025-05-13T00:30:40.853542954Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:40.855690 containerd[1465]: time="2025-05-13T00:30:40.855660630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:40.856358 containerd[1465]: time="2025-05-13T00:30:40.856324390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.677239124s" May 13 00:30:40.856393 containerd[1465]: time="2025-05-13T00:30:40.856358194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 00:30:40.859469 containerd[1465]: time="2025-05-13T00:30:40.859428984Z" level=info msg="CreateContainer within sandbox \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:30:40.875495 containerd[1465]: time="2025-05-13T00:30:40.875439357Z" level=info msg="CreateContainer within sandbox \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55\"" May 13 00:30:40.876051 containerd[1465]: time="2025-05-13T00:30:40.875998410Z" level=info msg="StartContainer for \"937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55\"" May 13 00:30:40.916966 systemd[1]: Started cri-containerd-937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55.scope - libcontainer container 937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55. 
May 13 00:30:41.106676 containerd[1465]: time="2025-05-13T00:30:41.106227876Z" level=info msg="StartContainer for \"937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55\" returns successfully" May 13 00:30:42.078266 kubelet[2601]: E0513 00:30:42.078206 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:42.195838 kubelet[2601]: E0513 00:30:42.195806 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:42.605976 containerd[1465]: time="2025-05-13T00:30:42.605759493Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:30:42.615990 systemd[1]: cri-containerd-937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55.scope: Deactivated successfully. May 13 00:30:42.635249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55-rootfs.mount: Deactivated successfully. May 13 00:30:42.662381 kubelet[2601]: I0513 00:30:42.662330 2601 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:30:42.766763 kubelet[2601]: I0513 00:30:42.766656 2601 topology_manager.go:215] "Topology Admit Handler" podUID="cca342e9-5897-4c4b-a21f-fa72a6bbaed0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bdk6n" May 13 00:30:42.767811 kubelet[2601]: I0513 00:30:42.767774 2601 topology_manager.go:215] "Topology Admit Handler" podUID="34fdffa2-72d1-4818-a707-8a94261c161f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nntb9" May 13 00:30:42.769214 kubelet[2601]: I0513 00:30:42.769180 2601 topology_manager.go:215] "Topology Admit Handler" podUID="bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5" podNamespace="calico-apiserver" podName="calico-apiserver-7df7d6b8db-vgcjq" May 13 00:30:42.771929 kubelet[2601]: I0513 00:30:42.771896 2601 topology_manager.go:215] "Topology Admit Handler" podUID="fa5c31c3-8ed9-46d3-b4b2-83261ae6da34" podNamespace="calico-apiserver" podName="calico-apiserver-7df7d6b8db-hzpnr" May 13 00:30:42.772061 kubelet[2601]: I0513 00:30:42.772037 2601 topology_manager.go:215] "Topology Admit Handler" podUID="3f24e1a4-b6cd-453b-b46b-da5f52cab0da" podNamespace="calico-system" podName="calico-kube-controllers-5574457795-mzvpl" May 13 00:30:42.779951 systemd[1]: Created slice kubepods-burstable-pod34fdffa2_72d1_4818_a707_8a94261c161f.slice - libcontainer container kubepods-burstable-pod34fdffa2_72d1_4818_a707_8a94261c161f.slice. May 13 00:30:42.784340 systemd[1]: Created slice kubepods-burstable-podcca342e9_5897_4c4b_a21f_fa72a6bbaed0.slice - libcontainer container kubepods-burstable-podcca342e9_5897_4c4b_a21f_fa72a6bbaed0.slice. 
May 13 00:30:42.788959 kubelet[2601]: I0513 00:30:42.788935 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fdffa2-72d1-4818-a707-8a94261c161f-config-volume\") pod \"coredns-7db6d8ff4d-nntb9\" (UID: \"34fdffa2-72d1-4818-a707-8a94261c161f\") " pod="kube-system/coredns-7db6d8ff4d-nntb9" May 13 00:30:42.789045 kubelet[2601]: I0513 00:30:42.788968 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv2n8\" (UniqueName: \"kubernetes.io/projected/34fdffa2-72d1-4818-a707-8a94261c161f-kube-api-access-gv2n8\") pod \"coredns-7db6d8ff4d-nntb9\" (UID: \"34fdffa2-72d1-4818-a707-8a94261c161f\") " pod="kube-system/coredns-7db6d8ff4d-nntb9" May 13 00:30:42.789045 kubelet[2601]: I0513 00:30:42.788989 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49cf2\" (UniqueName: \"kubernetes.io/projected/bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5-kube-api-access-49cf2\") pod \"calico-apiserver-7df7d6b8db-vgcjq\" (UID: \"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5\") " pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" May 13 00:30:42.789045 kubelet[2601]: I0513 00:30:42.789012 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f24e1a4-b6cd-453b-b46b-da5f52cab0da-tigera-ca-bundle\") pod \"calico-kube-controllers-5574457795-mzvpl\" (UID: \"3f24e1a4-b6cd-453b-b46b-da5f52cab0da\") " pod="calico-system/calico-kube-controllers-5574457795-mzvpl" May 13 00:30:42.789045 kubelet[2601]: I0513 00:30:42.789030 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa5c31c3-8ed9-46d3-b4b2-83261ae6da34-calico-apiserver-certs\") pod \"calico-apiserver-7df7d6b8db-hzpnr\" (UID: \"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34\") " pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" May 13 00:30:42.789045 kubelet[2601]: I0513 00:30:42.789045 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cca342e9-5897-4c4b-a21f-fa72a6bbaed0-config-volume\") pod \"coredns-7db6d8ff4d-bdk6n\" (UID: \"cca342e9-5897-4c4b-a21f-fa72a6bbaed0\") " pod="kube-system/coredns-7db6d8ff4d-bdk6n" May 13 00:30:42.789296 kubelet[2601]: I0513 00:30:42.789064 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9m95\" (UniqueName: \"kubernetes.io/projected/3f24e1a4-b6cd-453b-b46b-da5f52cab0da-kube-api-access-c9m95\") pod \"calico-kube-controllers-5574457795-mzvpl\" (UID: \"3f24e1a4-b6cd-453b-b46b-da5f52cab0da\") " pod="calico-system/calico-kube-controllers-5574457795-mzvpl" May 13 00:30:42.789296 kubelet[2601]: I0513 00:30:42.789080 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58sl\" (UniqueName: \"kubernetes.io/projected/cca342e9-5897-4c4b-a21f-fa72a6bbaed0-kube-api-access-b58sl\") pod \"coredns-7db6d8ff4d-bdk6n\" (UID: \"cca342e9-5897-4c4b-a21f-fa72a6bbaed0\") " pod="kube-system/coredns-7db6d8ff4d-bdk6n" May 13 00:30:42.789296 kubelet[2601]: I0513 00:30:42.789094 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-44zjb\" (UniqueName: \"kubernetes.io/projected/fa5c31c3-8ed9-46d3-b4b2-83261ae6da34-kube-api-access-44zjb\") pod \"calico-apiserver-7df7d6b8db-hzpnr\" (UID: \"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34\") " pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" May 13 00:30:42.789296 kubelet[2601]: I0513 00:30:42.789111 2601 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5-calico-apiserver-certs\") pod \"calico-apiserver-7df7d6b8db-vgcjq\" (UID: \"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5\") " pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" May 13 00:30:42.789373 systemd[1]: Created slice kubepods-besteffort-podbc0f314a_a9ef_4b3b_ac1b_b2957bf09ae5.slice - libcontainer container kubepods-besteffort-podbc0f314a_a9ef_4b3b_ac1b_b2957bf09ae5.slice. May 13 00:30:42.794480 systemd[1]: Created slice kubepods-besteffort-pod3f24e1a4_b6cd_453b_b46b_da5f52cab0da.slice - libcontainer container kubepods-besteffort-pod3f24e1a4_b6cd_453b_b46b_da5f52cab0da.slice. May 13 00:30:42.799274 systemd[1]: Created slice kubepods-besteffort-podfa5c31c3_8ed9_46d3_b4b2_83261ae6da34.slice - libcontainer container kubepods-besteffort-podfa5c31c3_8ed9_46d3_b4b2_83261ae6da34.slice. May 13 00:30:43.037544 containerd[1465]: time="2025-05-13T00:30:43.037371788Z" level=info msg="shim disconnected" id=937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55 namespace=k8s.io May 13 00:30:43.037544 containerd[1465]: time="2025-05-13T00:30:43.037425629Z" level=warning msg="cleaning up after shim disconnected" id=937749b3a7376f478171a40c00164abeb00cf4b004cbce28d67121d81b911c55 namespace=k8s.io May 13 00:30:43.037544 containerd[1465]: time="2025-05-13T00:30:43.037433824Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:43.082284 kubelet[2601]: E0513 00:30:43.082243 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:43.082882 containerd[1465]: time="2025-05-13T00:30:43.082838512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nntb9,Uid:34fdffa2-72d1-4818-a707-8a94261c161f,Namespace:kube-system,Attempt:0,}" May 13 00:30:43.087535 kubelet[2601]: E0513 00:30:43.087513 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:43.087852 containerd[1465]: time="2025-05-13T00:30:43.087786719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdk6n,Uid:cca342e9-5897-4c4b-a21f-fa72a6bbaed0,Namespace:kube-system,Attempt:0,}" May 13 00:30:43.092840 containerd[1465]: time="2025-05-13T00:30:43.092804809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-vgcjq,Uid:bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5,Namespace:calico-apiserver,Attempt:0,}" May 13 00:30:43.097637 containerd[1465]: time="2025-05-13T00:30:43.097432293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5574457795-mzvpl,Uid:3f24e1a4-b6cd-453b-b46b-da5f52cab0da,Namespace:calico-system,Attempt:0,}" May 13 00:30:43.102259 containerd[1465]: time="2025-05-13T00:30:43.102231360Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-hzpnr,Uid:fa5c31c3-8ed9-46d3-b4b2-83261ae6da34,Namespace:calico-apiserver,Attempt:0,}" May 13 00:30:43.121111 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:37230.service - OpenSSH per-connection server daemon (10.0.0.1:37230). May 13 00:30:43.165281 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 37230 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:43.166390 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:43.175541 systemd-logind[1449]: New session 10 of user core. May 13 00:30:43.180070 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:30:43.202375 kubelet[2601]: E0513 00:30:43.202341 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:43.209943 containerd[1465]: time="2025-05-13T00:30:43.209904706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:30:43.215869 containerd[1465]: time="2025-05-13T00:30:43.215817649Z" level=error msg="Failed to destroy network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.217244 containerd[1465]: time="2025-05-13T00:30:43.217124007Z" level=error msg="Failed to destroy network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.217244 containerd[1465]: time="2025-05-13T00:30:43.217370320Z" level=error msg="encountered an error cleaning up failed sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.217244 containerd[1465]: time="2025-05-13T00:30:43.217424582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nntb9,Uid:34fdffa2-72d1-4818-a707-8a94261c161f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.217679 kubelet[2601]: E0513 00:30:43.217627 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.217762 kubelet[2601]: E0513 00:30:43.217721 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nntb9" May 13 00:30:43.217762 kubelet[2601]: E0513 00:30:43.217742 2601 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nntb9" May 13 00:30:43.217836 kubelet[2601]: E0513 00:30:43.217798 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nntb9_kube-system(34fdffa2-72d1-4818-a707-8a94261c161f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nntb9_kube-system(34fdffa2-72d1-4818-a707-8a94261c161f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nntb9" podUID="34fdffa2-72d1-4818-a707-8a94261c161f" May 13 00:30:43.217979 containerd[1465]: time="2025-05-13T00:30:43.217937467Z" level=error msg="encountered an error cleaning up failed sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.218091 containerd[1465]: time="2025-05-13T00:30:43.218072050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdk6n,Uid:cca342e9-5897-4c4b-a21f-fa72a6bbaed0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.218546 kubelet[2601]: E0513 00:30:43.218414 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.218546 kubelet[2601]: E0513 00:30:43.218454 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bdk6n" May 13 00:30:43.218546 kubelet[2601]: E0513 
00:30:43.218470 2601 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bdk6n" May 13 00:30:43.218794 kubelet[2601]: E0513 00:30:43.218513 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bdk6n_kube-system(cca342e9-5897-4c4b-a21f-fa72a6bbaed0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bdk6n_kube-system(cca342e9-5897-4c4b-a21f-fa72a6bbaed0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bdk6n" podUID="cca342e9-5897-4c4b-a21f-fa72a6bbaed0" May 13 00:30:43.243416 containerd[1465]: time="2025-05-13T00:30:43.243333547Z" level=error msg="Failed to destroy network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.244515 containerd[1465]: time="2025-05-13T00:30:43.244005682Z" level=error msg="encountered an error cleaning up failed sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.244515 containerd[1465]: time="2025-05-13T00:30:43.244052580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-vgcjq,Uid:bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.244825 kubelet[2601]: E0513 00:30:43.244257 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.244825 kubelet[2601]: E0513 00:30:43.244309 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" May 13 00:30:43.244825 kubelet[2601]: E0513 00:30:43.244335 2601 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" May 13 00:30:43.244913 kubelet[2601]: E0513 00:30:43.244370 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7df7d6b8db-vgcjq_calico-apiserver(bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7df7d6b8db-vgcjq_calico-apiserver(bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" podUID="bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5" May 13 00:30:43.253047 containerd[1465]: time="2025-05-13T00:30:43.252981705Z" level=error msg="Failed to destroy network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.253592 containerd[1465]: time="2025-05-13T00:30:43.253553331Z" level=error msg="encountered an error cleaning up failed sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.253681 containerd[1465]: time="2025-05-13T00:30:43.253662246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5574457795-mzvpl,Uid:3f24e1a4-b6cd-453b-b46b-da5f52cab0da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.254064 kubelet[2601]: E0513 00:30:43.254021 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.254124 kubelet[2601]: E0513 00:30:43.254076 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5574457795-mzvpl" May 13 00:30:43.254124 kubelet[2601]: E0513 00:30:43.254099 2601 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5574457795-mzvpl" May 13 00:30:43.254189 kubelet[2601]: E0513 00:30:43.254150 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5574457795-mzvpl_calico-system(3f24e1a4-b6cd-453b-b46b-da5f52cab0da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5574457795-mzvpl_calico-system(3f24e1a4-b6cd-453b-b46b-da5f52cab0da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5574457795-mzvpl" podUID="3f24e1a4-b6cd-453b-b46b-da5f52cab0da" May 13 00:30:43.266603 containerd[1465]: time="2025-05-13T00:30:43.266538998Z" level=error msg="Failed to destroy network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.267181 containerd[1465]: time="2025-05-13T00:30:43.267158362Z" level=error msg="encountered an error cleaning up failed sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.267233 containerd[1465]: time="2025-05-13T00:30:43.267198838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-hzpnr,Uid:fa5c31c3-8ed9-46d3-b4b2-83261ae6da34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.267631 kubelet[2601]: E0513 00:30:43.267478 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:43.267631 kubelet[2601]: E0513 00:30:43.267526 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" May 13 00:30:43.267631 kubelet[2601]: E0513 00:30:43.267553 2601 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" May 13 00:30:43.267772 kubelet[2601]: E0513 00:30:43.267591 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7df7d6b8db-hzpnr_calico-apiserver(fa5c31c3-8ed9-46d3-b4b2-83261ae6da34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7df7d6b8db-hzpnr_calico-apiserver(fa5c31c3-8ed9-46d3-b4b2-83261ae6da34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" podUID="fa5c31c3-8ed9-46d3-b4b2-83261ae6da34" May 13 00:30:43.294436 sshd[3438]: pam_unix(sshd:session): session closed for user core May 13 00:30:43.298970 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:37230.service: Deactivated successfully. May 13 00:30:43.301132 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:30:43.301760 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. May 13 00:30:43.302579 systemd-logind[1449]: Removed session 10. May 13 00:30:44.086895 systemd[1]: Created slice kubepods-besteffort-podae4c6d9d_b177_405f_84c6_f30031c5dd17.slice - libcontainer container kubepods-besteffort-podae4c6d9d_b177_405f_84c6_f30031c5dd17.slice. 
May 13 00:30:44.088888 containerd[1465]: time="2025-05-13T00:30:44.088855156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9vdg,Uid:ae4c6d9d-b177-405f-84c6-f30031c5dd17,Namespace:calico-system,Attempt:0,}" May 13 00:30:44.205062 kubelet[2601]: I0513 00:30:44.205027 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:30:44.205747 containerd[1465]: time="2025-05-13T00:30:44.205686249Z" level=info msg="StopPodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\"" May 13 00:30:44.205908 containerd[1465]: time="2025-05-13T00:30:44.205875716Z" level=info msg="Ensure that sandbox 93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e in task-service has been cleanup successfully" May 13 00:30:44.206753 kubelet[2601]: I0513 00:30:44.206735 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:30:44.207411 containerd[1465]: time="2025-05-13T00:30:44.207378693Z" level=info msg="StopPodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\"" May 13 00:30:44.207744 containerd[1465]: time="2025-05-13T00:30:44.207720936Z" level=info msg="Ensure that sandbox cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1 in task-service has been cleanup successfully" May 13 00:30:44.207887 kubelet[2601]: I0513 00:30:44.207860 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:30:44.208363 containerd[1465]: time="2025-05-13T00:30:44.208323749Z" level=info msg="StopPodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\"" May 13 00:30:44.208554 containerd[1465]: time="2025-05-13T00:30:44.208529156Z" level=info msg="Ensure that sandbox cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046 in task-service has been cleanup successfully" May 13 00:30:44.225116 kubelet[2601]: I0513 00:30:44.225050 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:30:44.226075 containerd[1465]: time="2025-05-13T00:30:44.225910014Z" level=info msg="StopPodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\"" May 13 00:30:44.226571 containerd[1465]: time="2025-05-13T00:30:44.226544738Z" level=info msg="Ensure that sandbox 76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c in task-service has been cleanup successfully" May 13 00:30:44.229191 kubelet[2601]: I0513 00:30:44.229158 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:30:44.229819 containerd[1465]: time="2025-05-13T00:30:44.229789740Z" level=info msg="StopPodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\"" May 13 00:30:44.230225 containerd[1465]: time="2025-05-13T00:30:44.229913413Z" level=info msg="Ensure that sandbox 041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202 in task-service has been cleanup successfully" May 13 00:30:44.276266 containerd[1465]: time="2025-05-13T00:30:44.276209873Z" level=error msg="StopPodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" 
failed" error="failed to destroy network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.277003 containerd[1465]: time="2025-05-13T00:30:44.276950296Z" level=error msg="StopPodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" failed" error="failed to destroy network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.277307 kubelet[2601]: E0513 00:30:44.277261 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:30:44.277372 kubelet[2601]: E0513 00:30:44.277324 2601 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c"} May 13 00:30:44.277410 kubelet[2601]: E0513 00:30:44.277377 2601 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cca342e9-5897-4c4b-a21f-fa72a6bbaed0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:30:44.277410 kubelet[2601]: E0513 00:30:44.277399 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cca342e9-5897-4c4b-a21f-fa72a6bbaed0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bdk6n" podUID="cca342e9-5897-4c4b-a21f-fa72a6bbaed0" May 13 00:30:44.277557 kubelet[2601]: E0513 00:30:44.277261 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:30:44.277557 kubelet[2601]: E0513 00:30:44.277424 2601 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e"} May 13 00:30:44.277557 
kubelet[2601]: E0513 00:30:44.277440 2601 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:30:44.277557 kubelet[2601]: E0513 00:30:44.277454 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" podUID="fa5c31c3-8ed9-46d3-b4b2-83261ae6da34" May 13 00:30:44.285540 containerd[1465]: time="2025-05-13T00:30:44.285388545Z" level=error msg="StopPodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" failed" error="failed to destroy network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.285883 kubelet[2601]: E0513 00:30:44.285764 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:30:44.285883 kubelet[2601]: E0513 00:30:44.285803 2601 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202"} May 13 00:30:44.285883 kubelet[2601]: E0513 00:30:44.285835 2601 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34fdffa2-72d1-4818-a707-8a94261c161f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:30:44.285883 kubelet[2601]: E0513 00:30:44.285856 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34fdffa2-72d1-4818-a707-8a94261c161f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nntb9" podUID="34fdffa2-72d1-4818-a707-8a94261c161f" May 13 00:30:44.287502 containerd[1465]: time="2025-05-13T00:30:44.287414646Z" level=error msg="StopPodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" failed" error="failed to destroy network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.287648 kubelet[2601]: E0513 00:30:44.287567 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:30:44.287648 kubelet[2601]: E0513 00:30:44.287592 2601 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1"} May 13 00:30:44.287648 kubelet[2601]: E0513 00:30:44.287639 2601 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f24e1a4-b6cd-453b-b46b-da5f52cab0da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:30:44.287893 kubelet[2601]: E0513 00:30:44.287654 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f24e1a4-b6cd-453b-b46b-da5f52cab0da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5574457795-mzvpl" podUID="3f24e1a4-b6cd-453b-b46b-da5f52cab0da" May 13 00:30:44.288145 containerd[1465]: time="2025-05-13T00:30:44.288094013Z" level=error msg="StopPodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" failed" error="failed to destroy network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.288308 kubelet[2601]: E0513 00:30:44.288273 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:30:44.288308 kubelet[2601]: E0513 00:30:44.288303 2601 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046"} May 13 00:30:44.288373 kubelet[2601]: E0513 00:30:44.288326 2601 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:30:44.288373 kubelet[2601]: E0513 00:30:44.288346 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" podUID="bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5" May 13 00:30:44.291893 containerd[1465]: time="2025-05-13T00:30:44.291848484Z" level=error msg="Failed to destroy network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.292302 containerd[1465]: time="2025-05-13T00:30:44.292252604Z" level=error msg="encountered an error cleaning up failed sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.292412 containerd[1465]: time="2025-05-13T00:30:44.292304522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9vdg,Uid:ae4c6d9d-b177-405f-84c6-f30031c5dd17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.292974 kubelet[2601]: E0513 00:30:44.292638 2601 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:44.292974 kubelet[2601]: E0513 00:30:44.292721 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:44.292974 kubelet[2601]: E0513 00:30:44.292741 2601 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b9vdg" May 13 00:30:44.293129 kubelet[2601]: E0513 00:30:44.292777 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b9vdg_calico-system(ae4c6d9d-b177-405f-84c6-f30031c5dd17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b9vdg_calico-system(ae4c6d9d-b177-405f-84c6-f30031c5dd17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:44.294102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2-shm.mount: Deactivated successfully. May 13 00:30:45.233099 kubelet[2601]: I0513 00:30:45.233063 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:30:45.233737 containerd[1465]: time="2025-05-13T00:30:45.233688907Z" level=info msg="StopPodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\"" May 13 00:30:45.233987 containerd[1465]: time="2025-05-13T00:30:45.233874054Z" level=info msg="Ensure that sandbox f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2 in task-service has been cleanup successfully" May 13 00:30:45.260922 containerd[1465]: time="2025-05-13T00:30:45.260853577Z" level=error msg="StopPodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" failed" error="failed to destroy network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:30:45.261138 kubelet[2601]: E0513 00:30:45.261087 2601 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:30:45.261197 kubelet[2601]: E0513 00:30:45.261138 2601 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2"} May 13 00:30:45.261197 kubelet[2601]: E0513 00:30:45.261171 2601 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:30:45.261292 kubelet[2601]: E0513 00:30:45.261194 2601 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae4c6d9d-b177-405f-84c6-f30031c5dd17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b9vdg" podUID="ae4c6d9d-b177-405f-84c6-f30031c5dd17" May 13 00:30:48.313535 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:44184.service - OpenSSH per-connection server daemon (10.0.0.1:44184). May 13 00:30:48.362087 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 44184 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:48.363624 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:48.367935 systemd-logind[1449]: New session 11 of user core. May 13 00:30:48.377902 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:30:48.496072 sshd[3797]: pam_unix(sshd:session): session closed for user core May 13 00:30:48.506876 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:44184.service: Deactivated successfully. May 13 00:30:48.509167 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:30:48.511101 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. May 13 00:30:48.517101 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:44198.service - OpenSSH per-connection server daemon (10.0.0.1:44198). May 13 00:30:48.518137 systemd-logind[1449]: Removed session 11. May 13 00:30:48.545041 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 44198 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:48.546719 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:48.551453 systemd-logind[1449]: New session 12 of user core. May 13 00:30:48.559883 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:30:48.715822 sshd[3815]: pam_unix(sshd:session): session closed for user core May 13 00:30:48.725927 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:44198.service: Deactivated successfully. May 13 00:30:48.733199 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:30:48.735931 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. May 13 00:30:48.744110 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:44200.service - OpenSSH per-connection server daemon (10.0.0.1:44200). May 13 00:30:48.746350 systemd-logind[1449]: Removed session 12. 
May 13 00:30:48.777512 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 44200 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:48.779506 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:48.784445 systemd-logind[1449]: New session 13 of user core. May 13 00:30:48.791061 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:30:48.915331 sshd[3828]: pam_unix(sshd:session): session closed for user core May 13 00:30:48.918156 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:44200.service: Deactivated successfully. May 13 00:30:48.920070 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:30:48.922377 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. May 13 00:30:48.923773 systemd-logind[1449]: Removed session 13. May 13 00:30:50.065697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690092858.mount: Deactivated successfully. May 13 00:30:51.477267 containerd[1465]: time="2025-05-13T00:30:51.477215175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:51.478369 containerd[1465]: time="2025-05-13T00:30:51.478333556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 00:30:51.479813 containerd[1465]: time="2025-05-13T00:30:51.479765296Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:51.512617 containerd[1465]: time="2025-05-13T00:30:51.512567408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:30:51.513615 containerd[1465]: time="2025-05-13T00:30:51.513342484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.303193268s" May 13 00:30:51.513615 containerd[1465]: time="2025-05-13T00:30:51.513380876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 00:30:51.540780 containerd[1465]: time="2025-05-13T00:30:51.540729721Z" level=info msg="CreateContainer within sandbox \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:30:51.570588 containerd[1465]: time="2025-05-13T00:30:51.570543048Z" level=info msg="CreateContainer within sandbox \"79b1c57151bcca3f4273e92c5896dba9962a592f213856e866663ee875b0e700\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a3b2235f0d93fe03f5ecb65ecd52aa1c9a0634a3dadae03a8c08a4c14401c64\"" May 13 00:30:51.571673 containerd[1465]: time="2025-05-13T00:30:51.571607397Z" level=info msg="StartContainer for \"2a3b2235f0d93fe03f5ecb65ecd52aa1c9a0634a3dadae03a8c08a4c14401c64\"" May 13 00:30:51.640853 systemd[1]: Started cri-containerd-2a3b2235f0d93fe03f5ecb65ecd52aa1c9a0634a3dadae03a8c08a4c14401c64.scope - libcontainer container 
2a3b2235f0d93fe03f5ecb65ecd52aa1c9a0634a3dadae03a8c08a4c14401c64. May 13 00:30:51.859097 containerd[1465]: time="2025-05-13T00:30:51.859054407Z" level=info msg="StartContainer for \"2a3b2235f0d93fe03f5ecb65ecd52aa1c9a0634a3dadae03a8c08a4c14401c64\" returns successfully" May 13 00:30:51.886024 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:30:51.886143 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 13 00:30:52.247885 kubelet[2601]: E0513 00:30:52.247762 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:52.258437 kubelet[2601]: I0513 00:30:52.258293 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8rlsp" podStartSLOduration=2.006507201 podStartE2EDuration="23.258275065s" podCreationTimestamp="2025-05-13 00:30:29 +0000 UTC" firstStartedPulling="2025-05-13 00:30:30.262453221 +0000 UTC m=+22.266097342" lastFinishedPulling="2025-05-13 00:30:51.514221085 +0000 UTC m=+43.517865206" observedRunningTime="2025-05-13 00:30:52.257478188 +0000 UTC m=+44.261122309" watchObservedRunningTime="2025-05-13 00:30:52.258275065 +0000 UTC m=+44.261919186" May 13 00:30:53.274571 kubelet[2601]: I0513 00:30:53.274521 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:30:53.275249 kubelet[2601]: E0513 00:30:53.275228 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:53.927153 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). May 13 00:30:53.963414 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:53.965553 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:53.969913 systemd-logind[1449]: New session 14 of user core. May 13 00:30:53.979862 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:30:54.094623 sshd[4015]: pam_unix(sshd:session): session closed for user core May 13 00:30:54.098455 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:44208.service: Deactivated successfully. May 13 00:30:54.100570 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:30:54.101325 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. May 13 00:30:54.102281 systemd-logind[1449]: Removed session 14.
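The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure with the image-pull window subtracted. A worked check of the arithmetic in Go (reproducing the numbers only, not kubelet's implementation):

package main

import (
	"fmt"
	"time"
)

// ts parses the timestamps quoted from the log line above.
func ts(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-05-13T00:30:29Z")             // podCreationTimestamp
	firstPull := ts("2025-05-13T00:30:30.262453221Z") // firstStartedPulling
	lastPull := ts("2025-05-13T00:30:51.514221085Z")  // lastFinishedPulling
	observed := ts("2025-05-13T00:30:52.258275065Z")  // watchObservedRunningTime

	e2e := observed.Sub(created)       // 23.258275065s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 21.251767864s spent pulling calico/node
	slo := e2e - pulling               // 2.006507201s = podStartSLOduration
	fmt.Println(e2e, pulling, slo)
}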
May 13 00:30:54.485093 kubelet[2601]: I0513 00:30:54.485047 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:30:54.485649 kubelet[2601]: E0513 00:30:54.485585 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:55.078508 containerd[1465]: time="2025-05-13T00:30:55.078463577Z" level=info msg="StopPodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\"" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.128 [INFO][4074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.128 [INFO][4074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" iface="eth0" netns="/var/run/netns/cni-fd629c7d-ee79-b37e-a9c9-f603ab12cbf7" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.128 [INFO][4074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" iface="eth0" netns="/var/run/netns/cni-fd629c7d-ee79-b37e-a9c9-f603ab12cbf7" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.129 [INFO][4074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" iface="eth0" netns="/var/run/netns/cni-fd629c7d-ee79-b37e-a9c9-f603ab12cbf7" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.129 [INFO][4074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.129 [INFO][4074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.180 [INFO][4084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.180 [INFO][4084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.180 [INFO][4084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.186 [WARNING][4084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.186 [INFO][4084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.188 [INFO][4084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:55.193808 containerd[1465]: 2025-05-13 00:30:55.191 [INFO][4074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:30:55.194190 containerd[1465]: time="2025-05-13T00:30:55.193982231Z" level=info msg="TearDown network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" successfully" May 13 00:30:55.194190 containerd[1465]: time="2025-05-13T00:30:55.194015643Z" level=info msg="StopPodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" returns successfully" May 13 00:30:55.195003 kubelet[2601]: E0513 00:30:55.194951 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:55.195994 containerd[1465]: time="2025-05-13T00:30:55.195323960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdk6n,Uid:cca342e9-5897-4c4b-a21f-fa72a6bbaed0,Namespace:kube-system,Attempt:1,}" May 13 00:30:55.196678 systemd[1]: run-netns-cni\x2dfd629c7d\x2dee79\x2db37e\x2da9c9\x2df603ab12cbf7.mount: Deactivated successfully. 
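The \x2d sequences in the run-netns mount unit above are systemd's escaping of '-' in unit names; the underlying netns is /var/run/netns/cni-fd629c7d-ee79-b37e-a9c9-f603ab12cbf7 from the teardown entries. A small helper to undo that escaping (equivalent in spirit to systemd-escape --unescape; not systemd's code):

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// systemd escapes '-' and other reserved bytes in unit names as \xNN.
var esc = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

// unescapeUnit reverses that escaping for display purposes.
func unescapeUnit(s string) string {
	return esc.ReplaceAllStringFunc(s, func(m string) string {
		b, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(b))
	})
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2dfd629c7d\x2dee79\x2db37e\x2da9c9\x2df603ab12cbf7.mount`))
	// Output: run-netns-cni-fd629c7d-ee79-b37e-a9c9-f603ab12cbf7.mount
}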
May 13 00:30:55.281949 kubelet[2601]: E0513 00:30:55.281911 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:55.350249 systemd-networkd[1397]: cali4da8a03272a: Link UP May 13 00:30:55.351655 systemd-networkd[1397]: cali4da8a03272a: Gained carrier May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.244 [INFO][4092] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.263 [INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0 coredns-7db6d8ff4d- kube-system cca342e9-5897-4c4b-a21f-fa72a6bbaed0 939 0 2025-05-13 00:30:23 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-bdk6n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4da8a03272a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.264 [INFO][4092] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.298 [INFO][4132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" HandleID="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.310 [INFO][4132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" HandleID="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-bdk6n", "timestamp":"2025-05-13 00:30:55.298262475 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.310 [INFO][4132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.310 [INFO][4132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.310 [INFO][4132] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.312 [INFO][4132] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.317 [INFO][4132] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.321 [INFO][4132] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.323 [INFO][4132] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.325 [INFO][4132] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.325 [INFO][4132] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.327 [INFO][4132] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05 May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.331 [INFO][4132] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.336 [INFO][4132] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.336 [INFO][4132] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" host="localhost" May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.336 [INFO][4132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
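The IPAM entries above show Calico's block-affinity model: this host holds the affine /26 block 192.168.88.128/26 and, under the host-wide lock, claims the first free address in it, 192.168.88.129, for the coredns pod (the csi-node-driver pod later receives .130 the same way). A toy Go sketch of next-free-address selection under that model, illustrative only and far simpler than Calico's allocator:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first unclaimed address in the block after the
// network address itself. Calico's real IPAM additionally persists
// handles and block state in the datastore.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := map[netip.Addr]bool{}
	for i := 0; i < 2; i++ {
		a, ok := nextFree(block, claimed)
		if !ok {
			break
		}
		claimed[a] = true
		fmt.Println(a) // 192.168.88.129, then 192.168.88.130
	}
}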
May 13 00:30:55.368266 containerd[1465]: 2025-05-13 00:30:55.336 [INFO][4132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" HandleID="k8s-pod-network.b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.368847 containerd[1465]: 2025-05-13 00:30:55.339 [INFO][4092] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cca342e9-5897-4c4b-a21f-fa72a6bbaed0", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-bdk6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4da8a03272a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:55.368847 containerd[1465]: 2025-05-13 00:30:55.339 [INFO][4092] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.368847 containerd[1465]: 2025-05-13 00:30:55.339 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4da8a03272a ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.368847 containerd[1465]: 2025-05-13 00:30:55.352 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.368847 containerd[1465]: 2025-05-13 00:30:55.353
[INFO][4092] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cca342e9-5897-4c4b-a21f-fa72a6bbaed0", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05", Pod:"coredns-7db6d8ff4d-bdk6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4da8a03272a", MAC:"52:63:a4:c5:ec:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:55.368847 containerd[1465]: 2025-05-13 00:30:55.360 [INFO][4092] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bdk6n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:30:55.374774 kernel: bpftool[4191]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:30:55.496286 containerd[1465]: time="2025-05-13T00:30:55.496017171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:55.496286 containerd[1465]: time="2025-05-13T00:30:55.496086792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:55.496286 containerd[1465]: time="2025-05-13T00:30:55.496097532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:55.496286 containerd[1465]: time="2025-05-13T00:30:55.496191889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:55.516830 systemd[1]: Started cri-containerd-b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05.scope - libcontainer container b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05. May 13 00:30:55.529191 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:30:55.558388 containerd[1465]: time="2025-05-13T00:30:55.558340013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bdk6n,Uid:cca342e9-5897-4c4b-a21f-fa72a6bbaed0,Namespace:kube-system,Attempt:1,} returns sandbox id \"b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05\"" May 13 00:30:55.559145 kubelet[2601]: E0513 00:30:55.559122 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:55.561325 containerd[1465]: time="2025-05-13T00:30:55.561288582Z" level=info msg="CreateContainer within sandbox \"b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:30:55.608844 systemd-networkd[1397]: vxlan.calico: Link UP May 13 00:30:55.608852 systemd-networkd[1397]: vxlan.calico: Gained carrier May 13 00:30:56.144055 containerd[1465]: time="2025-05-13T00:30:56.144021276Z" level=info msg="CreateContainer within sandbox \"b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c96726e019427ac662a8d21ca9d0f517c861bc75f1e6d8116718c540689c26ad\"" May 13 00:30:56.144518 containerd[1465]: time="2025-05-13T00:30:56.144494184Z" level=info msg="StartContainer for \"c96726e019427ac662a8d21ca9d0f517c861bc75f1e6d8116718c540689c26ad\"" May 13 00:30:56.171855 systemd[1]: Started cri-containerd-c96726e019427ac662a8d21ca9d0f517c861bc75f1e6d8116718c540689c26ad.scope - libcontainer container c96726e019427ac662a8d21ca9d0f517c861bc75f1e6d8116718c540689c26ad. 
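In the WorkloadEndpoint dumps above, ports print as Go hex literals: Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (the coredns Prometheus metrics port). A trivial decode:

package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpointPort entries above.
	ports := []struct {
		name string
		port uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}}
	for _, p := range ports {
		fmt.Printf("%-8s %d\n", p.name, p.port) // dns 53, dns-tcp 53, metrics 9153
	}
}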
May 13 00:30:56.207436 containerd[1465]: time="2025-05-13T00:30:56.207386346Z" level=info msg="StartContainer for \"c96726e019427ac662a8d21ca9d0f517c861bc75f1e6d8116718c540689c26ad\" returns successfully" May 13 00:30:56.291226 kubelet[2601]: E0513 00:30:56.290157 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:56.448833 kubelet[2601]: I0513 00:30:56.448489 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bdk6n" podStartSLOduration=33.448469483 podStartE2EDuration="33.448469483s" podCreationTimestamp="2025-05-13 00:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:56.400925785 +0000 UTC m=+48.404569926" watchObservedRunningTime="2025-05-13 00:30:56.448469483 +0000 UTC m=+48.452113604" May 13 00:30:57.017882 systemd-networkd[1397]: cali4da8a03272a: Gained IPv6LL May 13 00:30:57.292496 kubelet[2601]: E0513 00:30:57.292299 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:57.529872 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL May 13 00:30:57.713044 kubelet[2601]: I0513 00:30:57.712978 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:30:57.713888 kubelet[2601]: E0513 00:30:57.713823 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:57.814791 systemd[1]: run-containerd-runc-k8s.io-2a3b2235f0d93fe03f5ecb65ecd52aa1c9a0634a3dadae03a8c08a4c14401c64-runc.unInBC.mount: Deactivated successfully. May 13 00:30:58.081816 containerd[1465]: time="2025-05-13T00:30:58.080960272Z" level=info msg="StopPodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\"" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.118 [INFO][4410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.119 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" iface="eth0" netns="/var/run/netns/cni-dac1db2a-0477-1462-4a27-5861245d1867" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.120 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" iface="eth0" netns="/var/run/netns/cni-dac1db2a-0477-1462-4a27-5861245d1867" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.121 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" iface="eth0" netns="/var/run/netns/cni-dac1db2a-0477-1462-4a27-5861245d1867" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.121 [INFO][4410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.121 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.141 [INFO][4419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.141 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.141 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.145 [WARNING][4419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.145 [INFO][4419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.146 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:58.151535 containerd[1465]: 2025-05-13 00:30:58.148 [INFO][4410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:30:58.151948 containerd[1465]: time="2025-05-13T00:30:58.151726251Z" level=info msg="TearDown network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" successfully" May 13 00:30:58.151948 containerd[1465]: time="2025-05-13T00:30:58.151751008Z" level=info msg="StopPodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" returns successfully" May 13 00:30:58.152489 containerd[1465]: time="2025-05-13T00:30:58.152461692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9vdg,Uid:ae4c6d9d-b177-405f-84c6-f30031c5dd17,Namespace:calico-system,Attempt:1,}" May 13 00:30:58.154131 systemd[1]: run-netns-cni\x2ddac1db2a\x2d0477\x2d1462\x2d4a27\x2d5861245d1867.mount: Deactivated successfully. 
May 13 00:30:58.256887 systemd-networkd[1397]: cali3dc0b9bbabb: Link UP May 13 00:30:58.257138 systemd-networkd[1397]: cali3dc0b9bbabb: Gained carrier May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.195 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b9vdg-eth0 csi-node-driver- calico-system ae4c6d9d-b177-405f-84c6-f30031c5dd17 976 0 2025-05-13 00:30:29 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b9vdg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3dc0b9bbabb [] []}} ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.196 [INFO][4426] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.222 [INFO][4443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" HandleID="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.229 [INFO][4443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" HandleID="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5d30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b9vdg", "timestamp":"2025-05-13 00:30:58.222414495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.229 [INFO][4443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.229 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.229 [INFO][4443] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.231 [INFO][4443] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.234 [INFO][4443] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.238 [INFO][4443] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.239 [INFO][4443] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.241 [INFO][4443] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.241 [INFO][4443] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.242 [INFO][4443] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.247 [INFO][4443] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.251 [INFO][4443] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.251 [INFO][4443] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" host="localhost" May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.251 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:30:58.271003 containerd[1465]: 2025-05-13 00:30:58.251 [INFO][4443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" HandleID="k8s-pod-network.b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.272505 containerd[1465]: 2025-05-13 00:30:58.254 [INFO][4426] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9vdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae4c6d9d-b177-405f-84c6-f30031c5dd17", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b9vdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dc0b9bbabb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:58.272505 containerd[1465]: 2025-05-13 00:30:58.254 [INFO][4426] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.272505 containerd[1465]: 2025-05-13 00:30:58.254 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dc0b9bbabb ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.272505 containerd[1465]: 2025-05-13 00:30:58.257 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.272505 containerd[1465]: 2025-05-13 00:30:58.257 [INFO][4426] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9vdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae4c6d9d-b177-405f-84c6-f30031c5dd17", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd", Pod:"csi-node-driver-b9vdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dc0b9bbabb", MAC:"9e:c1:ca:73:be:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:58.272505 containerd[1465]: 2025-05-13 00:30:58.266 [INFO][4426] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd" Namespace="calico-system" Pod="csi-node-driver-b9vdg" WorkloadEndpoint="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:30:58.290410 containerd[1465]: time="2025-05-13T00:30:58.290268617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:58.291025 containerd[1465]: time="2025-05-13T00:30:58.290968070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:58.291025 containerd[1465]: time="2025-05-13T00:30:58.290988969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:58.291216 containerd[1465]: time="2025-05-13T00:30:58.291179026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:58.293614 kubelet[2601]: E0513 00:30:58.293592 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:58.312832 systemd[1]: Started cri-containerd-b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd.scope - libcontainer container b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd. 
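The kubelet error a few entries up ("Nameserver limits exceeded") is kubelet's dns.go reacting to a node resolv.conf that lists more nameservers than the classic resolver limit of three (glibc's MAXNS): it keeps the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8, and warns that the rest were omitted. The Go sketch below shows that truncation; the function name applyNameserverLimit is mine, and the fourth server in the sample input is hypothetical, since the log does not say which entry was dropped.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolver limit (glibc MAXNS) that
// kubelet enforces when building a pod's DNS config; everything past
// the cap is dropped with a warning like the one logged above.
const maxNameservers = 3

func applyNameserverLimit(resolvConf string) (kept, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return kept, omitted
}

func main() {
	// The fourth entry is a hypothetical example of an omitted server.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, omitted := applyNameserverLimit(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", omitted)
}
```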
May 13 00:30:58.323555 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:30:58.333698 containerd[1465]: time="2025-05-13T00:30:58.333605571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b9vdg,Uid:ae4c6d9d-b177-405f-84c6-f30031c5dd17,Namespace:calico-system,Attempt:1,} returns sandbox id \"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd\"" May 13 00:30:58.335397 containerd[1465]: time="2025-05-13T00:30:58.335171933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:30:59.079321 containerd[1465]: time="2025-05-13T00:30:59.079239142Z" level=info msg="StopPodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\"" May 13 00:30:59.079321 containerd[1465]: time="2025-05-13T00:30:59.079282673Z" level=info msg="StopPodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\"" May 13 00:30:59.079762 containerd[1465]: time="2025-05-13T00:30:59.079240294Z" level=info msg="StopPodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\"" May 13 00:30:59.079762 containerd[1465]: time="2025-05-13T00:30:59.079288084Z" level=info msg="StopPodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\"" May 13 00:30:59.114256 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:45764.service - OpenSSH per-connection server daemon (10.0.0.1:45764). May 13 00:30:59.168201 sshd[4595]: Accepted publickey for core from 10.0.0.1 port 45764 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:59.170308 sshd[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:59.177680 systemd-logind[1449]: New session 15 of user core. May 13 00:30:59.183924 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.138 [INFO][4571] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.141 [INFO][4571] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" iface="eth0" netns="/var/run/netns/cni-b2dcb874-e5c9-a2f2-012e-6ddb51927433" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.142 [INFO][4571] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" iface="eth0" netns="/var/run/netns/cni-b2dcb874-e5c9-a2f2-012e-6ddb51927433" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.142 [INFO][4571] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" iface="eth0" netns="/var/run/netns/cni-b2dcb874-e5c9-a2f2-012e-6ddb51927433" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.142 [INFO][4571] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.142 [INFO][4571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.168 [INFO][4603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.168 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.169 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.175 [WARNING][4603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.175 [INFO][4603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.176 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:59.189989 containerd[1465]: 2025-05-13 00:30:59.180 [INFO][4571] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:30:59.193294 containerd[1465]: time="2025-05-13T00:30:59.192389163Z" level=info msg="TearDown network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" successfully" May 13 00:30:59.194436 systemd[1]: run-netns-cni\x2db2dcb874\x2de5c9\x2da2f2\x2d012e\x2d6ddb51927433.mount: Deactivated successfully. May 13 00:30:59.194739 containerd[1465]: time="2025-05-13T00:30:59.192422896Z" level=info msg="StopPodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" returns successfully" May 13 00:30:59.195507 containerd[1465]: time="2025-05-13T00:30:59.195454809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-hzpnr,Uid:fa5c31c3-8ed9-46d3-b4b2-83261ae6da34,Namespace:calico-apiserver,Attempt:1,}" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.146 [INFO][4551] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.147 [INFO][4551] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" iface="eth0" netns="/var/run/netns/cni-994d1d7b-b3ac-d1ea-a9de-d489ed34675d" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.152 [INFO][4551] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" iface="eth0" netns="/var/run/netns/cni-994d1d7b-b3ac-d1ea-a9de-d489ed34675d" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.153 [INFO][4551] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" iface="eth0" netns="/var/run/netns/cni-994d1d7b-b3ac-d1ea-a9de-d489ed34675d" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.153 [INFO][4551] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.153 [INFO][4551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.201 [INFO][4612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.201 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.201 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.206 [WARNING][4612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.206 [INFO][4612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.207 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:59.212470 containerd[1465]: 2025-05-13 00:30:59.210 [INFO][4551] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:30:59.213094 containerd[1465]: time="2025-05-13T00:30:59.213068792Z" level=info msg="TearDown network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" successfully" May 13 00:30:59.213157 containerd[1465]: time="2025-05-13T00:30:59.213142750Z" level=info msg="StopPodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" returns successfully" May 13 00:30:59.213928 containerd[1465]: time="2025-05-13T00:30:59.213891035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-vgcjq,Uid:bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5,Namespace:calico-apiserver,Attempt:1,}" May 13 00:30:59.218625 systemd[1]: run-netns-cni\x2d994d1d7b\x2db3ac\x2dd1ea\x2da9de\x2dd489ed34675d.mount: Deactivated successfully. May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.164 [INFO][4588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.164 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" iface="eth0" netns="/var/run/netns/cni-b003c034-44aa-4c1c-8833-df5ea61c7d60" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.165 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" iface="eth0" netns="/var/run/netns/cni-b003c034-44aa-4c1c-8833-df5ea61c7d60" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.165 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" iface="eth0" netns="/var/run/netns/cni-b003c034-44aa-4c1c-8833-df5ea61c7d60" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.165 [INFO][4588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.165 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.201 [INFO][4624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.202 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.207 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.215 [WARNING][4624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.215 [INFO][4624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.217 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:59.226165 containerd[1465]: 2025-05-13 00:30:59.221 [INFO][4588] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:30:59.226859 containerd[1465]: time="2025-05-13T00:30:59.226824359Z" level=info msg="TearDown network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" successfully" May 13 00:30:59.226986 containerd[1465]: time="2025-05-13T00:30:59.226973028Z" level=info msg="StopPodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" returns successfully" May 13 00:30:59.227368 kubelet[2601]: E0513 00:30:59.227336 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:59.228858 systemd[1]: run-netns-cni\x2db003c034\x2d44aa\x2d4c1c\x2d8833\x2ddf5ea61c7d60.mount: Deactivated successfully. May 13 00:30:59.229911 containerd[1465]: time="2025-05-13T00:30:59.229878564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nntb9,Uid:34fdffa2-72d1-4818-a707-8a94261c161f,Namespace:kube-system,Attempt:1,}" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.153 [INFO][4561] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.154 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" iface="eth0" netns="/var/run/netns/cni-88606e5d-23b4-db9f-7329-86b945e11b74" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.154 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" iface="eth0" netns="/var/run/netns/cni-88606e5d-23b4-db9f-7329-86b945e11b74" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.155 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" iface="eth0" netns="/var/run/netns/cni-88606e5d-23b4-db9f-7329-86b945e11b74" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.155 [INFO][4561] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.156 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.202 [INFO][4614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.202 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.217 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.224 [WARNING][4614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.224 [INFO][4614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.226 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:59.234507 containerd[1465]: 2025-05-13 00:30:59.232 [INFO][4561] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:30:59.235403 containerd[1465]: time="2025-05-13T00:30:59.235373451Z" level=info msg="TearDown network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" successfully" May 13 00:30:59.235403 containerd[1465]: time="2025-05-13T00:30:59.235397666Z" level=info msg="StopPodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" returns successfully" May 13 00:30:59.236179 containerd[1465]: time="2025-05-13T00:30:59.235922111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5574457795-mzvpl,Uid:3f24e1a4-b6cd-453b-b46b-da5f52cab0da,Namespace:calico-system,Attempt:1,}" May 13 00:30:59.350899 sshd[4595]: pam_unix(sshd:session): session closed for user core May 13 00:30:59.362918 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:45764.service: Deactivated successfully. May 13 00:30:59.365352 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:30:59.370934 systemd-networkd[1397]: cali21ce0049aea: Link UP May 13 00:30:59.371216 systemd-networkd[1397]: cali21ce0049aea: Gained carrier May 13 00:30:59.373401 systemd-logind[1449]: Session 15 logged out. 
Waiting for processes to exit. May 13 00:30:59.387380 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:45778.service - OpenSSH per-connection server daemon (10.0.0.1:45778). May 13 00:30:59.390597 systemd-logind[1449]: Removed session 15. May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.254 [INFO][4640] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0 calico-apiserver-7df7d6b8db- calico-apiserver fa5c31c3-8ed9-46d3-b4b2-83261ae6da34 988 0 2025-05-13 00:30:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7df7d6b8db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7df7d6b8db-hzpnr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21ce0049aea [] []}} ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.255 [INFO][4640] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.303 [INFO][4687] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" HandleID="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.310 [INFO][4687] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" HandleID="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003085c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7df7d6b8db-hzpnr", "timestamp":"2025-05-13 00:30:59.30332113 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.310 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.310 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.310 [INFO][4687] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.312 [INFO][4687] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.319 [INFO][4687] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.323 [INFO][4687] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.329 [INFO][4687] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.331 [INFO][4687] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.331 [INFO][4687] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.341 [INFO][4687] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.351 [INFO][4687] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.362 [INFO][4687] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.362 [INFO][4687] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" host="localhost" May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.362 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
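The StopPodSandbox teardowns earlier in this stream show the release side of the same IPAM: under the host-wide lock, the plugin first tries to release by handle ID, logs "Asked to release address but it doesn't exist. Ignoring" when that allocation is already gone, then falls back to releasing by workload ID. That makes CNI DEL idempotent, which matters because the runtime may retry teardown for a sandbox whose addresses were freed on an earlier attempt. Below is a hedged sketch of that fallback against a toy map rather than the real datastore; the store type and release function are illustrative names.

```go
package main

import "log"

// store maps an allocation key (handle ID or workload ID) to the
// addresses claimed under it; a stand-in for the IPAM datastore.
type store map[string][]string

// release tries the handle ID first and the workload ID second,
// treating a missing allocation as a warning rather than an error,
// so a repeated CNI DEL for the same sandbox succeeds harmlessly.
func release(s store, handleID, workloadID string) {
	for _, key := range []string{handleID, workloadID} {
		if ips, ok := s[key]; ok {
			delete(s, key)
			log.Printf("released %v using %s", ips, key)
			return
		}
		log.Printf("WARNING: asked to release %s but it doesn't exist; ignoring", key)
	}
}

func main() {
	s := store{} // nothing allocated: this sandbox was already torn down once
	release(s,
		"k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e",
		"localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0")
}
```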
May 13 00:30:59.395200 containerd[1465]: 2025-05-13 00:30:59.362 [INFO][4687] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" HandleID="k8s-pod-network.0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.395907 containerd[1465]: 2025-05-13 00:30:59.366 [INFO][4640] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7df7d6b8db-hzpnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21ce0049aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.395907 containerd[1465]: 2025-05-13 00:30:59.367 [INFO][4640] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.395907 containerd[1465]: 2025-05-13 00:30:59.367 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21ce0049aea ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.395907 containerd[1465]: 2025-05-13 00:30:59.371 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.395907 containerd[1465]: 2025-05-13 00:30:59.371 [INFO][4640] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab", Pod:"calico-apiserver-7df7d6b8db-hzpnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21ce0049aea", MAC:"8a:48:e7:c9:78:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.395907 containerd[1465]: 2025-05-13 00:30:59.386 [INFO][4640] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-hzpnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:30:59.415265 sshd[4734]: Accepted publickey for core from 10.0.0.1 port 45778 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:59.416942 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:59.421691 systemd-logind[1449]: New session 16 of user core. May 13 00:30:59.427887 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:30:59.434782 containerd[1465]: time="2025-05-13T00:30:59.434450158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:59.434782 containerd[1465]: time="2025-05-13T00:30:59.434512535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:59.434782 containerd[1465]: time="2025-05-13T00:30:59.434526511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.434782 containerd[1465]: time="2025-05-13T00:30:59.434649112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.442002 systemd-networkd[1397]: calidc2109739b4: Link UP May 13 00:30:59.442222 systemd-networkd[1397]: calidc2109739b4: Gained carrier May 13 00:30:59.451812 systemd-networkd[1397]: cali3dc0b9bbabb: Gained IPv6LL May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.289 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0 calico-apiserver-7df7d6b8db- calico-apiserver bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5 989 0 2025-05-13 00:30:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7df7d6b8db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7df7d6b8db-vgcjq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc2109739b4 [] []}} ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.292 [INFO][4653] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.365 [INFO][4700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" HandleID="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.390 [INFO][4700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" HandleID="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048a5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7df7d6b8db-vgcjq", "timestamp":"2025-05-13 00:30:59.365360184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.390 [INFO][4700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.390 [INFO][4700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.390 [INFO][4700] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.395 [INFO][4700] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.400 [INFO][4700] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.405 [INFO][4700] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.407 [INFO][4700] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.409 [INFO][4700] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.409 [INFO][4700] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.411 [INFO][4700] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.414 [INFO][4700] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.419 [INFO][4700] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.421 [INFO][4700] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" host="localhost" May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.421 [INFO][4700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:30:59.457684 containerd[1465]: 2025-05-13 00:30:59.421 [INFO][4700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" HandleID="k8s-pod-network.e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.459025 containerd[1465]: 2025-05-13 00:30:59.429 [INFO][4653] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7df7d6b8db-vgcjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc2109739b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.459025 containerd[1465]: 2025-05-13 00:30:59.430 [INFO][4653] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.459025 containerd[1465]: 2025-05-13 00:30:59.430 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc2109739b4 ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.459025 containerd[1465]: 2025-05-13 00:30:59.441 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.459025 containerd[1465]: 2025-05-13 00:30:59.443 [INFO][4653] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd", Pod:"calico-apiserver-7df7d6b8db-vgcjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc2109739b4", MAC:"f6:f9:21:2b:13:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.459025 containerd[1465]: 2025-05-13 00:30:59.454 [INFO][4653] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd" Namespace="calico-apiserver" Pod="calico-apiserver-7df7d6b8db-vgcjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:30:59.463875 systemd[1]: Started cri-containerd-0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab.scope - libcontainer container 0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab. May 13 00:30:59.475536 systemd-networkd[1397]: calibe45ef59c8d: Link UP May 13 00:30:59.476295 systemd-networkd[1397]: calibe45ef59c8d: Gained carrier May 13 00:30:59.496565 containerd[1465]: time="2025-05-13T00:30:59.495194231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:59.496565 containerd[1465]: time="2025-05-13T00:30:59.495283630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:59.496565 containerd[1465]: time="2025-05-13T00:30:59.495297977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.496565 containerd[1465]: time="2025-05-13T00:30:59.495409626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.496911 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.323 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0 coredns-7db6d8ff4d- kube-system 34fdffa2-72d1-4818-a707-8a94261c161f 991 0 2025-05-13 00:30:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-nntb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe45ef59c8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.324 [INFO][4673] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.368 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" HandleID="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.395 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" HandleID="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f56c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-nntb9", "timestamp":"2025-05-13 00:30:59.368056817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.395 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.422 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.422 [INFO][4716] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.425 [INFO][4716] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.429 [INFO][4716] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.444 [INFO][4716] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.448 [INFO][4716] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.453 [INFO][4716] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.453 [INFO][4716] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.456 [INFO][4716] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026 May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.462 [INFO][4716] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.467 [INFO][4716] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.467 [INFO][4716] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" host="localhost" May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.467 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
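The coredns WorkloadEndpoint dumped in this stream carries its named ports in hex (Port:0x35 and Port:0x23c1). Decoding them confirms the endpoint exposes the usual CoreDNS ports: 53 for dns and dns-tcp, and 9153 for metrics. A one-line check:

```go
package main

import "fmt"

func main() {
	// Hex port values from the coredns WorkloadEndpoint above.
	fmt.Println(0x35, 0x23c1) // 53 9153: DNS and the CoreDNS metrics port
}
```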
May 13 00:30:59.499261 containerd[1465]: 2025-05-13 00:30:59.467 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" HandleID="k8s-pod-network.03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.499745 containerd[1465]: 2025-05-13 00:30:59.471 [INFO][4673] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"34fdffa2-72d1-4818-a707-8a94261c161f", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-nntb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe45ef59c8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.499745 containerd[1465]: 2025-05-13 00:30:59.471 [INFO][4673] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.499745 containerd[1465]: 2025-05-13 00:30:59.471 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe45ef59c8d ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.499745 containerd[1465]: 2025-05-13 00:30:59.477 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.499745 containerd[1465]: 2025-05-13 00:30:59.479 
[INFO][4673] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"34fdffa2-72d1-4818-a707-8a94261c161f", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026", Pod:"coredns-7db6d8ff4d-nntb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe45ef59c8d", MAC:"62:37:30:b7:df:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.499745 containerd[1465]: 2025-05-13 00:30:59.491 [INFO][4673] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nntb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:30:59.515826 systemd-networkd[1397]: calic1d050960e4: Link UP May 13 00:30:59.517018 systemd-networkd[1397]: calic1d050960e4: Gained carrier May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.396 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0 calico-kube-controllers-5574457795- calico-system 3f24e1a4-b6cd-453b-b46b-da5f52cab0da 990 0 2025-05-13 00:30:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5574457795 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5574457795-mzvpl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1d050960e4 [] []}} ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.396 [INFO][4706] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.431 [INFO][4746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" HandleID="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.444 [INFO][4746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" HandleID="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019d270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5574457795-mzvpl", "timestamp":"2025-05-13 00:30:59.431247595 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.444 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.467 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.468 [INFO][4746] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.470 [INFO][4746] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.476 [INFO][4746] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.481 [INFO][4746] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.485 [INFO][4746] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.492 [INFO][4746] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.492 [INFO][4746] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.494 [INFO][4746] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3 May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.499 [INFO][4746] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.505 [INFO][4746] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.505 [INFO][4746] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" host="localhost" May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.505 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
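Note: the interleaved [4716]/[4746] records show the host-wide IPAM lock serializing two concurrent pod ADDs. Handle [4746] logs "About to acquire" at 00:30:59.444 but only acquires at 00:30:59.467, the same instant [4716] releases the lock, and its walk then claims the next free address (192.168.88.134) from the same affine block. The lock is what keeps the two allocations from racing on the shared block document.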
May 13 00:30:59.537994 containerd[1465]: 2025-05-13 00:30:59.505 [INFO][4746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" HandleID="k8s-pod-network.7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.538821 containerd[1465]: 2025-05-13 00:30:59.510 [INFO][4706] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0", GenerateName:"calico-kube-controllers-5574457795-", Namespace:"calico-system", SelfLink:"", UID:"3f24e1a4-b6cd-453b-b46b-da5f52cab0da", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5574457795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5574457795-mzvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1d050960e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.538821 containerd[1465]: 2025-05-13 00:30:59.510 [INFO][4706] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.538821 containerd[1465]: 2025-05-13 00:30:59.510 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1d050960e4 ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.538821 containerd[1465]: 2025-05-13 00:30:59.517 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.538821 containerd[1465]: 2025-05-13 00:30:59.518 [INFO][4706] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0", GenerateName:"calico-kube-controllers-5574457795-", Namespace:"calico-system", SelfLink:"", UID:"3f24e1a4-b6cd-453b-b46b-da5f52cab0da", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5574457795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3", Pod:"calico-kube-controllers-5574457795-mzvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1d050960e4", MAC:"de:b9:10:cf:92:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:30:59.538821 containerd[1465]: 2025-05-13 00:30:59.526 [INFO][4706] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3" Namespace="calico-system" Pod="calico-kube-controllers-5574457795-mzvpl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:30:59.540075 systemd[1]: Started cri-containerd-e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd.scope - libcontainer container e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd. May 13 00:30:59.556146 containerd[1465]: time="2025-05-13T00:30:59.555768616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:59.556356 containerd[1465]: time="2025-05-13T00:30:59.556165762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:59.557796 containerd[1465]: time="2025-05-13T00:30:59.557669436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.557907 containerd[1465]: time="2025-05-13T00:30:59.557830898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.570173 containerd[1465]: time="2025-05-13T00:30:59.570130352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-hzpnr,Uid:fa5c31c3-8ed9-46d3-b4b2-83261ae6da34,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab\"" May 13 00:30:59.571060 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:30:59.583293 containerd[1465]: time="2025-05-13T00:30:59.582980109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:59.583293 containerd[1465]: time="2025-05-13T00:30:59.583035854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:59.583293 containerd[1465]: time="2025-05-13T00:30:59.583063636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.583293 containerd[1465]: time="2025-05-13T00:30:59.583145960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:59.592090 systemd[1]: Started cri-containerd-03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026.scope - libcontainer container 03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026. May 13 00:30:59.606105 systemd[1]: Started cri-containerd-7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3.scope - libcontainer container 7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3. 
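Note: each burst of loading plugin "io.containerd.*" lines above is a runc v2 shim booting for one sandbox, and each Started cri-containerd-<id>.scope line is systemd creating the cgroup scope that tracks that container. The kubelet drives all of this through the CRI, but the same lifecycle can be reproduced against containerd directly; a minimal sketch with the containerd Go client follows (the container ID, snapshot name, and socket path are common defaults, not values from this log):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Creating the task is what spawns the runc v2 shim whose plugin-loading
	// messages appear in the log above; Start actually runs the process.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}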
May 13 00:30:59.613533 containerd[1465]: time="2025-05-13T00:30:59.613492710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7df7d6b8db-vgcjq,Uid:bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd\"" May 13 00:30:59.621848 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:30:59.624277 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:30:59.649621 containerd[1465]: time="2025-05-13T00:30:59.649511358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nntb9,Uid:34fdffa2-72d1-4818-a707-8a94261c161f,Namespace:kube-system,Attempt:1,} returns sandbox id \"03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026\"" May 13 00:30:59.651976 kubelet[2601]: E0513 00:30:59.651945 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:59.655646 containerd[1465]: time="2025-05-13T00:30:59.655491907Z" level=info msg="CreateContainer within sandbox \"03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:30:59.656104 containerd[1465]: time="2025-05-13T00:30:59.656085191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5574457795-mzvpl,Uid:3f24e1a4-b6cd-453b-b46b-da5f52cab0da,Namespace:calico-system,Attempt:1,} returns sandbox id \"7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3\"" May 13 00:30:59.670495 containerd[1465]: time="2025-05-13T00:30:59.670451867Z" level=info msg="CreateContainer within sandbox \"03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17cbc134163e305d6682faece58073b4210b7ec3bd0d8cea83a95764ea0d9c44\"" May 13 00:30:59.671106 containerd[1465]: time="2025-05-13T00:30:59.670882334Z" level=info msg="StartContainer for \"17cbc134163e305d6682faece58073b4210b7ec3bd0d8cea83a95764ea0d9c44\"" May 13 00:30:59.678568 sshd[4734]: pam_unix(sshd:session): session closed for user core May 13 00:30:59.686662 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:45778.service: Deactivated successfully. May 13 00:30:59.688741 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:30:59.690773 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. May 13 00:30:59.701866 systemd[1]: Started cri-containerd-17cbc134163e305d6682faece58073b4210b7ec3bd0d8cea83a95764ea0d9c44.scope - libcontainer container 17cbc134163e305d6682faece58073b4210b7ec3bd0d8cea83a95764ea0d9c44. May 13 00:30:59.703239 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:45790.service - OpenSSH per-connection server daemon (10.0.0.1:45790). May 13 00:30:59.703980 systemd-logind[1449]: Removed session 16. 
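Note: the kubelet's recurring "Nameserver limits exceeded" error above is benign. The glibc resolver, and therefore the resolv.conf the kubelet projects into pods, honours at most three nameserver entries, so any extras on the node are dropped and the applied line is truncated to 1.1.1.1 1.0.0.1 8.8.8.8. An illustrative node resolv.conf that would trigger exactly this message (the fourth entry is hypothetical):

nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
# a fourth line such as "nameserver 9.9.9.9" would be silently dropped,
# and the kubelet would log the "Nameserver limits exceeded" warning above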
May 13 00:30:59.731380 containerd[1465]: time="2025-05-13T00:30:59.731341493Z" level=info msg="StartContainer for \"17cbc134163e305d6682faece58073b4210b7ec3bd0d8cea83a95764ea0d9c44\" returns successfully" May 13 00:30:59.742310 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 45790 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:30:59.744113 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:59.748452 systemd-logind[1449]: New session 17 of user core. May 13 00:30:59.753831 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:30:59.815064 systemd[1]: run-netns-cni\x2d88606e5d\x2d23b4\x2ddb9f\x2d7329\x2d86b945e11b74.mount: Deactivated successfully. May 13 00:31:00.177670 containerd[1465]: time="2025-05-13T00:31:00.177619657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:00.178254 containerd[1465]: time="2025-05-13T00:31:00.178199856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 00:31:00.179361 containerd[1465]: time="2025-05-13T00:31:00.179331361Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:00.181237 containerd[1465]: time="2025-05-13T00:31:00.181209426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:00.181852 containerd[1465]: time="2025-05-13T00:31:00.181812759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.846607854s" May 13 00:31:00.181885 containerd[1465]: time="2025-05-13T00:31:00.181850981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 00:31:00.182951 containerd[1465]: time="2025-05-13T00:31:00.182927622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:31:00.184250 containerd[1465]: time="2025-05-13T00:31:00.184128787Z" level=info msg="CreateContainer within sandbox \"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:31:00.198752 containerd[1465]: time="2025-05-13T00:31:00.198691428Z" level=info msg="CreateContainer within sandbox \"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3b938c49ad651bad9e39ce3fb277b1165848461d4d066f27f5571527a8f42157\"" May 13 00:31:00.199192 containerd[1465]: time="2025-05-13T00:31:00.199161702Z" level=info msg="StartContainer for \"3b938c49ad651bad9e39ce3fb277b1165848461d4d066f27f5571527a8f42157\"" May 13 00:31:00.235846 systemd[1]: Started cri-containerd-3b938c49ad651bad9e39ce3fb277b1165848461d4d066f27f5571527a8f42157.scope - libcontainer container 3b938c49ad651bad9e39ce3fb277b1165848461d4d066f27f5571527a8f42157. 
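Note: rough figures for the csi pull above: 7,912,898 bytes of registry transfer ("bytes read") in 1.846607854s works out to about 4.3 MB/s, unpacking to the reported 9,405,520-byte image. The "bytes read" counter measures compressed content fetched, not the unpacked size.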
May 13 00:31:00.289536 containerd[1465]: time="2025-05-13T00:31:00.289492516Z" level=info msg="StartContainer for \"3b938c49ad651bad9e39ce3fb277b1165848461d4d066f27f5571527a8f42157\" returns successfully" May 13 00:31:00.311901 kubelet[2601]: E0513 00:31:00.311618 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:00.319405 kubelet[2601]: I0513 00:31:00.319073 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nntb9" podStartSLOduration=37.319058496 podStartE2EDuration="37.319058496s" podCreationTimestamp="2025-05-13 00:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:31:00.318637786 +0000 UTC m=+52.322281907" watchObservedRunningTime="2025-05-13 00:31:00.319058496 +0000 UTC m=+52.322702617" May 13 00:31:00.808116 systemd[1]: run-containerd-runc-k8s.io-3b938c49ad651bad9e39ce3fb277b1165848461d4d066f27f5571527a8f42157-runc.6j31uI.mount: Deactivated successfully. May 13 00:31:01.049857 systemd-networkd[1397]: cali21ce0049aea: Gained IPv6LL May 13 00:31:01.050665 systemd-networkd[1397]: calidc2109739b4: Gained IPv6LL May 13 00:31:01.050990 systemd-networkd[1397]: calic1d050960e4: Gained IPv6LL May 13 00:31:01.178022 systemd-networkd[1397]: calibe45ef59c8d: Gained IPv6LL May 13 00:31:01.313487 kubelet[2601]: E0513 00:31:01.313447 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:01.386776 sshd[4986]: pam_unix(sshd:session): session closed for user core May 13 00:31:01.398877 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:45790.service: Deactivated successfully. May 13 00:31:01.400807 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:31:01.402958 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. May 13 00:31:01.413053 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:45800.service - OpenSSH per-connection server daemon (10.0.0.1:45800). May 13 00:31:01.414290 systemd-logind[1449]: Removed session 17. May 13 00:31:01.440859 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 45800 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:31:01.442652 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:01.446582 systemd-logind[1449]: New session 18 of user core. May 13 00:31:01.453805 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:31:01.670267 sshd[5075]: pam_unix(sshd:session): session closed for user core May 13 00:31:01.677973 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:45800.service: Deactivated successfully. May 13 00:31:01.680047 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:31:01.680942 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. May 13 00:31:01.690167 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:45812.service - OpenSSH per-connection server daemon (10.0.0.1:45812). May 13 00:31:01.691385 systemd-logind[1449]: Removed session 18. 
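Note: the pod_startup_latency_tracker line above computes podStartSLOduration as the observed-running time minus the pod creation time; the zero-valued firstStartedPulling/lastFinishedPulling mean no image pull was needed, so the full 37.3s is scheduling plus start latency (most of it the time this coredns pod spent waiting for its Calico network, as the preceding records show). The arithmetic checks out against the log's own timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, time.May, 13, 0, 30, 23, 0, time.UTC)         // podCreationTimestamp
	running := time.Date(2025, time.May, 13, 0, 31, 0, 319058496, time.UTC)  // observedRunningTime
	fmt.Println(running.Sub(created)) // 37.319058496s == podStartSLOduration
}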
May 13 00:31:01.724219 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 45812 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:31:01.726190 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:01.736926 systemd-logind[1449]: New session 19 of user core. May 13 00:31:01.742312 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:31:01.859410 sshd[5087]: pam_unix(sshd:session): session closed for user core May 13 00:31:01.864077 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:45812.service: Deactivated successfully. May 13 00:31:01.866275 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:31:01.866942 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. May 13 00:31:01.867802 systemd-logind[1449]: Removed session 19. May 13 00:31:02.317232 kubelet[2601]: E0513 00:31:02.317201 2601 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:31:03.372477 containerd[1465]: time="2025-05-13T00:31:03.372423320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:03.373285 containerd[1465]: time="2025-05-13T00:31:03.373221718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 00:31:03.374364 containerd[1465]: time="2025-05-13T00:31:03.374337703Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:03.376324 containerd[1465]: time="2025-05-13T00:31:03.376295068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:03.376940 containerd[1465]: time="2025-05-13T00:31:03.376902408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.193945882s" May 13 00:31:03.376974 containerd[1465]: time="2025-05-13T00:31:03.376939337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:31:03.378008 containerd[1465]: time="2025-05-13T00:31:03.377967757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:31:03.379030 containerd[1465]: time="2025-05-13T00:31:03.378994936Z" level=info msg="CreateContainer within sandbox \"0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:31:03.392575 containerd[1465]: time="2025-05-13T00:31:03.392525265Z" level=info msg="CreateContainer within sandbox \"0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6fbacb39a63f4d1f5ad7e9dab61bc425d205278a66ca8c6b90f34d2a724b1fc9\"" May 13 00:31:03.393006 
containerd[1465]: time="2025-05-13T00:31:03.392965110Z" level=info msg="StartContainer for \"6fbacb39a63f4d1f5ad7e9dab61bc425d205278a66ca8c6b90f34d2a724b1fc9\"" May 13 00:31:03.422834 systemd[1]: Started cri-containerd-6fbacb39a63f4d1f5ad7e9dab61bc425d205278a66ca8c6b90f34d2a724b1fc9.scope - libcontainer container 6fbacb39a63f4d1f5ad7e9dab61bc425d205278a66ca8c6b90f34d2a724b1fc9. May 13 00:31:03.461426 containerd[1465]: time="2025-05-13T00:31:03.461386670Z" level=info msg="StartContainer for \"6fbacb39a63f4d1f5ad7e9dab61bc425d205278a66ca8c6b90f34d2a724b1fc9\" returns successfully" May 13 00:31:03.918865 containerd[1465]: time="2025-05-13T00:31:03.918809362Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:03.919795 containerd[1465]: time="2025-05-13T00:31:03.919727957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 00:31:03.922271 containerd[1465]: time="2025-05-13T00:31:03.922224693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 544.212211ms" May 13 00:31:03.922271 containerd[1465]: time="2025-05-13T00:31:03.922261883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:31:03.927938 containerd[1465]: time="2025-05-13T00:31:03.927876072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:31:03.928990 containerd[1465]: time="2025-05-13T00:31:03.928957201Z" level=info msg="CreateContainer within sandbox \"e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:31:03.945137 containerd[1465]: time="2025-05-13T00:31:03.945086059Z" level=info msg="CreateContainer within sandbox \"e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"666844107af1196b6faa1a7b24f74a44b6dd7b08f8857990284930faf8a28d94\"" May 13 00:31:03.945775 containerd[1465]: time="2025-05-13T00:31:03.945741810Z" level=info msg="StartContainer for \"666844107af1196b6faa1a7b24f74a44b6dd7b08f8857990284930faf8a28d94\"" May 13 00:31:03.972899 systemd[1]: Started cri-containerd-666844107af1196b6faa1a7b24f74a44b6dd7b08f8857990284930faf8a28d94.scope - libcontainer container 666844107af1196b6faa1a7b24f74a44b6dd7b08f8857990284930faf8a28d94. 
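Note: the second apiserver pull above is evidently the cache-hit case. The image was fetched in full for the first pod (43,021,437 bytes read in 3.193945882s, roughly 13.5 MB/s), so for the second pod containerd only re-resolves the manifest (bytes read=77, an ImageUpdate rather than ImageCreate event) and reports the "pull" complete in 544.212211ms; both calico-apiserver replicas share one image in the content store.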
May 13 00:31:04.132802 containerd[1465]: time="2025-05-13T00:31:04.132666708Z" level=info msg="StartContainer for \"666844107af1196b6faa1a7b24f74a44b6dd7b08f8857990284930faf8a28d94\" returns successfully" May 13 00:31:04.335345 kubelet[2601]: I0513 00:31:04.334914 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7df7d6b8db-vgcjq" podStartSLOduration=31.027774369 podStartE2EDuration="35.334891351s" podCreationTimestamp="2025-05-13 00:30:29 +0000 UTC" firstStartedPulling="2025-05-13 00:30:59.616005287 +0000 UTC m=+51.619649408" lastFinishedPulling="2025-05-13 00:31:03.923122269 +0000 UTC m=+55.926766390" observedRunningTime="2025-05-13 00:31:04.334791043 +0000 UTC m=+56.338435164" watchObservedRunningTime="2025-05-13 00:31:04.334891351 +0000 UTC m=+56.338535502" May 13 00:31:04.350252 kubelet[2601]: I0513 00:31:04.348900 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7df7d6b8db-hzpnr" podStartSLOduration=31.542924496 podStartE2EDuration="35.348881822s" podCreationTimestamp="2025-05-13 00:30:29 +0000 UTC" firstStartedPulling="2025-05-13 00:30:59.571773136 +0000 UTC m=+51.575417257" lastFinishedPulling="2025-05-13 00:31:03.377730462 +0000 UTC m=+55.381374583" observedRunningTime="2025-05-13 00:31:04.346853454 +0000 UTC m=+56.350497585" watchObservedRunningTime="2025-05-13 00:31:04.348881822 +0000 UTC m=+56.352525943" May 13 00:31:05.326438 kubelet[2601]: I0513 00:31:05.326393 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:31:06.719461 containerd[1465]: time="2025-05-13T00:31:06.719390070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:06.720217 containerd[1465]: time="2025-05-13T00:31:06.720175665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 00:31:06.721378 containerd[1465]: time="2025-05-13T00:31:06.721350209Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:06.723615 containerd[1465]: time="2025-05-13T00:31:06.723570156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:06.724145 containerd[1465]: time="2025-05-13T00:31:06.724103035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.796180986s" May 13 00:31:06.724145 containerd[1465]: time="2025-05-13T00:31:06.724134574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 00:31:06.725587 containerd[1465]: time="2025-05-13T00:31:06.725563537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:31:06.732428 containerd[1465]: 
time="2025-05-13T00:31:06.732375873Z" level=info msg="CreateContainer within sandbox \"7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:31:06.747801 containerd[1465]: time="2025-05-13T00:31:06.747745180Z" level=info msg="CreateContainer within sandbox \"7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bc8379e89d1761ae785e796b691d25e151878305281178f7eaa813a3ec049889\"" May 13 00:31:06.748298 containerd[1465]: time="2025-05-13T00:31:06.748265155Z" level=info msg="StartContainer for \"bc8379e89d1761ae785e796b691d25e151878305281178f7eaa813a3ec049889\"" May 13 00:31:06.799883 systemd[1]: Started cri-containerd-bc8379e89d1761ae785e796b691d25e151878305281178f7eaa813a3ec049889.scope - libcontainer container bc8379e89d1761ae785e796b691d25e151878305281178f7eaa813a3ec049889. May 13 00:31:06.845483 containerd[1465]: time="2025-05-13T00:31:06.845343174Z" level=info msg="StartContainer for \"bc8379e89d1761ae785e796b691d25e151878305281178f7eaa813a3ec049889\" returns successfully" May 13 00:31:06.872237 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:45814.service - OpenSSH per-connection server daemon (10.0.0.1:45814). May 13 00:31:06.933614 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 45814 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:31:06.935482 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:06.940138 systemd-logind[1449]: New session 20 of user core. May 13 00:31:06.947880 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:31:07.063608 sshd[5242]: pam_unix(sshd:session): session closed for user core May 13 00:31:07.067992 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:45814.service: Deactivated successfully. May 13 00:31:07.070021 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:31:07.070640 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. May 13 00:31:07.071459 systemd-logind[1449]: Removed session 20. May 13 00:31:07.417983 kubelet[2601]: I0513 00:31:07.417910 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5574457795-mzvpl" podStartSLOduration=30.349710508 podStartE2EDuration="37.417857635s" podCreationTimestamp="2025-05-13 00:30:30 +0000 UTC" firstStartedPulling="2025-05-13 00:30:59.657214772 +0000 UTC m=+51.660858893" lastFinishedPulling="2025-05-13 00:31:06.725361899 +0000 UTC m=+58.729006020" observedRunningTime="2025-05-13 00:31:07.349746461 +0000 UTC m=+59.353390582" watchObservedRunningTime="2025-05-13 00:31:07.417857635 +0000 UTC m=+59.421501756" May 13 00:31:08.069117 containerd[1465]: time="2025-05-13T00:31:08.069061889Z" level=info msg="StopPodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\"" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.104 [WARNING][5294] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab", Pod:"calico-apiserver-7df7d6b8db-hzpnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21ce0049aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.105 [INFO][5294] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.105 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" iface="eth0" netns="" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.105 [INFO][5294] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.105 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.131 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.132 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.132 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.137 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.137 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.138 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.143352 containerd[1465]: 2025-05-13 00:31:08.140 [INFO][5294] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.144098 containerd[1465]: time="2025-05-13T00:31:08.143378374Z" level=info msg="TearDown network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" successfully" May 13 00:31:08.144098 containerd[1465]: time="2025-05-13T00:31:08.143406988Z" level=info msg="StopPodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" returns successfully" May 13 00:31:08.144098 containerd[1465]: time="2025-05-13T00:31:08.143960276Z" level=info msg="RemovePodSandbox for \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\"" May 13 00:31:08.146543 containerd[1465]: time="2025-05-13T00:31:08.146511324Z" level=info msg="Forcibly stopping sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\"" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.180 [WARNING][5326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa5c31c3-8ed9-46d3-b4b2-83261ae6da34", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ef0fe7725b2ebbbb8457b0691130f2ca695042d0359cea211c2695e0b0e8cab", Pod:"calico-apiserver-7df7d6b8db-hzpnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21ce0049aea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.180 [INFO][5326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.180 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" iface="eth0" netns="" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.180 [INFO][5326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.180 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.201 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.201 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.201 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.206 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.206 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" HandleID="k8s-pod-network.93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--hzpnr-eth0" May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.207 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.213821 containerd[1465]: 2025-05-13 00:31:08.210 [INFO][5326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e" May 13 00:31:08.213821 containerd[1465]: time="2025-05-13T00:31:08.213077644Z" level=info msg="TearDown network for sandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" successfully" May 13 00:31:08.230257 containerd[1465]: time="2025-05-13T00:31:08.230213985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:31:08.230392 containerd[1465]: time="2025-05-13T00:31:08.230287653Z" level=info msg="RemovePodSandbox \"93d3d1892000cc9f636ee569ea4cb5a4b39bb753774e331ffc2088f95cce997e\" returns successfully" May 13 00:31:08.230835 containerd[1465]: time="2025-05-13T00:31:08.230798452Z" level=info msg="StopPodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\"" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.266 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9vdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae4c6d9d-b177-405f-84c6-f30031c5dd17", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd", Pod:"csi-node-driver-b9vdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dc0b9bbabb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.267 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.267 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" iface="eth0" netns="" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.267 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.267 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.287 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.287 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.287 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.292 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.292 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.293 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.298398 containerd[1465]: 2025-05-13 00:31:08.296 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.298852 containerd[1465]: time="2025-05-13T00:31:08.298434569Z" level=info msg="TearDown network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" successfully" May 13 00:31:08.298852 containerd[1465]: time="2025-05-13T00:31:08.298463834Z" level=info msg="StopPodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" returns successfully" May 13 00:31:08.299027 containerd[1465]: time="2025-05-13T00:31:08.299001353Z" level=info msg="RemovePodSandbox for \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\"" May 13 00:31:08.299070 containerd[1465]: time="2025-05-13T00:31:08.299035187Z" level=info msg="Forcibly stopping sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\"" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.335 [WARNING][5392] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b9vdg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae4c6d9d-b177-405f-84c6-f30031c5dd17", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd", Pod:"csi-node-driver-b9vdg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3dc0b9bbabb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.335 [INFO][5392] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.335 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" iface="eth0" netns="" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.335 [INFO][5392] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.335 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.357 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.357 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.357 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.362 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.362 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" HandleID="k8s-pod-network.f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" Workload="localhost-k8s-csi--node--driver--b9vdg-eth0" May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.363 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.368659 containerd[1465]: 2025-05-13 00:31:08.366 [INFO][5392] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2" May 13 00:31:08.369052 containerd[1465]: time="2025-05-13T00:31:08.368691646Z" level=info msg="TearDown network for sandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" successfully" May 13 00:31:08.374413 containerd[1465]: time="2025-05-13T00:31:08.374340608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:31:08.374413 containerd[1465]: time="2025-05-13T00:31:08.374414036Z" level=info msg="RemovePodSandbox \"f0e3ffa1dd6a2c0d11f61af8eba86adeb95721a45a151cc02b208ba88feb7ff2\" returns successfully" May 13 00:31:08.375059 containerd[1465]: time="2025-05-13T00:31:08.375025503Z" level=info msg="StopPodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\"" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.410 [WARNING][5422] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cca342e9-5897-4c4b-a21f-fa72a6bbaed0", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05", Pod:"coredns-7db6d8ff4d-bdk6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4da8a03272a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.410 [INFO][5422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.410 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" iface="eth0" netns="" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.410 [INFO][5422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.410 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.433 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.433 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.433 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.438 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.438 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.439 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.444805 containerd[1465]: 2025-05-13 00:31:08.442 [INFO][5422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.445475 containerd[1465]: time="2025-05-13T00:31:08.444832915Z" level=info msg="TearDown network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" successfully" May 13 00:31:08.445475 containerd[1465]: time="2025-05-13T00:31:08.444861298Z" level=info msg="StopPodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" returns successfully" May 13 00:31:08.445475 containerd[1465]: time="2025-05-13T00:31:08.445387526Z" level=info msg="RemovePodSandbox for \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\"" May 13 00:31:08.445475 containerd[1465]: time="2025-05-13T00:31:08.445421831Z" level=info msg="Forcibly stopping sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\"" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.481 [WARNING][5452] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cca342e9-5897-4c4b-a21f-fa72a6bbaed0", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60d3e6ea44daf2ffb7647f82573962c76a7f4d1d0e30f208031ed47b6cbba05", Pod:"coredns-7db6d8ff4d-bdk6n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4da8a03272a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.482 [INFO][5452] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.482 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" iface="eth0" netns="" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.482 [INFO][5452] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.482 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.505 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.505 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.505 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.510 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.510 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" HandleID="k8s-pod-network.76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" Workload="localhost-k8s-coredns--7db6d8ff4d--bdk6n-eth0" May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.511 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.517316 containerd[1465]: 2025-05-13 00:31:08.514 [INFO][5452] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c" May 13 00:31:08.517879 containerd[1465]: time="2025-05-13T00:31:08.517366975Z" level=info msg="TearDown network for sandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" successfully" May 13 00:31:08.521838 containerd[1465]: time="2025-05-13T00:31:08.521776972Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:31:08.521959 containerd[1465]: time="2025-05-13T00:31:08.521862222Z" level=info msg="RemovePodSandbox \"76ba1dee43b25c1f50ae110f899e5192ad75e7ca23c820406ca7fcb5d202611c\" returns successfully" May 13 00:31:08.522431 containerd[1465]: time="2025-05-13T00:31:08.522387107Z" level=info msg="StopPodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\"" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.558 [WARNING][5482] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"34fdffa2-72d1-4818-a707-8a94261c161f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026", Pod:"coredns-7db6d8ff4d-nntb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe45ef59c8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.558 [INFO][5482] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.558 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" iface="eth0" netns="" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.558 [INFO][5482] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.558 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.578 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.578 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.578 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.583 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.583 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.585 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.589968 containerd[1465]: 2025-05-13 00:31:08.587 [INFO][5482] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.590452 containerd[1465]: time="2025-05-13T00:31:08.590009088Z" level=info msg="TearDown network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" successfully" May 13 00:31:08.590452 containerd[1465]: time="2025-05-13T00:31:08.590040988Z" level=info msg="StopPodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" returns successfully" May 13 00:31:08.590631 containerd[1465]: time="2025-05-13T00:31:08.590589788Z" level=info msg="RemovePodSandbox for \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\"" May 13 00:31:08.590660 containerd[1465]: time="2025-05-13T00:31:08.590629843Z" level=info msg="Forcibly stopping sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\"" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.628 [WARNING][5513] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"34fdffa2-72d1-4818-a707-8a94261c161f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03b49730663c8c6c65df5e5c9ef455064d6ba99ce014789eba057ef3c1371026", Pod:"coredns-7db6d8ff4d-nntb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe45ef59c8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.628 [INFO][5513] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.628 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" iface="eth0" netns="" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.628 [INFO][5513] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.628 [INFO][5513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.650 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.651 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.651 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.656 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.656 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" HandleID="k8s-pod-network.041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" Workload="localhost-k8s-coredns--7db6d8ff4d--nntb9-eth0" May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.657 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.662844 containerd[1465]: 2025-05-13 00:31:08.660 [INFO][5513] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202" May 13 00:31:08.662844 containerd[1465]: time="2025-05-13T00:31:08.662799490Z" level=info msg="TearDown network for sandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" successfully" May 13 00:31:08.667636 containerd[1465]: time="2025-05-13T00:31:08.667601782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:31:08.667693 containerd[1465]: time="2025-05-13T00:31:08.667650634Z" level=info msg="RemovePodSandbox \"041f1379d010e4fe14b5afb6f1c314cf0f245f17acc47e3a9621b6e07f69b202\" returns successfully" May 13 00:31:08.668160 containerd[1465]: time="2025-05-13T00:31:08.668127088Z" level=info msg="StopPodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\"" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.706 [WARNING][5544] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0", GenerateName:"calico-kube-controllers-5574457795-", Namespace:"calico-system", SelfLink:"", UID:"3f24e1a4-b6cd-453b-b46b-da5f52cab0da", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5574457795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3", Pod:"calico-kube-controllers-5574457795-mzvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1d050960e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.706 [INFO][5544] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.706 [INFO][5544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" iface="eth0" netns="" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.706 [INFO][5544] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.706 [INFO][5544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.728 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.728 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.728 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.733 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.733 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.735 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.739598 containerd[1465]: 2025-05-13 00:31:08.737 [INFO][5544] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.740212 containerd[1465]: time="2025-05-13T00:31:08.740153135Z" level=info msg="TearDown network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" successfully" May 13 00:31:08.740212 containerd[1465]: time="2025-05-13T00:31:08.740197378Z" level=info msg="StopPodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" returns successfully" May 13 00:31:08.742662 containerd[1465]: time="2025-05-13T00:31:08.742613201Z" level=info msg="RemovePodSandbox for \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\"" May 13 00:31:08.742662 containerd[1465]: time="2025-05-13T00:31:08.742663265Z" level=info msg="Forcibly stopping sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\"" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.779 [WARNING][5574] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0", GenerateName:"calico-kube-controllers-5574457795-", Namespace:"calico-system", SelfLink:"", UID:"3f24e1a4-b6cd-453b-b46b-da5f52cab0da", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5574457795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7473c030a1d96b3642b138c3dc5095210384d165322b1a50b1ef918b98a8b7b3", Pod:"calico-kube-controllers-5574457795-mzvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1d050960e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.779 [INFO][5574] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.779 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" iface="eth0" netns="" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.779 [INFO][5574] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.779 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.798 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.798 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.798 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.803 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.803 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" HandleID="k8s-pod-network.cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" Workload="localhost-k8s-calico--kube--controllers--5574457795--mzvpl-eth0" May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.804 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:08.810045 containerd[1465]: 2025-05-13 00:31:08.807 [INFO][5574] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1" May 13 00:31:08.810593 containerd[1465]: time="2025-05-13T00:31:08.810090100Z" level=info msg="TearDown network for sandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" successfully" May 13 00:31:09.292525 containerd[1465]: time="2025-05-13T00:31:09.292463136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:31:09.292525 containerd[1465]: time="2025-05-13T00:31:09.292530132Z" level=info msg="RemovePodSandbox \"cf504c80d9c2742f4533bb88aee4e0042c364525cad9d9365761c9e6a6a909c1\" returns successfully" May 13 00:31:09.293575 containerd[1465]: time="2025-05-13T00:31:09.293519018Z" level=info msg="StopPodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\"" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.337 [WARNING][5607] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd", Pod:"calico-apiserver-7df7d6b8db-vgcjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc2109739b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.338 [INFO][5607] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.338 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" iface="eth0" netns="" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.338 [INFO][5607] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.338 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.378 [INFO][5615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.378 [INFO][5615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.378 [INFO][5615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.382 [WARNING][5615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.382 [INFO][5615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.384 [INFO][5615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:09.389768 containerd[1465]: 2025-05-13 00:31:09.386 [INFO][5607] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.390531 containerd[1465]: time="2025-05-13T00:31:09.389805950Z" level=info msg="TearDown network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" successfully" May 13 00:31:09.390531 containerd[1465]: time="2025-05-13T00:31:09.389833281Z" level=info msg="StopPodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" returns successfully" May 13 00:31:09.390646 containerd[1465]: time="2025-05-13T00:31:09.390597485Z" level=info msg="RemovePodSandbox for \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\"" May 13 00:31:09.390646 containerd[1465]: time="2025-05-13T00:31:09.390623143Z" level=info msg="Forcibly stopping sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\"" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.428 [WARNING][5638] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0", GenerateName:"calico-apiserver-7df7d6b8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc0f314a-a9ef-4b3b-ac1b-b2957bf09ae5", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7df7d6b8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4ca8cc3811a5340f9d234fc0d59eb981870792df6a2670a615686fb11c5a2dd", Pod:"calico-apiserver-7df7d6b8db-vgcjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc2109739b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.428 [INFO][5638] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.428 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" iface="eth0" netns="" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.428 [INFO][5638] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.428 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.452 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.452 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.452 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.458 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.458 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" HandleID="k8s-pod-network.cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" Workload="localhost-k8s-calico--apiserver--7df7d6b8db--vgcjq-eth0" May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.459 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:31:09.463845 containerd[1465]: 2025-05-13 00:31:09.461 [INFO][5638] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046" May 13 00:31:09.464235 containerd[1465]: time="2025-05-13T00:31:09.463908549Z" level=info msg="TearDown network for sandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" successfully" May 13 00:31:09.503838 containerd[1465]: time="2025-05-13T00:31:09.503773043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:31:09.503933 containerd[1465]: time="2025-05-13T00:31:09.503857862Z" level=info msg="RemovePodSandbox \"cbb8672036b3e30c674e1f366fb54abfe14354106f88c953de77363c70c15046\" returns successfully" May 13 00:31:09.507644 containerd[1465]: time="2025-05-13T00:31:09.507591328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:09.513146 containerd[1465]: time="2025-05-13T00:31:09.512130457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 00:31:09.514409 containerd[1465]: time="2025-05-13T00:31:09.514257950Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:09.521812 containerd[1465]: time="2025-05-13T00:31:09.521758656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:31:09.522630 containerd[1465]: time="2025-05-13T00:31:09.522580789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.796986475s" May 13 00:31:09.522630 containerd[1465]: time="2025-05-13T00:31:09.522620093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 00:31:09.524736 containerd[1465]: time="2025-05-13T00:31:09.524684938Z" level=info 
msg="CreateContainer within sandbox \"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:31:09.658441 containerd[1465]: time="2025-05-13T00:31:09.658375335Z" level=info msg="CreateContainer within sandbox \"b34c7a3b19f70821f55d36cebc60b861252411a2fe648c666629738ccebdd1cd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"095f3e1007e9815e7aada7c4ac81cfbac229e9b655f929dc8500c5c71a1440bb\"" May 13 00:31:09.658924 containerd[1465]: time="2025-05-13T00:31:09.658866297Z" level=info msg="StartContainer for \"095f3e1007e9815e7aada7c4ac81cfbac229e9b655f929dc8500c5c71a1440bb\"" May 13 00:31:09.691835 systemd[1]: Started cri-containerd-095f3e1007e9815e7aada7c4ac81cfbac229e9b655f929dc8500c5c71a1440bb.scope - libcontainer container 095f3e1007e9815e7aada7c4ac81cfbac229e9b655f929dc8500c5c71a1440bb. May 13 00:31:09.851217 containerd[1465]: time="2025-05-13T00:31:09.851153895Z" level=info msg="StartContainer for \"095f3e1007e9815e7aada7c4ac81cfbac229e9b655f929dc8500c5c71a1440bb\" returns successfully" May 13 00:31:10.145524 kubelet[2601]: I0513 00:31:10.145472 2601 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:31:10.145524 kubelet[2601]: I0513 00:31:10.145515 2601 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:31:10.368192 kubelet[2601]: I0513 00:31:10.368079 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b9vdg" podStartSLOduration=30.179513581 podStartE2EDuration="41.368054412s" podCreationTimestamp="2025-05-13 00:30:29 +0000 UTC" firstStartedPulling="2025-05-13 00:30:58.334960345 +0000 UTC m=+50.338604466" lastFinishedPulling="2025-05-13 00:31:09.523501176 +0000 UTC m=+61.527145297" observedRunningTime="2025-05-13 00:31:10.36758422 +0000 UTC m=+62.371228341" watchObservedRunningTime="2025-05-13 00:31:10.368054412 +0000 UTC m=+62.371698543" May 13 00:31:12.074405 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:39874.service - OpenSSH per-connection server daemon (10.0.0.1:39874). May 13 00:31:12.124986 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 39874 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:31:12.126643 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:12.130530 systemd-logind[1449]: New session 21 of user core. May 13 00:31:12.138836 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:31:12.249939 sshd[5695]: pam_unix(sshd:session): session closed for user core May 13 00:31:12.254223 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:39874.service: Deactivated successfully. May 13 00:31:12.256051 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:31:12.256860 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. May 13 00:31:12.257762 systemd-logind[1449]: Removed session 21. May 13 00:31:15.991846 kubelet[2601]: I0513 00:31:15.991797 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:31:17.261871 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:39884.service - OpenSSH per-connection server daemon (10.0.0.1:39884). 
May 13 00:31:17.292119 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 39884 ssh2: RSA SHA256:C8EB+qIBpDYbEudkwL+hXgYkYPlLQFWTQCbVJQyY2dw May 13 00:31:17.294282 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:31:17.298747 systemd-logind[1449]: New session 22 of user core. May 13 00:31:17.308890 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:31:17.419275 sshd[5738]: pam_unix(sshd:session): session closed for user core May 13 00:31:17.423675 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:39884.service: Deactivated successfully. May 13 00:31:17.425829 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:31:17.426445 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. May 13 00:31:17.427281 systemd-logind[1449]: Removed session 22.
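[Editor's note] The repeated StopPodSandbox / RemovePodSandbox records earlier in this section all follow one fixed Calico CNI DEL pattern per sandbox: acquire the host-wide IPAM lock, try to release the address by handleID, fall back to releasing by workloadID when the handle is already gone (the benign "Asked to release address but it doesn't exist" warnings), release the lock, and report teardown complete. The sketch below is a self-contained toy that mirrors that control flow only; it is not Calico's implementation, and every type and name in it is hypothetical.

// Toy model of the IPAM release pattern in the ipam_plugin.go records above:
// take a host-wide lock, release by handle, fall back to the workload ID.
// All types and names here are made up; this is not libcalico-go.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("address not found")

type ipamStore struct {
	mu         sync.Mutex // stands in for the "host-wide IPAM lock"
	byHandle   map[string]string
	byWorkload map[string]string
}

func (s *ipamStore) releaseByHandle(handleID string) error {
	addr, ok := s.byHandle[handleID]
	if !ok {
		return errNotFound
	}
	delete(s.byHandle, handleID)
	fmt.Printf("released %s via handle %s\n", addr, handleID)
	return nil
}

func (s *ipamStore) releaseByWorkload(workloadID string) error {
	addr, ok := s.byWorkload[workloadID]
	if !ok {
		return errNotFound
	}
	delete(s.byWorkload, workloadID)
	fmt.Printf("released %s via workload %s\n", addr, workloadID)
	return nil
}

// cmdDel mirrors the logged DEL sequence: lock, try handleID, warn and fall
// back to workloadID, unlock, then report teardown complete.
func (s *ipamStore) cmdDel(handleID, workloadID string) {
	s.mu.Lock() // "Acquired host-wide IPAM lock."
	defer func() {
		s.mu.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()

	if err := s.releaseByHandle(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; trying workload ID")
		if err := s.releaseByWorkload(workloadID); errors.Is(err, errNotFound) {
			fmt.Println("nothing to release; teardown is idempotent")
		}
	}
}

func main() {
	s := &ipamStore{
		byHandle:   map[string]string{},
		byWorkload: map[string]string{"csi--node--driver--b9vdg-eth0": "192.168.88.130"},
	}
	// Handle already released by an earlier teardown, as in the log above.
	s.cmdDel("k8s-pod-network.f0e3ffa1dd6a", "csi--node--driver--b9vdg-eth0")
	fmt.Println("Teardown processing complete.")
}

The fallback-and-ignore shape matters because the kubelet retries sandbox removal (note the "Forcibly stopping sandbox" records): the release path has to be idempotent, so an already-released address is logged as a WARNING and skipped rather than treated as a failure.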