Jun 20 19:17:10.844579 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:17:10.844618 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:17:10.844631 kernel: BIOS-provided physical RAM map:
Jun 20 19:17:10.844640 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 20 19:17:10.844649 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 20 19:17:10.844658 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 20 19:17:10.844669 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jun 20 19:17:10.844681 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jun 20 19:17:10.844695 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jun 20 19:17:10.844704 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jun 20 19:17:10.844713 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:17:10.844723 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 20 19:17:10.844732 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:17:10.844758 kernel: NX (Execute Disable) protection: active
Jun 20 19:17:10.844774 kernel: APIC: Static calls initialized
Jun 20 19:17:10.844784 kernel: SMBIOS 2.8 present.
Jun 20 19:17:10.844798 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jun 20 19:17:10.844808 kernel: DMI: Memory slots populated: 1/1
Jun 20 19:17:10.844817 kernel: Hypervisor detected: KVM
Jun 20 19:17:10.844827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:17:10.844836 kernel: kvm-clock: using sched offset of 6020418164 cycles
Jun 20 19:17:10.844847 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:17:10.844858 kernel: tsc: Detected 2794.748 MHz processor
Jun 20 19:17:10.844872 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:17:10.844882 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:17:10.844892 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jun 20 19:17:10.844903 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 20 19:17:10.844913 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:17:10.844923 kernel: Using GB pages for direct mapping
Jun 20 19:17:10.844933 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:17:10.844943 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jun 20 19:17:10.844953 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.844967 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.844977 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.844997 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jun 20 19:17:10.845007 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.845018 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.845028 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.845038 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:17:10.845049 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jun 20 19:17:10.845067 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jun 20 19:17:10.845078 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jun 20 19:17:10.845088 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jun 20 19:17:10.845099 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jun 20 19:17:10.845110 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jun 20 19:17:10.845120 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jun 20 19:17:10.845134 kernel: No NUMA configuration found
Jun 20 19:17:10.845145 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jun 20 19:17:10.845155 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jun 20 19:17:10.845166 kernel: Zone ranges:
Jun 20 19:17:10.845177 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:17:10.845187 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jun 20 19:17:10.845198 kernel: Normal empty
Jun 20 19:17:10.845208 kernel: Device empty
Jun 20 19:17:10.845219 kernel: Movable zone start for each node
Jun 20 19:17:10.845229 kernel: Early memory node ranges
Jun 20 19:17:10.845244 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 20 19:17:10.845254 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jun 20 19:17:10.845264 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jun 20 19:17:10.845275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:17:10.845285 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 19:17:10.845296 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jun 20 19:17:10.845307 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 19:17:10.845322 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:17:10.845333 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:17:10.845346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 19:17:10.845356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:17:10.845369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:17:10.845379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:17:10.845389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:17:10.845399 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:17:10.845409 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 20 19:17:10.845420 kernel: TSC deadline timer available
Jun 20 19:17:10.845430 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:17:10.845456 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:17:10.845467 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:17:10.845489 kernel: CPU topo: Max. threads per core: 1
Jun 20 19:17:10.845499 kernel: CPU topo: Num. cores per package: 4
Jun 20 19:17:10.845509 kernel: CPU topo: Num. threads per package: 4
Jun 20 19:17:10.845520 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jun 20 19:17:10.845530 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 19:17:10.845540 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 20 19:17:10.845555 kernel: kvm-guest: setup PV sched yield
Jun 20 19:17:10.845565 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jun 20 19:17:10.845580 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:17:10.845590 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:17:10.845602 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 20 19:17:10.845612 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jun 20 19:17:10.845622 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jun 20 19:17:10.845632 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 20 19:17:10.845642 kernel: kvm-guest: PV spinlocks enabled
Jun 20 19:17:10.845652 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:17:10.845664 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:17:10.845678 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:17:10.845689 kernel: random: crng init done
Jun 20 19:17:10.845699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:17:10.845710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:17:10.845720 kernel: Fallback order for Node 0: 0
Jun 20 19:17:10.845731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jun 20 19:17:10.845760 kernel: Policy zone: DMA32
Jun 20 19:17:10.845771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:17:10.845784 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 20 19:17:10.845795 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:17:10.845805 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:17:10.845816 kernel: Dynamic Preempt: voluntary
Jun 20 19:17:10.845826 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:17:10.845838 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:17:10.845849 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 20 19:17:10.845859 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:17:10.845874 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:17:10.845888 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:17:10.845898 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:17:10.845909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 20 19:17:10.845920 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:17:10.845931 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:17:10.845941 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:17:10.845952 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 20 19:17:10.845963 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:17:10.845994 kernel: Console: colour VGA+ 80x25
Jun 20 19:17:10.846006 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:17:10.846017 kernel: ACPI: Core revision 20240827
Jun 20 19:17:10.846028 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 20 19:17:10.846042 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:17:10.846052 kernel: x2apic enabled
Jun 20 19:17:10.846066 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:17:10.846076 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 20 19:17:10.846088 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 20 19:17:10.846101 kernel: kvm-guest: setup PV IPIs
Jun 20 19:17:10.846111 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 19:17:10.846122 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jun 20 19:17:10.846133 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jun 20 19:17:10.846144 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:17:10.846155 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 20 19:17:10.846165 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 20 19:17:10.846176 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:17:10.846189 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:17:10.846200 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:17:10.846210 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 20 19:17:10.846221 kernel: RETBleed: Mitigation: untrained return thunk
Jun 20 19:17:10.846232 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 20 19:17:10.846243 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 20 19:17:10.846254 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 20 19:17:10.846265 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 20 19:17:10.846276 kernel: x86/bugs: return thunk changed
Jun 20 19:17:10.846290 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 20 19:17:10.846301 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:17:10.846312 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:17:10.846323 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:17:10.846334 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:17:10.846345 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 20 19:17:10.846356 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:17:10.846367 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:17:10.846381 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:17:10.846391 kernel: landlock: Up and running.
Jun 20 19:17:10.846402 kernel: SELinux: Initializing.
Jun 20 19:17:10.846412 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:17:10.846438 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:17:10.846449 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 20 19:17:10.846460 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 20 19:17:10.846470 kernel: ... version: 0
Jun 20 19:17:10.846481 kernel: ... bit width: 48
Jun 20 19:17:10.846496 kernel: ... generic registers: 6
Jun 20 19:17:10.846506 kernel: ... value mask: 0000ffffffffffff
Jun 20 19:17:10.846517 kernel: ... max period: 00007fffffffffff
Jun 20 19:17:10.846527 kernel: ... fixed-purpose events: 0
Jun 20 19:17:10.846538 kernel: ... event mask: 000000000000003f
Jun 20 19:17:10.846548 kernel: signal: max sigframe size: 1776
Jun 20 19:17:10.846559 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:17:10.846570 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:17:10.846581 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:17:10.846592 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:17:10.846606 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:17:10.846617 kernel: .... node #0, CPUs: #1 #2 #3
Jun 20 19:17:10.846628 kernel: smp: Brought up 1 node, 4 CPUs
Jun 20 19:17:10.846638 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jun 20 19:17:10.846650 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 136904K reserved, 0K cma-reserved)
Jun 20 19:17:10.846661 kernel: devtmpfs: initialized
Jun 20 19:17:10.846672 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:17:10.846683 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:17:10.846694 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 20 19:17:10.846709 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:17:10.846719 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:17:10.846730 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:17:10.846757 kernel: audit: type=2000 audit(1750447027.628:1): state=initialized audit_enabled=0 res=1
Jun 20 19:17:10.846768 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:17:10.846779 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:17:10.846790 kernel: cpuidle: using governor menu
Jun 20 19:17:10.846801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:17:10.846815 kernel: dca service started, version 1.12.1
Jun 20 19:17:10.846830 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jun 20 19:17:10.846841 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jun 20 19:17:10.846852 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:17:10.846863 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:17:10.846874 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:17:10.846885 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:17:10.846896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:17:10.846907 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:17:10.846921 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:17:10.846932 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:17:10.846943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:17:10.846954 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:17:10.846965 kernel: ACPI: Interpreter enabled
Jun 20 19:17:10.846977 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 20 19:17:10.846996 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:17:10.847008 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:17:10.847019 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:17:10.847030 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 20 19:17:10.847044 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:17:10.847332 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:17:10.847490 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 20 19:17:10.847643 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 20 19:17:10.847659 kernel: PCI host bridge to bus 0000:00
Jun 20 19:17:10.847842 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:17:10.848002 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:17:10.848172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:17:10.848319 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jun 20 19:17:10.848506 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 19:17:10.848656 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jun 20 19:17:10.848820 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:17:10.849022 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:17:10.849236 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:17:10.849386 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jun 20 19:17:10.849526 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jun 20 19:17:10.849650 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jun 20 19:17:10.849823 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:17:10.850026 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:17:10.850201 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jun 20 19:17:10.850375 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jun 20 19:17:10.850539 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jun 20 19:17:10.850752 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 20 19:17:10.850927 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jun 20 19:17:10.851106 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jun 20 19:17:10.851274 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jun 20 19:17:10.851469 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 20 19:17:10.851648 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jun 20 19:17:10.851838 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jun 20 19:17:10.852041 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jun 20 19:17:10.852198 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jun 20 19:17:10.852374 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:17:10.852535 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 20 19:17:10.852714 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jun 20 19:17:10.852892 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jun 20 19:17:10.853074 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jun 20 19:17:10.853251 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jun 20 19:17:10.853423 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jun 20 19:17:10.853456 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:17:10.853468 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:17:10.853484 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:17:10.853498 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:17:10.853509 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 20 19:17:10.853520 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 20 19:17:10.853531 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 20 19:17:10.853543 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 20 19:17:10.853554 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 20 19:17:10.853565 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 20 19:17:10.853576 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 20 19:17:10.853591 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 20 19:17:10.853602 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 20 19:17:10.853613 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 20 19:17:10.853624 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 20 19:17:10.853635 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 20 19:17:10.853646 kernel: iommu: Default domain type: Translated
Jun 20 19:17:10.853657 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:17:10.853668 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:17:10.853678 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:17:10.853693 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 20 19:17:10.853704 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jun 20 19:17:10.853897 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 20 19:17:10.854104 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 20 19:17:10.854262 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:17:10.854278 kernel: vgaarb: loaded
Jun 20 19:17:10.854290 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 19:17:10.854302 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 20 19:17:10.854318 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:17:10.854330 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:17:10.854342 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:17:10.854353 kernel: pnp: PnP ACPI init
Jun 20 19:17:10.854569 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jun 20 19:17:10.854587 kernel: pnp: PnP ACPI: found 6 devices
Jun 20 19:17:10.854599 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:17:10.854610 kernel: NET: Registered PF_INET protocol family
Jun 20 19:17:10.854621 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:17:10.854637 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:17:10.854648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:17:10.854660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:17:10.854671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 19:17:10.854682 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 19:17:10.854693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:17:10.854704 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:17:10.854715 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:17:10.854729 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:17:10.854923 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:17:10.855077 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:17:10.855224 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:17:10.855383 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jun 20 19:17:10.855545 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jun 20 19:17:10.855695 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jun 20 19:17:10.855711 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:17:10.855723 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jun 20 19:17:10.855758 kernel: Initialise system trusted keyrings
Jun 20 19:17:10.855769 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 19:17:10.855781 kernel: Key type asymmetric registered
Jun 20 19:17:10.855791 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:17:10.855802 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:17:10.855814 kernel: io scheduler mq-deadline registered
Jun 20 19:17:10.855825 kernel: io scheduler kyber registered
Jun 20 19:17:10.855836 kernel: io scheduler bfq registered
Jun 20 19:17:10.855847 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:17:10.855863 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 20 19:17:10.855874 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 20 19:17:10.855885 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jun 20 19:17:10.855907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:17:10.855921 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:17:10.855940 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:17:10.855952 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:17:10.855963 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:17:10.855975 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 19:17:10.856203 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 19:17:10.856375 kernel: rtc_cmos 00:04: registered as rtc0
Jun 20 19:17:10.856534 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:17:10 UTC (1750447030)
Jun 20 19:17:10.856676 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 20 19:17:10.856691 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 19:17:10.856701 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:17:10.856712 kernel: Segment Routing with IPv6
Jun 20 19:17:10.856723 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:17:10.856757 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:17:10.856768 kernel: Key type dns_resolver registered
Jun 20 19:17:10.856779 kernel: IPI shorthand broadcast: enabled
Jun 20 19:17:10.856790 kernel: sched_clock: Marking stable (3382004449, 113652129)->(3533566350, -37909772)
Jun 20 19:17:10.856801 kernel: registered taskstats version 1
Jun 20 19:17:10.856813 kernel: Loading compiled-in X.509 certificates
Jun 20 19:17:10.856824 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:17:10.856835 kernel: Demotion targets for Node 0: null
Jun 20 19:17:10.856845 kernel: Key type .fscrypt registered
Jun 20 19:17:10.856861 kernel: Key type fscrypt-provisioning registered
Jun 20 19:17:10.856875 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:17:10.856889 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:17:10.856903 kernel: ima: No architecture policies found
Jun 20 19:17:10.856916 kernel: clk: Disabling unused clocks
Jun 20 19:17:10.856930 kernel: Warning: unable to open an initial console.
Jun 20 19:17:10.856954 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:17:10.856997 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:17:10.857016 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:17:10.857027 kernel: Run /init as init process
Jun 20 19:17:10.857038 kernel: with arguments:
Jun 20 19:17:10.857048 kernel: /init
Jun 20 19:17:10.857059 kernel: with environment:
Jun 20 19:17:10.857069 kernel: HOME=/
Jun 20 19:17:10.857080 kernel: TERM=linux
Jun 20 19:17:10.857091 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:17:10.857108 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:17:10.857129 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:17:10.857159 systemd[1]: Detected virtualization kvm.
Jun 20 19:17:10.857171 systemd[1]: Detected architecture x86-64.
Jun 20 19:17:10.857183 systemd[1]: Running in initrd.
Jun 20 19:17:10.857195 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:17:10.857222 systemd[1]: Hostname set to .
Jun 20 19:17:10.857235 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:17:10.857247 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:17:10.857260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:17:10.857272 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:17:10.857285 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:17:10.857298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:17:10.857327 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:17:10.857354 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:17:10.857372 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:17:10.857396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:17:10.857419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:17:10.857439 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:17:10.857452 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:17:10.857463 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:17:10.857479 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:17:10.857498 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:17:10.857516 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:17:10.857528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:17:10.857543 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:17:10.857555 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:17:10.857567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:17:10.857578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:17:10.857587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:17:10.857601 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:17:10.857610 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:17:10.857619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:17:10.857627 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:17:10.857637 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:17:10.857651 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:17:10.857660 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:17:10.857669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:17:10.857678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:17:10.857687 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:17:10.857697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:17:10.857708 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:17:10.857717 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:17:10.857778 systemd-journald[220]: Collecting audit messages is disabled.
Jun 20 19:17:10.857849 systemd-journald[220]: Journal started
Jun 20 19:17:10.857874 systemd-journald[220]: Runtime Journal (/run/log/journal/85254ef6de75430583c7a888ffe132c2) is 6M, max 48.6M, 42.5M free.
Jun 20 19:17:10.842139 systemd-modules-load[222]: Inserted module 'overlay'
Jun 20 19:17:10.902339 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:17:10.904861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:17:10.915676 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:17:10.958245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:17:10.958289 kernel: Bridge firewalling registered
Jun 20 19:17:10.918861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:17:10.922235 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:17:10.933601 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jun 20 19:17:10.964070 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:17:10.967771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:17:10.970580 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:17:10.974899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:17:10.977546 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:17:10.981138 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:17:10.997166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:17:10.999570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:17:11.013928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:17:11.016675 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:17:11.057348 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:17:11.063802 systemd-resolved[254]: Positive Trust Anchors:
Jun 20 19:17:11.063829 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:17:11.063863 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:17:11.068710 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jun 20 19:17:11.070395 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:17:11.076341 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:17:11.189795 kernel: SCSI subsystem initialized
Jun 20 19:17:11.198780 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:17:11.210779 kernel: iscsi: registered transport (tcp)
Jun 20 19:17:11.233777 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:17:11.233846 kernel: QLogic iSCSI HBA Driver
Jun 20 19:17:11.254605 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:17:11.276178 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:17:11.281100 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:17:11.342187 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:17:11.344862 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:17:11.412808 kernel: raid6: avx2x4 gen() 23715 MB/s
Jun 20 19:17:11.429798 kernel: raid6: avx2x2 gen() 18962 MB/s
Jun 20 19:17:11.447063 kernel: raid6: avx2x1 gen() 20983 MB/s
Jun 20 19:17:11.447142 kernel: raid6: using algorithm avx2x4 gen() 23715 MB/s
Jun 20 19:17:11.464983 kernel: raid6: .... xor() 6440 MB/s, rmw enabled
Jun 20 19:17:11.465096 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 19:17:11.489809 kernel: xor: automatically using best checksumming function avx
Jun 20 19:17:11.704807 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:17:11.716970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:17:11.720388 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:17:11.760389 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jun 20 19:17:11.766399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:17:11.769404 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:17:11.799239 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Jun 20 19:17:11.835388 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:17:11.840154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:17:11.927277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:17:11.933286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:17:11.996806 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jun 20 19:17:12.000623 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jun 20 19:17:12.011398 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:17:12.011530 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jun 20 19:17:12.011599 kernel: GPT:9289727 != 19775487
Jun 20 19:17:12.011622 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:17:12.011633 kernel: GPT:9289727 != 19775487
Jun 20 19:17:12.011643 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:17:12.011654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:17:12.016767 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:17:12.035776 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:17:12.035844 kernel: libata version 3.00 loaded.
Jun 20 19:17:12.050767 kernel: ahci 0000:00:1f.2: version 3.0
Jun 20 19:17:12.052893 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jun 20 19:17:12.058759 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jun 20 19:17:12.059017 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jun 20 19:17:12.059194 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jun 20 19:17:12.063378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:17:12.063556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:17:12.068752 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:17:12.074236 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:17:12.078437 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:17:12.081870 kernel: scsi host0: ahci
Jun 20 19:17:12.085797 kernel: scsi host1: ahci
Jun 20 19:17:12.086762 kernel: scsi host2: ahci
Jun 20 19:17:12.089978 kernel: scsi host3: ahci
Jun 20 19:17:12.091104 kernel: scsi host4: ahci
Jun 20 19:17:12.093635 kernel: scsi host5: ahci
Jun 20 19:17:12.093918 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jun 20 19:17:12.093936 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jun 20 19:17:12.095294 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jun 20 19:17:12.095319 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jun 20 19:17:12.097143 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jun 20 19:17:12.097167 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jun 20 19:17:12.102015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 20 19:17:12.126403 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:17:12.163979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 20 19:17:12.165692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 20 19:17:12.169205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:17:12.182663 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 20 19:17:12.186330 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:17:12.244987 disk-uuid[633]: Primary Header is updated.
Jun 20 19:17:12.244987 disk-uuid[633]: Secondary Entries is updated.
Jun 20 19:17:12.244987 disk-uuid[633]: Secondary Header is updated.
Jun 20 19:17:12.250773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:17:12.255774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:17:12.410119 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 20 19:17:12.410202 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jun 20 19:17:12.410214 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 20 19:17:12.410230 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jun 20 19:17:12.411772 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 20 19:17:12.412795 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 20 19:17:12.412824 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jun 20 19:17:12.413772 kernel: ata3.00: applying bridge limits
Jun 20 19:17:12.414782 kernel: ata3.00: configured for UDMA/100
Jun 20 19:17:12.414805 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jun 20 19:17:12.483922 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jun 20 19:17:12.484450 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 19:17:12.505783 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jun 20 19:17:12.908976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:17:12.910878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:17:12.913066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:17:12.914336 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:17:12.917825 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:17:12.954100 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:17:13.256793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:17:13.257051 disk-uuid[634]: The operation has completed successfully.
Jun 20 19:17:13.299273 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:17:13.299407 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:17:13.339314 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:17:13.362205 sh[662]: Success
Jun 20 19:17:13.381784 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:17:13.381882 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:17:13.381901 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:17:13.394804 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jun 20 19:17:13.433016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:17:13.437355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:17:13.462623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:17:13.467776 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:17:13.467811 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (674)
Jun 20 19:17:13.470630 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:17:13.470658 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:17:13.471519 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:17:13.478012 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:17:13.478898 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:17:13.480214 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:17:13.481476 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:17:13.484118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:17:13.526810 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (708)
Jun 20 19:17:13.526884 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:17:13.528818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:17:13.528856 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:17:13.537803 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:17:13.538154 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:17:13.542238 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:17:13.681138 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:17:13.686218 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:17:13.690026 ignition[754]: Ignition 2.21.0
Jun 20 19:17:13.690924 ignition[754]: Stage: fetch-offline
Jun 20 19:17:13.690962 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:17:13.690972 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:17:13.691097 ignition[754]: parsed url from cmdline: ""
Jun 20 19:17:13.691101 ignition[754]: no config URL provided
Jun 20 19:17:13.691107 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:17:13.691116 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:17:13.691147 ignition[754]: op(1): [started] loading QEMU firmware config module
Jun 20 19:17:13.691153 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jun 20 19:17:13.702152 ignition[754]: op(1): [finished] loading QEMU firmware config module
Jun 20 19:17:13.745626 ignition[754]: parsing config with SHA512: 4181323c2ea1fcc12468835bdae4a7413646024f5af5b0f92eea129cf5c75b7d3799dd07ed56e113d6ce0253de5a0194da3e39100091e2ecd55fd51259a1e897
Jun 20 19:17:13.750545 unknown[754]: fetched base config from "system"
Jun 20 19:17:13.750563 unknown[754]: fetched user config from "qemu"
Jun 20 19:17:13.751168 systemd-networkd[851]: lo: Link UP
Jun 20 19:17:13.751174 systemd-networkd[851]: lo: Gained carrier
Jun 20 19:17:13.753219 systemd-networkd[851]: Enumeration completed
Jun 20 19:17:13.754814 ignition[754]: fetch-offline: fetch-offline passed
Jun 20 19:17:13.753850 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:17:13.754934 ignition[754]: Ignition finished successfully
Jun 20 19:17:13.754261 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:17:13.754267 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:17:13.755038 systemd-networkd[851]: eth0: Link UP
Jun 20 19:17:13.755043 systemd-networkd[851]: eth0: Gained carrier
Jun 20 19:17:13.755054 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:17:13.773352 systemd[1]: Reached target network.target - Network.
Jun 20 19:17:13.776030 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:17:13.779002 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jun 20 19:17:13.780732 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:17:13.794820 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 20 19:17:13.838106 ignition[857]: Ignition 2.21.0
Jun 20 19:17:13.838128 ignition[857]: Stage: kargs
Jun 20 19:17:13.838316 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:17:13.838333 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:17:13.840148 ignition[857]: kargs: kargs passed
Jun 20 19:17:13.840235 ignition[857]: Ignition finished successfully
Jun 20 19:17:13.844849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:17:13.847375 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:17:13.926443 ignition[867]: Ignition 2.21.0
Jun 20 19:17:13.926465 ignition[867]: Stage: disks
Jun 20 19:17:13.926673 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:17:13.926686 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:17:13.927652 ignition[867]: disks: disks passed
Jun 20 19:17:13.930779 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:17:13.927707 ignition[867]: Ignition finished successfully
Jun 20 19:17:13.932193 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:17:13.932974 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:17:13.936657 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:17:13.936766 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:17:13.937332 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:17:13.939101 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:17:13.967574 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jun 20 19:17:13.976487 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:17:13.979451 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:17:14.112805 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:17:14.113559 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:17:14.115312 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:17:14.118846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:17:14.121294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:17:14.122796 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:17:14.122856 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:17:14.122890 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:17:14.137864 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:17:14.139677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:17:14.146766 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (885)
Jun 20 19:17:14.150218 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:17:14.150259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:17:14.150290 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:17:14.156975 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:17:14.251202 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:17:14.256929 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:17:14.263048 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:17:14.268677 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:17:14.378712 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:17:14.379859 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:17:14.383909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:17:14.404774 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:17:14.421827 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:17:14.442603 ignition[998]: INFO : Ignition 2.21.0
Jun 20 19:17:14.442603 ignition[998]: INFO : Stage: mount
Jun 20 19:17:14.445488 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:17:14.445488 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:17:14.445488 ignition[998]: INFO : mount: mount passed
Jun 20 19:17:14.445488 ignition[998]: INFO : Ignition finished successfully
Jun 20 19:17:14.452599 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:17:14.455258 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:17:14.468530 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:17:14.495601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:17:14.535768 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1011)
Jun 20 19:17:14.537992 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:17:14.538022 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:17:14.538038 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:17:14.542683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:17:15.060145 ignition[1028]: INFO : Ignition 2.21.0
Jun 20 19:17:15.062960 ignition[1028]: INFO : Stage: files
Jun 20 19:17:15.062960 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:17:15.062960 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:17:15.062960 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:17:15.068104 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:17:15.068104 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:17:15.073462 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:17:15.075671 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:17:15.078288 unknown[1028]: wrote ssh authorized keys file for user: core
Jun 20 19:17:15.080199 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:17:15.083059 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 20 19:17:15.085494 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 20 19:17:15.204474 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:17:15.556809 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:17:15.559420 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:17:15.578059 systemd-networkd[851]: eth0: Gained IPv6LL
Jun 20 19:17:15.621270 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:17:15.623291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:17:15.623291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:17:15.635722 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:17:15.635722 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:17:15.640678 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jun 20 19:17:16.373074 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 19:17:17.349840 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:17:17.349840 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 20 19:17:17.353969 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:17:17.439093 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:17:17.439093 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 20 19:17:17.439093 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jun 20 19:17:17.439093 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:17:17.447086 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:17:17.447086 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jun 20 19:17:17.447086 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:17:17.540985 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:17:17.550244 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:17:17.552294 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:17:17.552294 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:17:17.552294 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:17:17.552294 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:17:17.552294 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:17:17.552294 ignition[1028]: INFO : files: files passed
Jun 20 19:17:17.552294 ignition[1028]: INFO : Ignition finished successfully
Jun 20 19:17:17.566973 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:17:17.570064 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:17:17.573015 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:17:17.589261 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:17:17.589467 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:17:17.595115 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 20 19:17:17.599873 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:17:17.599873 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:17:17.603412 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:17:17.607114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:17:17.608722 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:17:17.611254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:17:17.686692 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:17:17.686879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:17:17.690051 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:17:17.692970 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:17:17.695727 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:17:17.699213 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:17:17.742031 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:17:17.745797 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:17:17.801423 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:17:17.801659 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:17:17.805188 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:17:17.806388 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:17:17.806538 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:17:17.809049 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:17:17.809429 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:17:17.809869 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:17:17.810481 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:17:17.810864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:17:17.811389 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:17:17.811762 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:17:17.812346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:17:17.812703 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:17:17.813244 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:17:17.813619 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:17:17.814163 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:17:17.814296 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:17:17.840309 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:17:17.840487 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:17:17.843127 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:17:17.846673 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:17:17.849405 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:17:17.849565 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:17:17.852952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:17:17.853115 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:17:17.854374 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:17:17.857450 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:17:17.861875 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:17:17.862042 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:17:17.865658 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:17:17.866770 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:17:17.866914 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:17:17.867333 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:17:17.867430 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:17:17.872589 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:17:17.872768 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:17:17.873665 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:17:17.873841 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:17:17.877305 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:17:17.878939 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:17:17.879061 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:17:17.881938 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:17:17.882587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:17:17.882819 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:17:17.892045 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:17:17.892261 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:17:17.901945 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:17:17.903360 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:17:17.993667 ignition[1083]: INFO : Ignition 2.21.0
Jun 20 19:17:17.993667 ignition[1083]: INFO : Stage: umount
Jun 20 19:17:17.996542 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:17:17.996542 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 20 19:17:17.996542 ignition[1083]: INFO : umount: umount passed
Jun 20 19:17:17.996542 ignition[1083]: INFO : Ignition finished successfully
Jun 20 19:17:17.997952 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:17:17.998826 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:17:17.998989 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:17:18.002069 systemd[1]: Stopped target network.target - Network.
Jun 20 19:17:18.003342 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:17:18.003444 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:17:18.005638 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:17:18.005710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:17:18.007878 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:17:18.007937 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:17:18.010002 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:17:18.010068 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:17:18.012488 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:17:18.014486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:17:18.017007 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:17:18.017152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:17:18.019236 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:17:18.019398 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:17:18.025553 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:17:18.025959 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:17:18.026131 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:17:18.029666 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:17:18.031625 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:17:18.033541 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:17:18.033619 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:17:18.035907 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:17:18.035990 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:17:18.039639 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:17:18.041990 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:17:18.042065 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:17:18.044530 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:17:18.044598 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:17:18.047098 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:17:18.047157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:17:18.049486 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:17:18.049543 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:17:18.053540 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:17:18.057287 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:17:18.057362 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:17:18.067701 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:17:18.077984 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:17:18.079963 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:17:18.080022 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:17:18.082007 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:17:18.082057 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:17:18.083151 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:17:18.083208 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:17:18.084060 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:17:18.084111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:17:18.089664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:17:18.089724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:17:18.091729 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:17:18.095557 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:17:18.095617 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:17:18.099463 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:17:18.099522 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:17:18.103441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:17:18.103491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:17:18.108769 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:17:18.108845 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:17:18.108899 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:17:18.109326 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:17:18.113066 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:17:18.119016 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:17:18.119145 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:17:18.121471 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:17:18.125553 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:17:18.154188 systemd[1]: Switching root.
Jun 20 19:17:18.203333 systemd-journald[220]: Journal stopped
Jun 20 19:17:19.571403 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:17:19.571463 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:17:19.571489 kernel: SELinux: policy capability open_perms=1
Jun 20 19:17:19.571501 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:17:19.571512 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:17:19.571529 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:17:19.571553 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:17:19.571568 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:17:19.571582 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:17:19.571593 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:17:19.571604 kernel: audit: type=1403 audit(1750447038.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:17:19.571617 systemd[1]: Successfully loaded SELinux policy in 56.607ms.
Jun 20 19:17:19.571645 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.199ms.
Jun 20 19:17:19.571662 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:17:19.571675 systemd[1]: Detected virtualization kvm.
Jun 20 19:17:19.571690 systemd[1]: Detected architecture x86-64.
Jun 20 19:17:19.571702 systemd[1]: Detected first boot.
Jun 20 19:17:19.571716 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:17:19.571728 zram_generator::config[1129]: No configuration found.
Jun 20 19:17:19.571773 kernel: Guest personality initialized and is inactive
Jun 20 19:17:19.571788 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:17:19.571799 kernel: Initialized host personality
Jun 20 19:17:19.571812 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:17:19.571832 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:17:19.571847 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:17:19.571859 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:17:19.571871 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:17:19.571884 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:17:19.571899 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:17:19.571915 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:17:19.571928 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:17:19.571943 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:17:19.571962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:17:19.571975 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:17:19.571987 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:17:19.571999 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:17:19.572011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:17:19.572023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:17:19.572037 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:17:19.572051 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:17:19.572072 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:17:19.572086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:17:19.572100 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:17:19.572116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:17:19.572129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:17:19.572141 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:17:19.572153 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:17:19.572166 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:17:19.572182 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:17:19.572199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:17:19.572213 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:17:19.572226 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:17:19.572242 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:17:19.572260 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:17:19.572274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:17:19.572286 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:17:19.572302 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:17:19.572321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:17:19.572334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:17:19.572352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:17:19.572369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:17:19.572381 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:17:19.572393 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:17:19.572417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:17:19.572431 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:17:19.572443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:17:19.572459 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:17:19.572472 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:17:19.572485 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:17:19.572498 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:17:19.572514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:17:19.572529 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:17:19.572542 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:17:19.572557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:17:19.572574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:17:19.572589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:17:19.572601 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:17:19.572614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:17:19.572626 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:17:19.572639 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:17:19.572653 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:17:19.572665 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:17:19.572677 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:17:19.572695 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:17:19.572712 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:17:19.572724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:17:19.573773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:17:19.573816 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:17:19.573829 kernel: loop: module loaded
Jun 20 19:17:19.573843 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:17:19.573856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:17:19.573878 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:17:19.573896 systemd[1]: Stopped verity-setup.service.
Jun 20 19:17:19.573911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:17:19.573940 kernel: fuse: init (API version 7.41)
Jun 20 19:17:19.573954 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:17:19.573966 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:17:19.573985 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:17:19.574000 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:17:19.574013 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:17:19.574026 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:17:19.574039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:17:19.574105 systemd-journald[1200]: Collecting audit messages is disabled.
Jun 20 19:17:19.574137 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:17:19.574154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:17:19.574167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:17:19.574180 systemd-journald[1200]: Journal started
Jun 20 19:17:19.574208 systemd-journald[1200]: Runtime Journal (/run/log/journal/85254ef6de75430583c7a888ffe132c2) is 6M, max 48.6M, 42.5M free.
Jun 20 19:17:19.290429 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:17:19.317233 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 20 19:17:19.317831 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:17:19.575224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:17:19.577843 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:17:19.581832 kernel: ACPI: bus type drm_connector registered
Jun 20 19:17:19.581722 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:17:19.583456 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:17:19.583709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:17:19.585166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:17:19.585405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:17:19.587132 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:17:19.587368 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:17:19.588950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:17:19.589167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:17:19.591026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:17:19.592611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:17:19.594294 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:17:19.595941 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:17:19.611930 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:17:19.615336 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:17:19.617573 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:17:19.618791 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:17:19.618822 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:17:19.620884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:17:19.627122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:17:19.628940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:17:19.630827 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:17:19.633291 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:17:19.634556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:17:19.636927 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:17:19.638119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:17:19.639786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:17:19.645087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:17:19.648864 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:17:19.652485 systemd-journald[1200]: Time spent on flushing to /var/log/journal/85254ef6de75430583c7a888ffe132c2 is 28.879ms for 977 entries.
Jun 20 19:17:19.652485 systemd-journald[1200]: System Journal (/var/log/journal/85254ef6de75430583c7a888ffe132c2) is 8M, max 195.6M, 187.6M free.
Jun 20 19:17:19.702496 systemd-journald[1200]: Received client request to flush runtime journal.
Jun 20 19:17:19.702632 kernel: loop0: detected capacity change from 0 to 113872
Jun 20 19:17:19.652588 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:17:19.655440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:17:19.669485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:17:19.672212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:17:19.677978 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:17:19.680786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:17:19.705831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:17:19.709331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:17:19.715788 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:17:19.723655 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:17:19.731386 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:17:19.765957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:17:19.772834 kernel: loop1: detected capacity change from 0 to 146240
Jun 20 19:17:19.809596 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jun 20 19:17:19.809620 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jun 20 19:17:19.813804 kernel: loop2: detected capacity change from 0 to 221472
Jun 20 19:17:19.818075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:17:19.849804 kernel: loop3: detected capacity change from 0 to 113872
Jun 20 19:17:19.910789 kernel: loop4: detected capacity change from 0 to 146240
Jun 20 19:17:19.929785 kernel: loop5: detected capacity change from 0 to 221472
Jun 20 19:17:19.939034 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 20 19:17:19.939718 (sd-merge)[1271]: Merged extensions into '/usr'.
Jun 20 19:17:19.972089 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:17:19.972105 systemd[1]: Reloading...
Jun 20 19:17:20.150058 zram_generator::config[1293]: No configuration found.
Jun 20 19:17:20.280399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:17:20.326710 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:17:20.378485 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:17:20.380250 systemd[1]: Reloading finished in 407 ms.
Jun 20 19:17:20.417044 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:17:20.420355 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:17:20.444595 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:17:20.446980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:17:20.473847 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:17:20.473871 systemd[1]: Reloading...
Jun 20 19:17:20.492901 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:17:20.492954 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:17:20.493404 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:17:20.493838 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:17:20.494984 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:17:20.495348 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jun 20 19:17:20.495449 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jun 20 19:17:20.505664 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:17:20.505682 systemd-tmpfiles[1336]: Skipping /boot Jun 20 19:17:20.562498 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:17:20.562518 systemd-tmpfiles[1336]: Skipping /boot Jun 20 19:17:20.587810 zram_generator::config[1367]: No configuration found. Jun 20 19:17:20.696856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:17:20.801119 systemd[1]: Reloading finished in 326 ms. Jun 20 19:17:20.849769 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:17:20.862365 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:17:20.886031 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:17:20.889095 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:17:20.893250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:17:20.901023 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:17:20.907963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:17:20.908148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:17:20.910991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:17:20.917308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:17:20.920340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 20 19:17:20.921931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:17:20.922079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:17:20.932604 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:17:20.934227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:17:20.936082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:17:20.941688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:17:20.942189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:17:20.944334 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:17:20.950266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:17:20.954925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:17:20.955385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:17:20.968488 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:17:21.002266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:17:21.009758 systemd[1]: Finished ensure-sysext.service. Jun 20 19:17:21.012034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:17:21.012311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 20 19:17:21.013912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:17:21.016637 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:17:21.022522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:17:21.027965 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:17:21.029813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:17:21.029863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:17:21.032316 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:17:21.035756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:17:21.038896 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:17:21.040246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:17:21.040831 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:17:21.042433 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:17:21.044088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:17:21.049040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:17:21.051020 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:17:21.051239 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:17:21.052977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 20 19:17:21.053266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:17:21.056185 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:17:21.056480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:17:21.097947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:17:21.098042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:17:21.098071 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:17:21.106694 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:17:21.108187 augenrules[1454]: No rules Jun 20 19:17:21.109621 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:17:21.110085 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:17:21.117260 systemd-udevd[1443]: Using default interface naming scheme 'v255'. Jun 20 19:17:21.139927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:17:21.147064 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:17:21.182594 systemd-resolved[1405]: Positive Trust Anchors: Jun 20 19:17:21.183039 systemd-resolved[1405]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:17:21.183124 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:17:21.195630 systemd-resolved[1405]: Defaulting to hostname 'linux'. Jun 20 19:17:21.200021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:17:21.201571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:17:21.208068 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:17:21.209806 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:17:21.211218 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:17:21.216285 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:17:21.217973 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:17:21.219661 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:17:21.229589 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:17:21.229629 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:17:21.230873 systemd[1]: Reached target time-set.target - System Time Set. 
Jun 20 19:17:21.236958 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:17:21.238489 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:17:21.239981 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:17:21.249032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:17:21.253055 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:17:21.338256 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:17:21.339967 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:17:21.341448 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:17:21.350494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:17:21.357769 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:17:21.361013 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:17:21.364450 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:17:21.376372 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:17:21.386052 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:17:21.386392 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:17:21.387837 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:17:21.387885 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:17:21.390088 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:17:21.393431 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jun 20 19:17:21.397520 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:17:21.407374 systemd-networkd[1467]: lo: Link UP Jun 20 19:17:21.407386 systemd-networkd[1467]: lo: Gained carrier Jun 20 19:17:21.409196 systemd-networkd[1467]: Enumeration completed Jun 20 19:17:21.410245 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:17:21.411539 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:17:21.413036 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:17:21.417082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:17:21.415266 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:17:21.415272 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:17:21.416026 systemd-networkd[1467]: eth0: Link UP Jun 20 19:17:21.416247 systemd-networkd[1467]: eth0: Gained carrier Jun 20 19:17:21.416261 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:17:21.417325 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:17:21.420915 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:17:21.428268 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 20 19:17:21.428562 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 20 19:17:21.426144 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jun 20 19:17:21.432275 jq[1504]: false Jun 20 19:17:21.433102 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:17:21.433021 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:17:21.434818 systemd-networkd[1467]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:17:21.435469 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jun 20 19:17:21.982428 extend-filesystems[1505]: Found /dev/vda6 Jun 20 19:17:21.977391 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 20 19:17:21.998573 extend-filesystems[1505]: Found /dev/vda9 Jun 20 19:17:21.998573 extend-filesystems[1505]: Checking size of /dev/vda9 Jun 20 19:17:21.977443 systemd-timesyncd[1442]: Initial clock synchronization to Fri 2025-06-20 19:17:21.977293 UTC. Jun 20 19:17:21.978516 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:17:21.980446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:17:21.980990 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:17:21.982135 systemd-resolved[1405]: Clock change detected. Flushing caches. Jun 20 19:17:21.983950 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:17:21.999274 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache Jun 20 19:17:21.999291 oslogin_cache_refresh[1507]: Refreshing passwd entry cache Jun 20 19:17:21.999501 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:17:22.000449 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:17:22.004520 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jun 20 19:17:22.006224 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:17:22.006669 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:17:22.007109 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:17:22.007439 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:17:22.009980 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:17:22.010564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:17:22.016412 jq[1529]: true Jun 20 19:17:22.028755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 20 19:17:22.029542 jq[1534]: true Jun 20 19:17:22.035722 systemd[1]: Reached target network.target - Network. Jun 20 19:17:22.040781 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:17:22.049798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:17:22.050534 update_engine[1519]: I20250620 19:17:22.050449 1519 main.cc:92] Flatcar Update Engine starting Jun 20 19:17:22.054064 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting Jun 20 19:17:22.054064 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:17:22.054064 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache Jun 20 19:17:22.053548 oslogin_cache_refresh[1507]: Failure getting users, quitting Jun 20 19:17:22.053573 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jun 20 19:17:22.053626 oslogin_cache_refresh[1507]: Refreshing group entry cache Jun 20 19:17:22.055030 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:17:22.063603 dbus-daemon[1501]: [system] SELinux support is enabled Jun 20 19:17:22.064403 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:17:22.066025 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:17:22.068584 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting Jun 20 19:17:22.068584 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:17:22.066576 oslogin_cache_refresh[1507]: Failure getting groups, quitting Jun 20 19:17:22.066591 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:17:22.069891 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:17:22.074007 extend-filesystems[1505]: Resized partition /dev/vda9 Jun 20 19:17:22.071388 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 20 19:17:22.075482 tar[1530]: linux-amd64/helm Jun 20 19:17:22.076490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:17:22.076535 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 20 19:17:22.076906 update_engine[1519]: I20250620 19:17:22.076753 1519 update_check_scheduler.cc:74] Next update check in 4m5s Jun 20 19:17:22.077894 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:17:22.077915 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:17:22.079754 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:17:22.092259 extend-filesystems[1562]: resize2fs 1.47.2 (1-Jan-2025) Jun 20 19:17:22.116550 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:17:22.118711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:17:22.137720 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:17:22.143337 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 20 19:17:22.151812 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:17:22.180460 systemd-logind[1515]: New seat seat0. Jun 20 19:17:22.183399 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:17:22.188054 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:17:22.312419 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:17:22.322603 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jun 20 19:17:22.372641 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 20 19:17:22.380393 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:17:22.380502 systemd-logind[1515]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:17:22.382931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:17:22.634127 kernel: kvm_amd: TSC scaling supported Jun 20 19:17:22.634177 kernel: kvm_amd: Nested Virtualization enabled Jun 20 19:17:22.634192 kernel: kvm_amd: Nested Paging enabled Jun 20 19:17:22.634205 kernel: kvm_amd: LBR virtualization supported Jun 20 19:17:22.634219 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 20 19:17:22.634231 kernel: kvm_amd: Virtual GIF supported Jun 20 19:17:22.634244 kernel: EDAC MC: Ver: 3.0.0 Jun 20 19:17:22.395031 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:17:22.395479 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:17:22.445604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:17:22.481448 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:17:22.500096 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:17:22.503739 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:17:22.507653 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:17:22.511695 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:17:22.636163 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 20 19:17:22.636163 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 19:17:22.636163 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jun 20 19:17:22.673965 extend-filesystems[1505]: Resized filesystem in /dev/vda9 Jun 20 19:17:22.637997 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:17:22.638548 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:17:22.764555 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:17:22.768890 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:17:22.771606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:17:22.775612 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:17:22.928701 containerd[1580]: time="2025-06-20T19:17:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:17:22.933552 containerd[1580]: time="2025-06-20T19:17:22.933513361Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:17:22.945307 containerd[1580]: time="2025-06-20T19:17:22.945233174Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.826µs" Jun 20 19:17:22.945307 containerd[1580]: time="2025-06-20T19:17:22.945284390Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:17:22.945307 containerd[1580]: time="2025-06-20T19:17:22.945303556Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:17:22.945578 containerd[1580]: time="2025-06-20T19:17:22.945549828Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:17:22.945578 containerd[1580]: time="2025-06-20T19:17:22.945569905Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:17:22.945620 containerd[1580]: time="2025-06-20T19:17:22.945599220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:17:22.945696 containerd[1580]: time="2025-06-20T19:17:22.945670283Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:17:22.945696 containerd[1580]: time="2025-06-20T19:17:22.945686524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946028 containerd[1580]: time="2025-06-20T19:17:22.945992889Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946028 containerd[1580]: time="2025-06-20T19:17:22.946014609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946028 containerd[1580]: time="2025-06-20T19:17:22.946025790Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946092 containerd[1580]: time="2025-06-20T19:17:22.946033946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946179 containerd[1580]: time="2025-06-20T19:17:22.946153239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946486 containerd[1580]: time="2025-06-20T19:17:22.946458883Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 
19:17:22.946510 containerd[1580]: time="2025-06-20T19:17:22.946498076Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:17:22.946530 containerd[1580]: time="2025-06-20T19:17:22.946508426Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:17:22.946622 containerd[1580]: time="2025-06-20T19:17:22.946558750Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:17:22.950774 containerd[1580]: time="2025-06-20T19:17:22.950737829Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:17:22.950864 containerd[1580]: time="2025-06-20T19:17:22.950846412Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:17:23.094061 containerd[1580]: time="2025-06-20T19:17:23.093972757Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094104704Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094130603Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094158765Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094178432Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094193090Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094208819Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:17:23.094232 containerd[1580]: time="2025-06-20T19:17:23.094227294Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:17:23.094493 containerd[1580]: time="2025-06-20T19:17:23.094260346Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:17:23.094493 containerd[1580]: time="2025-06-20T19:17:23.094276156Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:17:23.094493 containerd[1580]: time="2025-06-20T19:17:23.094291805Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:17:23.094493 containerd[1580]: time="2025-06-20T19:17:23.094327913Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:17:23.094663 containerd[1580]: time="2025-06-20T19:17:23.094601786Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:17:23.094663 containerd[1580]: time="2025-06-20T19:17:23.094642242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:17:23.094663 containerd[1580]: time="2025-06-20T19:17:23.094662871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:17:23.094663 containerd[1580]: time="2025-06-20T19:17:23.094678580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094693438Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094735547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094754954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094791673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094806971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094818463Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:17:23.094874 containerd[1580]: time="2025-06-20T19:17:23.094830305Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:17:23.095031 containerd[1580]: time="2025-06-20T19:17:23.094955420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:17:23.095031 containerd[1580]: time="2025-06-20T19:17:23.094972993Z" level=info msg="Start snapshots syncer"
Jun 20 19:17:23.095031 containerd[1580]: time="2025-06-20T19:17:23.095015573Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:17:23.147061 containerd[1580]: time="2025-06-20T19:17:23.146942513Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:17:23.147061 containerd[1580]: time="2025-06-20T19:17:23.147065954Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147275026Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147507332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147534413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147545564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147555332Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147573987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147588444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147599154Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147632958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147645181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147655049Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147679525Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147694183Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:17:23.147723 containerd[1580]: time="2025-06-20T19:17:23.147703440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147712146Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147719550Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147729018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147741932Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147788650Z" level=info msg="runtime interface created"
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147796234Z" level=info msg="created NRI interface"
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147814027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147838704Z" level=info msg="Connect containerd service"
Jun 20 19:17:23.148119 containerd[1580]: time="2025-06-20T19:17:23.147861416Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:17:23.148848 containerd[1580]: time="2025-06-20T19:17:23.148811609Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:17:23.199460 tar[1530]: linux-amd64/LICENSE
Jun 20 19:17:23.199858 tar[1530]: linux-amd64/README.md
Jun 20 19:17:23.224885 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 19:17:23.321607 containerd[1580]: time="2025-06-20T19:17:23.321467452Z" level=info msg="Start subscribing containerd event"
Jun 20 19:17:23.321607 containerd[1580]: time="2025-06-20T19:17:23.321533335Z" level=info msg="Start recovering state"
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321669450Z" level=info msg="Start event monitor"
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321691552Z" level=info msg="Start cni network conf syncer for default"
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321701200Z" level=info msg="Start streaming server"
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321718322Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321726237Z" level=info msg="runtime interface starting up..."
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321733951Z" level=info msg="starting plugins..."
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321750082Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 20 19:17:23.321865 containerd[1580]: time="2025-06-20T19:17:23.321804053Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:17:23.322099 containerd[1580]: time="2025-06-20T19:17:23.321875226Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 19:17:23.322099 containerd[1580]: time="2025-06-20T19:17:23.321967469Z" level=info msg="containerd successfully booted in 0.393937s"
Jun 20 19:17:23.322402 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 19:17:23.861627 systemd-networkd[1467]: eth0: Gained IPv6LL
Jun 20 19:17:23.865492 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:17:23.867858 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:17:23.870919 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 20 19:17:23.874098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:17:23.900224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:17:23.921878 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 20 19:17:23.922206 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 20 19:17:23.924725 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:17:23.927286 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:17:24.819537 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:17:24.822500 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:54928.service - OpenSSH per-connection server daemon (10.0.0.1:54928).
Jun 20 19:17:25.016333 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 54928 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:25.018930 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:25.032717 systemd-logind[1515]: New session 1 of user core.
Jun 20 19:17:25.034411 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:17:25.036902 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:17:25.066610 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 19:17:25.084756 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 19:17:25.103042 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:17:25.106034 systemd-logind[1515]: New session c1 of user core.
Jun 20 19:17:25.210309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:17:25.221573 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:17:25.233748 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:17:25.267558 systemd[1665]: Queued start job for default target default.target.
Jun 20 19:17:25.282848 systemd[1665]: Created slice app.slice - User Application Slice.
Jun 20 19:17:25.282884 systemd[1665]: Reached target paths.target - Paths.
Jun 20 19:17:25.282942 systemd[1665]: Reached target timers.target - Timers.
Jun 20 19:17:25.284999 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:17:25.299592 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:17:25.299751 systemd[1665]: Reached target sockets.target - Sockets.
Jun 20 19:17:25.299800 systemd[1665]: Reached target basic.target - Basic System.
Jun 20 19:17:25.299842 systemd[1665]: Reached target default.target - Main User Target.
Jun 20 19:17:25.299881 systemd[1665]: Startup finished in 185ms.
Jun 20 19:17:25.300564 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:17:25.311548 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:17:25.313153 systemd[1]: Startup finished in 3.447s (kernel) + 7.988s (initrd) + 6.205s (userspace) = 17.641s.
Jun 20 19:17:25.377146 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944).
Jun 20 19:17:25.435549 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:25.437619 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:25.443880 systemd-logind[1515]: New session 2 of user core.
Jun 20 19:17:25.458517 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:17:25.515417 sshd[1693]: Connection closed by 10.0.0.1 port 54944
Jun 20 19:17:25.515731 sshd-session[1691]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:25.524019 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:54944.service: Deactivated successfully.
Jun 20 19:17:25.525944 systemd[1]: session-2.scope: Deactivated successfully.
Jun 20 19:17:25.526713 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit.
Jun 20 19:17:25.529795 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:54956.service - OpenSSH per-connection server daemon (10.0.0.1:54956).
Jun 20 19:17:25.530819 systemd-logind[1515]: Removed session 2.
Jun 20 19:17:25.594926 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 54956 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:25.596876 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:25.602237 systemd-logind[1515]: New session 3 of user core.
Jun 20 19:17:25.610462 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:17:25.662483 sshd[1703]: Connection closed by 10.0.0.1 port 54956
Jun 20 19:17:25.662750 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:25.676298 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:54956.service: Deactivated successfully.
Jun 20 19:17:25.678255 systemd[1]: session-3.scope: Deactivated successfully.
Jun 20 19:17:25.679130 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit.
Jun 20 19:17:25.682020 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:54968.service - OpenSSH per-connection server daemon (10.0.0.1:54968).
Jun 20 19:17:25.682772 systemd-logind[1515]: Removed session 3.
Jun 20 19:17:25.700773 kubelet[1676]: E0620 19:17:25.700694 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:17:25.705264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:17:25.705516 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:17:25.705927 systemd[1]: kubelet.service: Consumed 1.538s CPU time, 265.6M memory peak.
Jun 20 19:17:25.742788 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 54968 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:25.744745 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:25.749915 systemd-logind[1515]: New session 4 of user core.
Jun 20 19:17:25.759677 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:17:25.813853 sshd[1712]: Connection closed by 10.0.0.1 port 54968
Jun 20 19:17:25.814122 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:25.827988 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:54968.service: Deactivated successfully.
Jun 20 19:17:25.829702 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:17:25.830416 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:17:25.833008 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:54974.service - OpenSSH per-connection server daemon (10.0.0.1:54974).
Jun 20 19:17:25.833744 systemd-logind[1515]: Removed session 4.
Jun 20 19:17:25.889853 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 54974 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:25.891363 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:25.895611 systemd-logind[1515]: New session 5 of user core.
Jun 20 19:17:25.905437 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:17:25.962667 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:17:25.963042 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:17:25.979100 sudo[1722]: pam_unix(sudo:session): session closed for user root
Jun 20 19:17:25.980675 sshd[1721]: Connection closed by 10.0.0.1 port 54974
Jun 20 19:17:25.981063 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:25.997999 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:54974.service: Deactivated successfully.
Jun 20 19:17:25.999730 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:17:26.000410 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:17:26.002971 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:54986.service - OpenSSH per-connection server daemon (10.0.0.1:54986).
Jun 20 19:17:26.003598 systemd-logind[1515]: Removed session 5.
Jun 20 19:17:26.056785 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 54986 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:26.058151 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:26.062349 systemd-logind[1515]: New session 6 of user core.
Jun 20 19:17:26.078438 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:17:26.132648 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:17:26.132953 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:17:26.204024 sudo[1732]: pam_unix(sudo:session): session closed for user root
Jun 20 19:17:26.210065 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:17:26.210484 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:17:26.219842 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:17:26.269693 augenrules[1754]: No rules
Jun 20 19:17:26.271176 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:17:26.271488 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:17:26.272498 sudo[1731]: pam_unix(sudo:session): session closed for user root
Jun 20 19:17:26.274028 sshd[1730]: Connection closed by 10.0.0.1 port 54986
Jun 20 19:17:26.274379 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Jun 20 19:17:26.285559 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:54986.service: Deactivated successfully.
Jun 20 19:17:26.287105 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:17:26.287805 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:17:26.290113 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:55000.service - OpenSSH per-connection server daemon (10.0.0.1:55000).
Jun 20 19:17:26.290938 systemd-logind[1515]: Removed session 6.
Jun 20 19:17:26.345351 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 55000 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:17:26.346916 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:17:26.351568 systemd-logind[1515]: New session 7 of user core.
Jun 20 19:17:26.361514 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:17:26.415340 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:17:26.415645 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:17:27.066234 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:17:27.086673 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:17:27.787575 dockerd[1788]: time="2025-06-20T19:17:27.787457122Z" level=info msg="Starting up"
Jun 20 19:17:27.791420 dockerd[1788]: time="2025-06-20T19:17:27.790100450Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:17:28.229864 dockerd[1788]: time="2025-06-20T19:17:28.229680186Z" level=info msg="Loading containers: start."
Jun 20 19:17:28.243348 kernel: Initializing XFRM netlink socket
Jun 20 19:17:28.535691 systemd-networkd[1467]: docker0: Link UP
Jun 20 19:17:28.543386 dockerd[1788]: time="2025-06-20T19:17:28.543302912Z" level=info msg="Loading containers: done."
Jun 20 19:17:28.559282 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1914083411-merged.mount: Deactivated successfully.
Jun 20 19:17:28.563516 dockerd[1788]: time="2025-06-20T19:17:28.563462456Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:17:28.563622 dockerd[1788]: time="2025-06-20T19:17:28.563594473Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:17:28.563777 dockerd[1788]: time="2025-06-20T19:17:28.563746619Z" level=info msg="Initializing buildkit"
Jun 20 19:17:28.625766 dockerd[1788]: time="2025-06-20T19:17:28.625638851Z" level=info msg="Completed buildkit initialization"
Jun 20 19:17:28.633092 dockerd[1788]: time="2025-06-20T19:17:28.633022150Z" level=info msg="Daemon has completed initialization"
Jun 20 19:17:28.633262 dockerd[1788]: time="2025-06-20T19:17:28.633140662Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:17:28.633365 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:17:29.376110 containerd[1580]: time="2025-06-20T19:17:29.376009552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jun 20 19:17:30.098039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045524477.mount: Deactivated successfully.
Jun 20 19:17:34.281531 containerd[1580]: time="2025-06-20T19:17:34.281439777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:34.319623 containerd[1580]: time="2025-06-20T19:17:34.319563957Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jun 20 19:17:34.350941 containerd[1580]: time="2025-06-20T19:17:34.350818532Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:34.392963 containerd[1580]: time="2025-06-20T19:17:34.392856513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:34.393797 containerd[1580]: time="2025-06-20T19:17:34.393752674Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 5.017674423s"
Jun 20 19:17:34.393797 containerd[1580]: time="2025-06-20T19:17:34.393795003Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jun 20 19:17:34.394502 containerd[1580]: time="2025-06-20T19:17:34.394465922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jun 20 19:17:35.762943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:17:35.765167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:17:36.384782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:17:36.390018 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:17:36.446209 kubelet[2062]: E0620 19:17:36.446121 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:17:36.453457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:17:36.453721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:17:36.454254 systemd[1]: kubelet.service: Consumed 308ms CPU time, 110.8M memory peak.
Jun 20 19:17:40.485229 containerd[1580]: time="2025-06-20T19:17:40.485155214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:40.486361 containerd[1580]: time="2025-06-20T19:17:40.486303438Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jun 20 19:17:40.487861 containerd[1580]: time="2025-06-20T19:17:40.487817758Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:40.491088 containerd[1580]: time="2025-06-20T19:17:40.490987914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:40.492124 containerd[1580]: time="2025-06-20T19:17:40.492088238Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 6.097579667s"
Jun 20 19:17:40.492124 containerd[1580]: time="2025-06-20T19:17:40.492121040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jun 20 19:17:40.492761 containerd[1580]: time="2025-06-20T19:17:40.492721296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jun 20 19:17:42.447544 containerd[1580]: time="2025-06-20T19:17:42.447453845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:42.456769 containerd[1580]: time="2025-06-20T19:17:42.456682956Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jun 20 19:17:42.481490 containerd[1580]: time="2025-06-20T19:17:42.481444983Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:42.501566 containerd[1580]: time="2025-06-20T19:17:42.501475044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:42.502930 containerd[1580]: time="2025-06-20T19:17:42.502852748Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.01008795s"
Jun 20 19:17:42.502930 containerd[1580]: time="2025-06-20T19:17:42.502911348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jun 20 19:17:42.503599 containerd[1580]: time="2025-06-20T19:17:42.503556878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jun 20 19:17:43.773031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514938602.mount: Deactivated successfully.
Jun 20 19:17:45.335991 containerd[1580]: time="2025-06-20T19:17:45.335896588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:45.385111 containerd[1580]: time="2025-06-20T19:17:45.385034026Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jun 20 19:17:45.445038 containerd[1580]: time="2025-06-20T19:17:45.444962184Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:45.507677 containerd[1580]: time="2025-06-20T19:17:45.507578174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:45.508500 containerd[1580]: time="2025-06-20T19:17:45.508466971Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 3.004877311s"
Jun 20 19:17:45.508576 containerd[1580]: time="2025-06-20T19:17:45.508499963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jun 20 19:17:45.509044 containerd[1580]: time="2025-06-20T19:17:45.509019818Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 19:17:46.513192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:17:46.515748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:17:46.780219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:17:46.805714 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:17:47.666464 kubelet[2094]: E0620 19:17:47.666380 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:17:47.670444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:17:47.670679 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:17:47.671077 systemd[1]: kubelet.service: Consumed 306ms CPU time, 110.7M memory peak.
Jun 20 19:17:51.230044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485441445.mount: Deactivated successfully.
Jun 20 19:17:53.404376 containerd[1580]: time="2025-06-20T19:17:53.404296460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:53.405094 containerd[1580]: time="2025-06-20T19:17:53.405020528Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jun 20 19:17:53.406291 containerd[1580]: time="2025-06-20T19:17:53.406241729Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:53.409491 containerd[1580]: time="2025-06-20T19:17:53.409431181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:17:53.410388 containerd[1580]: time="2025-06-20T19:17:53.410345997Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 7.901296744s"
Jun 20 19:17:53.410452 containerd[1580]: time="2025-06-20T19:17:53.410388998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jun 20 19:17:53.411032 containerd[1580]: time="2025-06-20T19:17:53.410872074Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 19:17:53.926949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754024938.mount: Deactivated successfully.
Jun 20 19:17:53.933833 containerd[1580]: time="2025-06-20T19:17:53.933774720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:17:53.934563 containerd[1580]: time="2025-06-20T19:17:53.934532762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:17:53.935796 containerd[1580]: time="2025-06-20T19:17:53.935763811Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:17:53.937867 containerd[1580]: time="2025-06-20T19:17:53.937829105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:17:53.938515 containerd[1580]: time="2025-06-20T19:17:53.938483653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.57545ms" Jun 20 19:17:53.938515 containerd[1580]: time="2025-06-20T19:17:53.938511355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:17:53.939026 containerd[1580]: time="2025-06-20T19:17:53.938998669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 19:17:54.534003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296051228.mount: Deactivated 
successfully. Jun 20 19:17:56.852348 containerd[1580]: time="2025-06-20T19:17:56.852264348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:17:56.855328 containerd[1580]: time="2025-06-20T19:17:56.855264010Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jun 20 19:17:56.858116 containerd[1580]: time="2025-06-20T19:17:56.858075863Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:17:56.863768 containerd[1580]: time="2025-06-20T19:17:56.863717392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:17:56.864839 containerd[1580]: time="2025-06-20T19:17:56.864776713Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.925745713s" Jun 20 19:17:56.864839 containerd[1580]: time="2025-06-20T19:17:56.864826689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 19:17:57.720586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:17:57.722161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:17:57.926789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:17:57.944675 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:17:58.079794 kubelet[2245]: E0620 19:17:58.079726 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:17:58.083778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:17:58.083969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:17:58.084359 systemd[1]: kubelet.service: Consumed 314ms CPU time, 110.5M memory peak. Jun 20 19:17:59.273925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:17:59.274148 systemd[1]: kubelet.service: Consumed 314ms CPU time, 110.5M memory peak. Jun 20 19:17:59.276486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:17:59.311164 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)... Jun 20 19:17:59.311189 systemd[1]: Reloading... Jun 20 19:17:59.394345 zram_generator::config[2303]: No configuration found. Jun 20 19:18:00.163931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:18:00.307569 systemd[1]: Reloading finished in 995 ms. Jun 20 19:18:00.370247 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:18:00.370378 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:18:00.370713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:18:00.370761 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.5M memory peak. Jun 20 19:18:00.372431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:18:00.556209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:18:00.570695 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:18:00.614036 kubelet[2352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:18:00.614036 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:18:00.614036 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:18:00.614570 kubelet[2352]: I0620 19:18:00.614087 2352 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:18:01.319768 kubelet[2352]: I0620 19:18:01.319695 2352 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:18:01.319768 kubelet[2352]: I0620 19:18:01.319740 2352 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:18:01.320020 kubelet[2352]: I0620 19:18:01.319994 2352 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:18:01.428737 kubelet[2352]: E0620 19:18:01.428672 2352 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:01.440901 kubelet[2352]: I0620 19:18:01.440849 2352 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:18:01.450076 kubelet[2352]: I0620 19:18:01.450037 2352 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:18:01.471886 kubelet[2352]: I0620 19:18:01.471830 2352 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:18:01.472780 kubelet[2352]: I0620 19:18:01.472745 2352 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:18:01.472979 kubelet[2352]: I0620 19:18:01.472929 2352 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:18:01.473144 kubelet[2352]: I0620 19:18:01.472967 2352 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jun 20 19:18:01.473250 kubelet[2352]: I0620 19:18:01.473146 2352 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:18:01.473250 kubelet[2352]: I0620 19:18:01.473155 2352 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:18:01.473303 kubelet[2352]: I0620 19:18:01.473277 2352 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:18:01.476610 kubelet[2352]: I0620 19:18:01.476542 2352 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:18:01.476610 kubelet[2352]: I0620 19:18:01.476606 2352 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:18:01.476799 kubelet[2352]: I0620 19:18:01.476655 2352 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:18:01.476799 kubelet[2352]: I0620 19:18:01.476683 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:18:01.480843 kubelet[2352]: I0620 19:18:01.480819 2352 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:18:01.481294 kubelet[2352]: I0620 19:18:01.481238 2352 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:18:01.481901 kubelet[2352]: W0620 19:18:01.481873 2352 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 20 19:18:01.483694 kubelet[2352]: W0620 19:18:01.483626 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:01.483744 kubelet[2352]: E0620 19:18:01.483705 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:01.484191 kubelet[2352]: W0620 19:18:01.484120 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:01.484242 kubelet[2352]: E0620 19:18:01.484203 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:01.496299 kubelet[2352]: I0620 19:18:01.496244 2352 server.go:1274] "Started kubelet" Jun 20 19:18:01.496663 kubelet[2352]: I0620 19:18:01.496585 2352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:18:01.497134 kubelet[2352]: I0620 19:18:01.497103 2352 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:18:01.497227 kubelet[2352]: I0620 19:18:01.497188 2352 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 
Jun 20 19:18:01.497658 kubelet[2352]: I0620 19:18:01.497629 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:18:01.498529 kubelet[2352]: I0620 19:18:01.498483 2352 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:18:01.526074 kubelet[2352]: I0620 19:18:01.526007 2352 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:18:01.528206 kubelet[2352]: I0620 19:18:01.528162 2352 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:18:01.528464 kubelet[2352]: E0620 19:18:01.528432 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:01.529079 kubelet[2352]: I0620 19:18:01.528961 2352 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:18:01.529079 kubelet[2352]: I0620 19:18:01.529050 2352 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:18:01.529413 kubelet[2352]: W0620 19:18:01.529358 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:01.529494 kubelet[2352]: E0620 19:18:01.529408 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:01.529494 kubelet[2352]: E0620 19:18:01.529459 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: 
connection refused" interval="200ms" Jun 20 19:18:01.531602 kubelet[2352]: I0620 19:18:01.531099 2352 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:18:01.531602 kubelet[2352]: I0620 19:18:01.531183 2352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:18:01.532655 kubelet[2352]: I0620 19:18:01.532620 2352 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:18:01.565587 kubelet[2352]: I0620 19:18:01.565480 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:18:01.567340 kubelet[2352]: I0620 19:18:01.566979 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:18:01.567340 kubelet[2352]: I0620 19:18:01.567003 2352 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:18:01.567340 kubelet[2352]: I0620 19:18:01.567029 2352 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:18:01.567340 kubelet[2352]: E0620 19:18:01.567080 2352 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:18:01.571829 kubelet[2352]: W0620 19:18:01.570789 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:01.572077 kubelet[2352]: E0620 19:18:01.571851 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection 
refused" logger="UnhandledError" Jun 20 19:18:01.572835 kubelet[2352]: I0620 19:18:01.572763 2352 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:18:01.572835 kubelet[2352]: I0620 19:18:01.572786 2352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:18:01.572835 kubelet[2352]: I0620 19:18:01.572805 2352 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:18:01.585882 kubelet[2352]: E0620 19:18:01.581300 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad658aab376b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:18:01.496213172 +0000 UTC m=+0.920263363,LastTimestamp:2025-06-20 19:18:01.496213172 +0000 UTC m=+0.920263363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:18:01.629498 kubelet[2352]: E0620 19:18:01.629404 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:01.668034 kubelet[2352]: E0620 19:18:01.667924 2352 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:18:01.730391 kubelet[2352]: E0620 19:18:01.730304 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:01.730954 kubelet[2352]: E0620 19:18:01.730881 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Jun 20 19:18:01.831410 kubelet[2352]: E0620 19:18:01.831187 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:01.868607 kubelet[2352]: E0620 19:18:01.868509 2352 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:18:01.932129 kubelet[2352]: E0620 19:18:01.932071 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.033008 kubelet[2352]: E0620 19:18:02.032961 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.132259 kubelet[2352]: E0620 19:18:02.132078 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Jun 20 19:18:02.133113 kubelet[2352]: E0620 19:18:02.133044 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.233711 kubelet[2352]: E0620 19:18:02.233627 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.268812 kubelet[2352]: E0620 19:18:02.268748 2352 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:18:02.334409 kubelet[2352]: E0620 19:18:02.334308 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.435272 kubelet[2352]: E0620 19:18:02.435042 2352 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.535667 kubelet[2352]: E0620 19:18:02.535585 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.633841 kubelet[2352]: W0620 19:18:02.633750 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:02.633841 kubelet[2352]: E0620 19:18:02.633839 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:02.636131 kubelet[2352]: E0620 19:18:02.636099 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.691861 kubelet[2352]: W0620 19:18:02.691756 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:02.691861 kubelet[2352]: E0620 19:18:02.691811 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:02.736424 kubelet[2352]: E0620 19:18:02.736362 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jun 20 19:18:02.748302 kubelet[2352]: E0620 19:18:02.748171 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad658aab376b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:18:01.496213172 +0000 UTC m=+0.920263363,LastTimestamp:2025-06-20 19:18:01.496213172 +0000 UTC m=+0.920263363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:18:02.795649 kubelet[2352]: W0620 19:18:02.795572 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:02.795649 kubelet[2352]: E0620 19:18:02.795631 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:02.837227 kubelet[2352]: E0620 19:18:02.837154 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:02.933037 kubelet[2352]: E0620 19:18:02.932952 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Jun 20 19:18:02.938176 kubelet[2352]: E0620 19:18:02.938116 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.038993 kubelet[2352]: E0620 19:18:03.038937 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.069182 kubelet[2352]: E0620 19:18:03.069130 2352 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:18:03.105892 kubelet[2352]: W0620 19:18:03.105827 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:03.105965 kubelet[2352]: E0620 19:18:03.105894 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:03.139499 kubelet[2352]: E0620 19:18:03.139430 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.239962 kubelet[2352]: E0620 19:18:03.239904 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.340584 kubelet[2352]: E0620 19:18:03.340390 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.441044 kubelet[2352]: E0620 19:18:03.440970 2352 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.520344 kubelet[2352]: E0620 19:18:03.520244 2352 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:03.541938 kubelet[2352]: E0620 19:18:03.541876 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.642797 kubelet[2352]: E0620 19:18:03.642662 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.743225 kubelet[2352]: E0620 19:18:03.743183 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.843683 kubelet[2352]: E0620 19:18:03.843638 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:03.944365 kubelet[2352]: E0620 19:18:03.944193 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.045143 kubelet[2352]: E0620 19:18:04.045056 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.145609 kubelet[2352]: E0620 19:18:04.145527 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.246045 kubelet[2352]: E0620 19:18:04.245883 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.346287 kubelet[2352]: E0620 19:18:04.346230 2352 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.446784 kubelet[2352]: E0620 19:18:04.446728 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.528282 kubelet[2352]: W0620 19:18:04.528237 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:04.528282 kubelet[2352]: E0620 19:18:04.528277 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:04.533801 kubelet[2352]: E0620 19:18:04.533750 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="3.2s" Jun 20 19:18:04.546865 kubelet[2352]: E0620 19:18:04.546831 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.647346 kubelet[2352]: E0620 19:18:04.647260 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.669515 kubelet[2352]: E0620 19:18:04.669456 2352 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:18:04.747978 kubelet[2352]: E0620 19:18:04.747919 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 
19:18:04.848583 kubelet[2352]: E0620 19:18:04.848427 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:04.949132 kubelet[2352]: E0620 19:18:04.949041 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:05.025331 kubelet[2352]: I0620 19:18:05.025273 2352 policy_none.go:49] "None policy: Start" Jun 20 19:18:05.026219 kubelet[2352]: I0620 19:18:05.026195 2352 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:18:05.026276 kubelet[2352]: I0620 19:18:05.026232 2352 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:18:05.049737 kubelet[2352]: E0620 19:18:05.049663 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:05.144600 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:18:05.150535 kubelet[2352]: E0620 19:18:05.150478 2352 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:05.170210 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:18:05.195401 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 19:18:05.196941 kubelet[2352]: I0620 19:18:05.196908 2352 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:18:05.197229 kubelet[2352]: I0620 19:18:05.197174 2352 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:18:05.197229 kubelet[2352]: I0620 19:18:05.197194 2352 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:18:05.197458 kubelet[2352]: I0620 19:18:05.197442 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:18:05.198827 kubelet[2352]: E0620 19:18:05.198796 2352 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 20 19:18:05.298836 kubelet[2352]: I0620 19:18:05.298768 2352 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:05.299329 kubelet[2352]: E0620 19:18:05.299271 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Jun 20 19:18:05.453140 kubelet[2352]: W0620 19:18:05.452976 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:05.453140 kubelet[2352]: E0620 19:18:05.453055 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:05.501658 kubelet[2352]: I0620 19:18:05.501622 2352 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:05.502073 kubelet[2352]: E0620 19:18:05.502015 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Jun 20 19:18:05.561149 kubelet[2352]: W0620 19:18:05.561086 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:05.561296 kubelet[2352]: E0620 19:18:05.561151 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:05.904268 kubelet[2352]: I0620 19:18:05.904204 2352 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:05.904823 kubelet[2352]: E0620 19:18:05.904758 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Jun 20 19:18:06.286789 kubelet[2352]: W0620 19:18:06.286660 2352 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Jun 20 19:18:06.286789 kubelet[2352]: E0620 19:18:06.286791 2352 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:06.706649 kubelet[2352]: I0620 19:18:06.706512 2352 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:06.707078 kubelet[2352]: E0620 19:18:06.707014 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Jun 20 19:18:07.674128 update_engine[1519]: I20250620 19:18:07.673967 1519 update_attempter.cc:509] Updating boot flags... Jun 20 19:18:07.729640 kubelet[2352]: E0620 19:18:07.729094 2352 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:18:07.734763 kubelet[2352]: E0620 19:18:07.734724 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="6.4s" Jun 20 19:18:07.904922 systemd[1]: Created slice kubepods-burstable-pod1affedcc26e692d0718e4242a10776f5.slice - libcontainer container kubepods-burstable-pod1affedcc26e692d0718e4242a10776f5.slice. Jun 20 19:18:07.912029 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jun 20 19:18:07.946922 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jun 20 19:18:07.968209 kubelet[2352]: I0620 19:18:07.968147 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1affedcc26e692d0718e4242a10776f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1affedcc26e692d0718e4242a10776f5\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:07.968355 kubelet[2352]: I0620 19:18:07.968216 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1affedcc26e692d0718e4242a10776f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1affedcc26e692d0718e4242a10776f5\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:07.968355 kubelet[2352]: I0620 19:18:07.968244 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:07.968355 kubelet[2352]: I0620 19:18:07.968267 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:07.968478 kubelet[2352]: I0620 19:18:07.968291 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:18:07.968478 kubelet[2352]: I0620 19:18:07.968392 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:07.968478 kubelet[2352]: I0620 19:18:07.968430 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:07.968478 kubelet[2352]: I0620 19:18:07.968457 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:07.968609 kubelet[2352]: I0620 19:18:07.968492 2352 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1affedcc26e692d0718e4242a10776f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1affedcc26e692d0718e4242a10776f5\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:08.243755 kubelet[2352]: E0620 19:18:08.243556 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:08.244128 kubelet[2352]: E0620 19:18:08.244050 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:08.244683 containerd[1580]: time="2025-06-20T19:18:08.244609313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jun 20 19:18:08.245137 containerd[1580]: time="2025-06-20T19:18:08.244641924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1affedcc26e692d0718e4242a10776f5,Namespace:kube-system,Attempt:0,}" Jun 20 19:18:08.250788 kubelet[2352]: E0620 19:18:08.250738 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:08.251079 containerd[1580]: time="2025-06-20T19:18:08.251053435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jun 20 19:18:08.310340 kubelet[2352]: I0620 19:18:08.310273 2352 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:08.310890 kubelet[2352]: E0620 19:18:08.310828 2352 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Jun 20 19:18:08.314368 containerd[1580]: time="2025-06-20T19:18:08.313953186Z" level=info msg="connecting to shim ec260ebf4acb61a01d5cd3fa1ee35273bb346535e8ac44ac1c2bd7b14568c0b8" address="unix:///run/containerd/s/cd2e2195e093508a0d615f63a375dad10127b052bb8db663314b3d5731ba23b8" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:08.317667 containerd[1580]: 
time="2025-06-20T19:18:08.317605527Z" level=info msg="connecting to shim d7f5a85375ff60fc3cd00f330c2ad3b510d32c3f4610c8f2dda7db34e64ee511" address="unix:///run/containerd/s/75bc200505917414cac12114590df196261da98e0e27ba6505a4b75c4e274e6b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:08.331813 containerd[1580]: time="2025-06-20T19:18:08.331756183Z" level=info msg="connecting to shim 4b02d2d59ecfbb8cbd6d4623e36e666d533c357ae18dd80774445f8b5d2eff36" address="unix:///run/containerd/s/602b0ebe151558b84879ea0f370d49e26d71e68f3ba2f90426c9cda3e51fcc03" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:08.396517 systemd[1]: Started cri-containerd-d7f5a85375ff60fc3cd00f330c2ad3b510d32c3f4610c8f2dda7db34e64ee511.scope - libcontainer container d7f5a85375ff60fc3cd00f330c2ad3b510d32c3f4610c8f2dda7db34e64ee511. Jun 20 19:18:08.398167 systemd[1]: Started cri-containerd-ec260ebf4acb61a01d5cd3fa1ee35273bb346535e8ac44ac1c2bd7b14568c0b8.scope - libcontainer container ec260ebf4acb61a01d5cd3fa1ee35273bb346535e8ac44ac1c2bd7b14568c0b8. Jun 20 19:18:08.402372 systemd[1]: Started cri-containerd-4b02d2d59ecfbb8cbd6d4623e36e666d533c357ae18dd80774445f8b5d2eff36.scope - libcontainer container 4b02d2d59ecfbb8cbd6d4623e36e666d533c357ae18dd80774445f8b5d2eff36. 
Jun 20 19:18:08.531996 containerd[1580]: time="2025-06-20T19:18:08.531938104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1affedcc26e692d0718e4242a10776f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b02d2d59ecfbb8cbd6d4623e36e666d533c357ae18dd80774445f8b5d2eff36\"" Jun 20 19:18:08.533118 kubelet[2352]: E0620 19:18:08.533071 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:08.534726 containerd[1580]: time="2025-06-20T19:18:08.534696641Z" level=info msg="CreateContainer within sandbox \"4b02d2d59ecfbb8cbd6d4623e36e666d533c357ae18dd80774445f8b5d2eff36\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:18:08.543033 containerd[1580]: time="2025-06-20T19:18:08.542954391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec260ebf4acb61a01d5cd3fa1ee35273bb346535e8ac44ac1c2bd7b14568c0b8\"" Jun 20 19:18:08.543623 kubelet[2352]: E0620 19:18:08.543599 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:08.544781 containerd[1580]: time="2025-06-20T19:18:08.544743150Z" level=info msg="CreateContainer within sandbox \"ec260ebf4acb61a01d5cd3fa1ee35273bb346535e8ac44ac1c2bd7b14568c0b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:18:08.554872 containerd[1580]: time="2025-06-20T19:18:08.554829805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7f5a85375ff60fc3cd00f330c2ad3b510d32c3f4610c8f2dda7db34e64ee511\"" Jun 20 
19:18:08.555341 kubelet[2352]: E0620 19:18:08.555295 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:08.556924 containerd[1580]: time="2025-06-20T19:18:08.556896282Z" level=info msg="CreateContainer within sandbox \"d7f5a85375ff60fc3cd00f330c2ad3b510d32c3f4610c8f2dda7db34e64ee511\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:18:08.560127 containerd[1580]: time="2025-06-20T19:18:08.560077921Z" level=info msg="Container 4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:08.598076 containerd[1580]: time="2025-06-20T19:18:08.598000087Z" level=info msg="Container caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:08.608853 containerd[1580]: time="2025-06-20T19:18:08.608813679Z" level=info msg="Container 90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:08.610008 containerd[1580]: time="2025-06-20T19:18:08.609955954Z" level=info msg="CreateContainer within sandbox \"4b02d2d59ecfbb8cbd6d4623e36e666d533c357ae18dd80774445f8b5d2eff36\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357\"" Jun 20 19:18:08.610549 containerd[1580]: time="2025-06-20T19:18:08.610523199Z" level=info msg="StartContainer for \"4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357\"" Jun 20 19:18:08.611983 containerd[1580]: time="2025-06-20T19:18:08.611942208Z" level=info msg="connecting to shim 4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357" address="unix:///run/containerd/s/602b0ebe151558b84879ea0f370d49e26d71e68f3ba2f90426c9cda3e51fcc03" protocol=ttrpc version=3 Jun 20 19:18:08.630323 
containerd[1580]: time="2025-06-20T19:18:08.630236237Z" level=info msg="CreateContainer within sandbox \"ec260ebf4acb61a01d5cd3fa1ee35273bb346535e8ac44ac1c2bd7b14568c0b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291\"" Jun 20 19:18:08.630746 containerd[1580]: time="2025-06-20T19:18:08.630694325Z" level=info msg="StartContainer for \"caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291\"" Jun 20 19:18:08.632088 containerd[1580]: time="2025-06-20T19:18:08.632056877Z" level=info msg="connecting to shim caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291" address="unix:///run/containerd/s/cd2e2195e093508a0d615f63a375dad10127b052bb8db663314b3d5731ba23b8" protocol=ttrpc version=3 Jun 20 19:18:08.632858 containerd[1580]: time="2025-06-20T19:18:08.632757044Z" level=info msg="CreateContainer within sandbox \"d7f5a85375ff60fc3cd00f330c2ad3b510d32c3f4610c8f2dda7db34e64ee511\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d\"" Jun 20 19:18:08.633241 containerd[1580]: time="2025-06-20T19:18:08.633195614Z" level=info msg="StartContainer for \"90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d\"" Jun 20 19:18:08.634555 containerd[1580]: time="2025-06-20T19:18:08.634507561Z" level=info msg="connecting to shim 90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d" address="unix:///run/containerd/s/75bc200505917414cac12114590df196261da98e0e27ba6505a4b75c4e274e6b" protocol=ttrpc version=3 Jun 20 19:18:08.637611 systemd[1]: Started cri-containerd-4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357.scope - libcontainer container 4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357. 
Jun 20 19:18:08.680609 systemd[1]: Started cri-containerd-90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d.scope - libcontainer container 90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d. Jun 20 19:18:08.682146 systemd[1]: Started cri-containerd-caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291.scope - libcontainer container caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291. Jun 20 19:18:09.009613 containerd[1580]: time="2025-06-20T19:18:09.009484352Z" level=info msg="StartContainer for \"90c55894f97589543eb4b93c0cf226393fd540f1adf6489862d57f2d86e4352d\" returns successfully" Jun 20 19:18:09.009727 containerd[1580]: time="2025-06-20T19:18:09.009642371Z" level=info msg="StartContainer for \"4b9b465cad54e5348a621cdd9aedaded9c88a630875ecab834d85638eca8c357\" returns successfully" Jun 20 19:18:09.009727 containerd[1580]: time="2025-06-20T19:18:09.009697035Z" level=info msg="StartContainer for \"caeab3b8ce300901520390d056d6a3de82e0e6a8f38117e2c397587850268291\" returns successfully" Jun 20 19:18:09.595603 kubelet[2352]: E0620 19:18:09.595525 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:09.599012 kubelet[2352]: E0620 19:18:09.598855 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:09.600836 kubelet[2352]: E0620 19:18:09.600802 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:10.524996 kubelet[2352]: I0620 19:18:10.524916 2352 apiserver.go:52] "Watching apiserver" Jun 20 19:18:10.529194 kubelet[2352]: I0620 19:18:10.529155 2352 desired_state_of_world_populator.go:155] "Finished populating 
initial desired state of world" Jun 20 19:18:10.601983 kubelet[2352]: E0620 19:18:10.601928 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:10.602462 kubelet[2352]: E0620 19:18:10.602004 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:10.602462 kubelet[2352]: E0620 19:18:10.602167 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:10.985265 kubelet[2352]: E0620 19:18:10.985114 2352 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 20 19:18:11.512533 kubelet[2352]: I0620 19:18:11.512476 2352 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:11.602360 kubelet[2352]: E0620 19:18:11.602305 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:11.663421 kubelet[2352]: I0620 19:18:11.663361 2352 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 20 19:18:11.663421 kubelet[2352]: E0620 19:18:11.663410 2352 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:18:13.615607 kubelet[2352]: E0620 19:18:13.615558 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:14.386360 kubelet[2352]: E0620 19:18:14.386289 2352 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:14.606901 kubelet[2352]: E0620 19:18:14.606864 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:14.607351 kubelet[2352]: E0620 19:18:14.607334 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:17.314230 systemd[1]: Reload requested from client PID 2647 ('systemctl') (unit session-7.scope)... Jun 20 19:18:17.314251 systemd[1]: Reloading... Jun 20 19:18:17.414387 zram_generator::config[2690]: No configuration found. Jun 20 19:18:17.537597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:18:17.638612 kubelet[2352]: E0620 19:18:17.637280 2352 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:17.697621 systemd[1]: Reloading finished in 382 ms. Jun 20 19:18:17.725594 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:18:17.739876 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:18:17.740202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:18:17.740273 systemd[1]: kubelet.service: Consumed 1.642s CPU time, 134.6M memory peak. Jun 20 19:18:17.743341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:18:17.958045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:18:17.964177 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:18:18.014274 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:18:18.014274 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:18:18.014274 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:18:18.014715 kubelet[2735]: I0620 19:18:18.014370 2735 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:18:18.020615 kubelet[2735]: I0620 19:18:18.020578 2735 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:18:18.020615 kubelet[2735]: I0620 19:18:18.020599 2735 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:18:18.020846 kubelet[2735]: I0620 19:18:18.020827 2735 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:18:18.022045 kubelet[2735]: I0620 19:18:18.022022 2735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:18:18.023839 kubelet[2735]: I0620 19:18:18.023815 2735 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:18:18.030507 kubelet[2735]: I0620 19:18:18.030467 2735 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:18:18.036328 kubelet[2735]: I0620 19:18:18.036266 2735 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:18:18.036507 kubelet[2735]: I0620 19:18:18.036407 2735 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:18:18.036580 kubelet[2735]: I0620 19:18:18.036535 2735 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:18:18.036755 kubelet[2735]: I0620 19:18:18.036567 2735 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:18:18.036755 kubelet[2735]: I0620 19:18:18.036756 2735 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:18:18.036911 kubelet[2735]: I0620 19:18:18.036766 2735 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:18:18.036911 kubelet[2735]: I0620 19:18:18.036792 2735 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:18:18.036911 kubelet[2735]: I0620 19:18:18.036901 2735 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:18:18.036911 kubelet[2735]: I0620 19:18:18.036913 2735 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:18:18.037005 kubelet[2735]: I0620 19:18:18.036946 2735 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:18:18.037005 kubelet[2735]: I0620 19:18:18.036961 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:18:18.038413 kubelet[2735]: I0620 19:18:18.038364 2735 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:18:18.038797 kubelet[2735]: I0620 19:18:18.038712 2735 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:18:18.039128 kubelet[2735]: I0620 19:18:18.039093 2735 server.go:1274] "Started kubelet" Jun 20 19:18:18.039654 kubelet[2735]: I0620 19:18:18.039557 2735 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 
19:18:18.040475 kubelet[2735]: I0620 19:18:18.040387 2735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:18:18.040724 kubelet[2735]: I0620 19:18:18.040700 2735 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:18:18.041981 kubelet[2735]: I0620 19:18:18.041951 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:18:18.042424 kubelet[2735]: I0620 19:18:18.042350 2735 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:18:18.044362 kubelet[2735]: I0620 19:18:18.043961 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:18:18.046860 kubelet[2735]: E0620 19:18:18.046837 2735 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:18:18.046944 kubelet[2735]: I0620 19:18:18.046918 2735 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:18:18.047270 kubelet[2735]: I0620 19:18:18.047249 2735 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:18:18.047488 kubelet[2735]: I0620 19:18:18.047463 2735 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:18:18.049621 kubelet[2735]: I0620 19:18:18.049586 2735 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:18:18.049758 kubelet[2735]: I0620 19:18:18.049725 2735 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:18:18.051657 kubelet[2735]: I0620 19:18:18.051607 2735 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:18:18.065511 kubelet[2735]: I0620 19:18:18.065409 2735 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Jun 20 19:18:18.067540 kubelet[2735]: I0620 19:18:18.067496 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:18:18.067540 kubelet[2735]: I0620 19:18:18.067522 2735 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:18:18.067540 kubelet[2735]: I0620 19:18:18.067542 2735 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:18:18.067717 kubelet[2735]: E0620 19:18:18.067619 2735 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:18:18.086139 kubelet[2735]: I0620 19:18:18.086107 2735 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:18:18.086139 kubelet[2735]: I0620 19:18:18.086126 2735 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:18:18.086139 kubelet[2735]: I0620 19:18:18.086144 2735 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:18:18.086374 kubelet[2735]: I0620 19:18:18.086301 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:18:18.086374 kubelet[2735]: I0620 19:18:18.086325 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:18:18.086374 kubelet[2735]: I0620 19:18:18.086352 2735 policy_none.go:49] "None policy: Start" Jun 20 19:18:18.086859 kubelet[2735]: I0620 19:18:18.086845 2735 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:18:18.086897 kubelet[2735]: I0620 19:18:18.086866 2735 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:18:18.087002 kubelet[2735]: I0620 19:18:18.086988 2735 state_mem.go:75] "Updated machine memory state" Jun 20 19:18:18.093367 kubelet[2735]: I0620 19:18:18.093214 2735 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:18:18.093503 kubelet[2735]: I0620 19:18:18.093489 2735 
eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:18:18.093531 kubelet[2735]: I0620 19:18:18.093503 2735 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:18:18.094270 kubelet[2735]: I0620 19:18:18.093856 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:18:18.199563 kubelet[2735]: I0620 19:18:18.199516 2735 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:18:18.271532 kubelet[2735]: E0620 19:18:18.270952 2735 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:18.272065 kubelet[2735]: E0620 19:18:18.271981 2735 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:18:18.272065 kubelet[2735]: E0620 19:18:18.272041 2735 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:18.349050 kubelet[2735]: I0620 19:18:18.348990 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:18.349050 kubelet[2735]: I0620 19:18:18.349056 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 
19:18:18.349254 kubelet[2735]: I0620 19:18:18.349084 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1affedcc26e692d0718e4242a10776f5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1affedcc26e692d0718e4242a10776f5\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:18.349254 kubelet[2735]: I0620 19:18:18.349105 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1affedcc26e692d0718e4242a10776f5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1affedcc26e692d0718e4242a10776f5\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:18.349254 kubelet[2735]: I0620 19:18:18.349125 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1affedcc26e692d0718e4242a10776f5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1affedcc26e692d0718e4242a10776f5\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:18.349254 kubelet[2735]: I0620 19:18:18.349147 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:18.349254 kubelet[2735]: I0620 19:18:18.349170 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:18.349523 
kubelet[2735]: I0620 19:18:18.349192 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:18.349523 kubelet[2735]: I0620 19:18:18.349210 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:18:18.472818 kubelet[2735]: I0620 19:18:18.472707 2735 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jun 20 19:18:18.472999 kubelet[2735]: I0620 19:18:18.472886 2735 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 20 19:18:18.573186 kubelet[2735]: E0620 19:18:18.573024 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:18.573186 kubelet[2735]: E0620 19:18:18.573085 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:18.573186 kubelet[2735]: E0620 19:18:18.573024 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:19.038002 kubelet[2735]: I0620 19:18:19.037969 2735 apiserver.go:52] "Watching apiserver" Jun 20 19:18:19.047760 kubelet[2735]: I0620 19:18:19.047735 2735 desired_state_of_world_populator.go:155] "Finished populating initial desired 
state of world" Jun 20 19:18:19.081786 kubelet[2735]: E0620 19:18:19.081744 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:19.429268 kubelet[2735]: E0620 19:18:19.428195 2735 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:18:19.429268 kubelet[2735]: E0620 19:18:19.428761 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:19.429687 kubelet[2735]: E0620 19:18:19.429305 2735 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:18:19.429687 kubelet[2735]: E0620 19:18:19.429524 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:19.637912 kubelet[2735]: I0620 19:18:19.637716 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.637692725 podStartE2EDuration="5.637692725s" podCreationTimestamp="2025-06-20 19:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:18:19.428521248 +0000 UTC m=+1.458803278" watchObservedRunningTime="2025-06-20 19:18:19.637692725 +0000 UTC m=+1.667974755" Jun 20 19:18:19.637912 kubelet[2735]: I0620 19:18:19.637890 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.63788102 podStartE2EDuration="2.63788102s" 
podCreationTimestamp="2025-06-20 19:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:18:19.637533144 +0000 UTC m=+1.667815174" watchObservedRunningTime="2025-06-20 19:18:19.63788102 +0000 UTC m=+1.668163060" Jun 20 19:18:19.911362 kubelet[2735]: I0620 19:18:19.911141 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.9111213320000005 podStartE2EDuration="6.911121332s" podCreationTimestamp="2025-06-20 19:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:18:19.711795435 +0000 UTC m=+1.742077485" watchObservedRunningTime="2025-06-20 19:18:19.911121332 +0000 UTC m=+1.941403362" Jun 20 19:18:20.082801 kubelet[2735]: E0620 19:18:20.082764 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:20.083230 kubelet[2735]: E0620 19:18:20.082843 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:21.595410 kubelet[2735]: I0620 19:18:21.595362 2735 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:18:21.595961 containerd[1580]: time="2025-06-20T19:18:21.595848032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 20 19:18:21.596230 kubelet[2735]: I0620 19:18:21.596151 2735 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:18:22.389204 kubelet[2735]: E0620 19:18:22.389148 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:22.970988 systemd[1]: Created slice kubepods-besteffort-pod0933eab1_bfc0_4f76_9dc0_9dc8eb58ca84.slice - libcontainer container kubepods-besteffort-pod0933eab1_bfc0_4f76_9dc0_9dc8eb58ca84.slice. Jun 20 19:18:22.974775 kubelet[2735]: I0620 19:18:22.974743 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84-kube-proxy\") pod \"kube-proxy-nq5qd\" (UID: \"0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84\") " pod="kube-system/kube-proxy-nq5qd" Jun 20 19:18:22.975065 kubelet[2735]: I0620 19:18:22.974784 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84-xtables-lock\") pod \"kube-proxy-nq5qd\" (UID: \"0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84\") " pod="kube-system/kube-proxy-nq5qd" Jun 20 19:18:22.975065 kubelet[2735]: I0620 19:18:22.974811 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84-lib-modules\") pod \"kube-proxy-nq5qd\" (UID: \"0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84\") " pod="kube-system/kube-proxy-nq5qd" Jun 20 19:18:22.975065 kubelet[2735]: I0620 19:18:22.974833 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqsq\" (UniqueName: 
\"kubernetes.io/projected/0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84-kube-api-access-qnqsq\") pod \"kube-proxy-nq5qd\" (UID: \"0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84\") " pod="kube-system/kube-proxy-nq5qd" Jun 20 19:18:23.087818 kubelet[2735]: E0620 19:18:23.087774 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:23.099639 kubelet[2735]: W0620 19:18:23.099555 2735 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jun 20 19:18:23.100466 kubelet[2735]: W0620 19:18:23.099555 2735 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jun 20 19:18:23.100466 kubelet[2735]: E0620 19:18:23.100417 2735 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jun 20 19:18:23.100466 kubelet[2735]: E0620 19:18:23.100309 2735 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User 
\"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jun 20 19:18:23.105185 systemd[1]: Created slice kubepods-besteffort-podcb35ad4b_fe6b_4461_ae00_a03386189e4e.slice - libcontainer container kubepods-besteffort-podcb35ad4b_fe6b_4461_ae00_a03386189e4e.slice. Jun 20 19:18:23.175781 kubelet[2735]: I0620 19:18:23.175731 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb35ad4b-fe6b-4461-ae00-a03386189e4e-var-lib-calico\") pod \"tigera-operator-6c78c649f6-hwgdh\" (UID: \"cb35ad4b-fe6b-4461-ae00-a03386189e4e\") " pod="tigera-operator/tigera-operator-6c78c649f6-hwgdh" Jun 20 19:18:23.175781 kubelet[2735]: I0620 19:18:23.175771 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4rw\" (UniqueName: \"kubernetes.io/projected/cb35ad4b-fe6b-4461-ae00-a03386189e4e-kube-api-access-wm4rw\") pod \"tigera-operator-6c78c649f6-hwgdh\" (UID: \"cb35ad4b-fe6b-4461-ae00-a03386189e4e\") " pod="tigera-operator/tigera-operator-6c78c649f6-hwgdh" Jun 20 19:18:23.286563 kubelet[2735]: E0620 19:18:23.286486 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:23.287355 containerd[1580]: time="2025-06-20T19:18:23.287280890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nq5qd,Uid:0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84,Namespace:kube-system,Attempt:0,}" Jun 20 19:18:23.319356 containerd[1580]: time="2025-06-20T19:18:23.319276300Z" level=info msg="connecting to shim ae62477ea06837e6984ca5be36eb280939c54309f5ee25971a772f1a70631ed0" 
address="unix:///run/containerd/s/5aa8ba8caf4cf5e006a3384c6b4c85f83dcc521bdd1827c87668e4579e7fd61e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:23.352529 systemd[1]: Started cri-containerd-ae62477ea06837e6984ca5be36eb280939c54309f5ee25971a772f1a70631ed0.scope - libcontainer container ae62477ea06837e6984ca5be36eb280939c54309f5ee25971a772f1a70631ed0. Jun 20 19:18:23.429163 containerd[1580]: time="2025-06-20T19:18:23.429111406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nq5qd,Uid:0933eab1-bfc0-4f76-9dc0-9dc8eb58ca84,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae62477ea06837e6984ca5be36eb280939c54309f5ee25971a772f1a70631ed0\"" Jun 20 19:18:23.430103 kubelet[2735]: E0620 19:18:23.430059 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:23.432583 containerd[1580]: time="2025-06-20T19:18:23.432518181Z" level=info msg="CreateContainer within sandbox \"ae62477ea06837e6984ca5be36eb280939c54309f5ee25971a772f1a70631ed0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:18:23.786979 containerd[1580]: time="2025-06-20T19:18:23.786921989Z" level=info msg="Container ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:23.940994 containerd[1580]: time="2025-06-20T19:18:23.940927482Z" level=info msg="CreateContainer within sandbox \"ae62477ea06837e6984ca5be36eb280939c54309f5ee25971a772f1a70631ed0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1\"" Jun 20 19:18:23.941779 containerd[1580]: time="2025-06-20T19:18:23.941727599Z" level=info msg="StartContainer for \"ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1\"" Jun 20 19:18:23.943306 containerd[1580]: time="2025-06-20T19:18:23.943267258Z" 
level=info msg="connecting to shim ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1" address="unix:///run/containerd/s/5aa8ba8caf4cf5e006a3384c6b4c85f83dcc521bdd1827c87668e4579e7fd61e" protocol=ttrpc version=3 Jun 20 19:18:23.974464 systemd[1]: Started cri-containerd-ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1.scope - libcontainer container ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1. Jun 20 19:18:24.043172 containerd[1580]: time="2025-06-20T19:18:24.043006775Z" level=info msg="StartContainer for \"ab8e208b9c3ef92a885ed58e71204f0ddc46f98fe4f40a732857913e953513b1\" returns successfully" Jun 20 19:18:24.091403 kubelet[2735]: E0620 19:18:24.091353 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:24.167036 kubelet[2735]: I0620 19:18:24.166955 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nq5qd" podStartSLOduration=2.166929713 podStartE2EDuration="2.166929713s" podCreationTimestamp="2025-06-20 19:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:18:24.166918943 +0000 UTC m=+6.197200973" watchObservedRunningTime="2025-06-20 19:18:24.166929713 +0000 UTC m=+6.197211744" Jun 20 19:18:24.283279 kubelet[2735]: E0620 19:18:24.283194 2735 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 20 19:18:24.283279 kubelet[2735]: E0620 19:18:24.283260 2735 projected.go:194] Error preparing data for projected volume kube-api-access-wm4rw for pod tigera-operator/tigera-operator-6c78c649f6-hwgdh: failed to sync configmap cache: timed out waiting for the condition Jun 20 19:18:24.283484 kubelet[2735]: E0620 19:18:24.283377 2735 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb35ad4b-fe6b-4461-ae00-a03386189e4e-kube-api-access-wm4rw podName:cb35ad4b-fe6b-4461-ae00-a03386189e4e nodeName:}" failed. No retries permitted until 2025-06-20 19:18:24.783350117 +0000 UTC m=+6.813632238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wm4rw" (UniqueName: "kubernetes.io/projected/cb35ad4b-fe6b-4461-ae00-a03386189e4e-kube-api-access-wm4rw") pod "tigera-operator-6c78c649f6-hwgdh" (UID: "cb35ad4b-fe6b-4461-ae00-a03386189e4e") : failed to sync configmap cache: timed out waiting for the condition Jun 20 19:18:24.908493 containerd[1580]: time="2025-06-20T19:18:24.908436431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-hwgdh,Uid:cb35ad4b-fe6b-4461-ae00-a03386189e4e,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:18:25.317989 containerd[1580]: time="2025-06-20T19:18:25.317927256Z" level=info msg="connecting to shim 1bef68057a36890750f72bbe3ce0e69d2a9195a7cf4d04df4234c1bd386895fd" address="unix:///run/containerd/s/1734e0118ef37586268d043314b6b86ec1487fb2c9a93d7a45a41eee358967f3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:25.354719 systemd[1]: Started cri-containerd-1bef68057a36890750f72bbe3ce0e69d2a9195a7cf4d04df4234c1bd386895fd.scope - libcontainer container 1bef68057a36890750f72bbe3ce0e69d2a9195a7cf4d04df4234c1bd386895fd. 
Jun 20 19:18:25.420714 containerd[1580]: time="2025-06-20T19:18:25.420641082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-hwgdh,Uid:cb35ad4b-fe6b-4461-ae00-a03386189e4e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1bef68057a36890750f72bbe3ce0e69d2a9195a7cf4d04df4234c1bd386895fd\"" Jun 20 19:18:25.422654 containerd[1580]: time="2025-06-20T19:18:25.422498868Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:18:26.621421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243494438.mount: Deactivated successfully. Jun 20 19:18:26.993442 kubelet[2735]: E0620 19:18:26.993180 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:27.097202 kubelet[2735]: E0620 19:18:27.097107 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:27.473361 containerd[1580]: time="2025-06-20T19:18:27.473288558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:27.474185 containerd[1580]: time="2025-06-20T19:18:27.474131314Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:18:27.475378 containerd[1580]: time="2025-06-20T19:18:27.475343845Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:27.477412 containerd[1580]: time="2025-06-20T19:18:27.477369526Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 20 19:18:27.478082 containerd[1580]: time="2025-06-20T19:18:27.478048924Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 2.055518747s" Jun 20 19:18:27.478110 containerd[1580]: time="2025-06-20T19:18:27.478084661Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:18:27.480237 containerd[1580]: time="2025-06-20T19:18:27.480200373Z" level=info msg="CreateContainer within sandbox \"1bef68057a36890750f72bbe3ce0e69d2a9195a7cf4d04df4234c1bd386895fd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:18:27.489178 containerd[1580]: time="2025-06-20T19:18:27.489125464Z" level=info msg="Container 3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:27.495398 containerd[1580]: time="2025-06-20T19:18:27.495361607Z" level=info msg="CreateContainer within sandbox \"1bef68057a36890750f72bbe3ce0e69d2a9195a7cf4d04df4234c1bd386895fd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa\"" Jun 20 19:18:27.496053 containerd[1580]: time="2025-06-20T19:18:27.495799851Z" level=info msg="StartContainer for \"3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa\"" Jun 20 19:18:27.496848 containerd[1580]: time="2025-06-20T19:18:27.496823076Z" level=info msg="connecting to shim 3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa" address="unix:///run/containerd/s/1734e0118ef37586268d043314b6b86ec1487fb2c9a93d7a45a41eee358967f3" protocol=ttrpc version=3 
Jun 20 19:18:27.546659 systemd[1]: Started cri-containerd-3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa.scope - libcontainer container 3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa. Jun 20 19:18:27.600600 containerd[1580]: time="2025-06-20T19:18:27.600539468Z" level=info msg="StartContainer for \"3f733a437b9525be80bb551dd4348f4c54d23af2208ba68c2ac63a6e943724fa\" returns successfully" Jun 20 19:18:28.745862 kubelet[2735]: E0620 19:18:28.745784 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:28.952588 kubelet[2735]: I0620 19:18:28.952518 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6c78c649f6-hwgdh" podStartSLOduration=4.895752715 podStartE2EDuration="6.952495646s" podCreationTimestamp="2025-06-20 19:18:22 +0000 UTC" firstStartedPulling="2025-06-20 19:18:25.422058199 +0000 UTC m=+7.452340229" lastFinishedPulling="2025-06-20 19:18:27.47880112 +0000 UTC m=+9.509083160" observedRunningTime="2025-06-20 19:18:28.111566383 +0000 UTC m=+10.141848413" watchObservedRunningTime="2025-06-20 19:18:28.952495646 +0000 UTC m=+10.982777686" Jun 20 19:18:29.100693 kubelet[2735]: E0620 19:18:29.100665 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:33.614776 sudo[1766]: pam_unix(sudo:session): session closed for user root Jun 20 19:18:33.616465 sshd[1765]: Connection closed by 10.0.0.1 port 55000 Jun 20 19:18:33.621292 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jun 20 19:18:33.629040 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:55000.service: Deactivated successfully. Jun 20 19:18:33.631575 systemd[1]: session-7.scope: Deactivated successfully. 
Jun 20 19:18:33.631846 systemd[1]: session-7.scope: Consumed 4.998s CPU time, 224M memory peak. Jun 20 19:18:33.633340 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:18:33.635056 systemd-logind[1515]: Removed session 7. Jun 20 19:18:39.192955 systemd[1]: Created slice kubepods-besteffort-pod9c4278ac_5d29_4917_b9cd_88aef0414acb.slice - libcontainer container kubepods-besteffort-pod9c4278ac_5d29_4917_b9cd_88aef0414acb.slice. Jun 20 19:18:39.266807 systemd[1]: Created slice kubepods-besteffort-podf5697ac3_7228_4536_8ff2_b01eb02a15ba.slice - libcontainer container kubepods-besteffort-podf5697ac3_7228_4536_8ff2_b01eb02a15ba.slice. Jun 20 19:18:39.274356 kubelet[2735]: I0620 19:18:39.273837 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c4278ac-5d29-4917-b9cd-88aef0414acb-tigera-ca-bundle\") pod \"calico-typha-7ddfcd6794-xgckf\" (UID: \"9c4278ac-5d29-4917-b9cd-88aef0414acb\") " pod="calico-system/calico-typha-7ddfcd6794-xgckf" Jun 20 19:18:39.274356 kubelet[2735]: I0620 19:18:39.273889 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n76q\" (UniqueName: \"kubernetes.io/projected/9c4278ac-5d29-4917-b9cd-88aef0414acb-kube-api-access-2n76q\") pod \"calico-typha-7ddfcd6794-xgckf\" (UID: \"9c4278ac-5d29-4917-b9cd-88aef0414acb\") " pod="calico-system/calico-typha-7ddfcd6794-xgckf" Jun 20 19:18:39.274356 kubelet[2735]: I0620 19:18:39.273924 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c4278ac-5d29-4917-b9cd-88aef0414acb-typha-certs\") pod \"calico-typha-7ddfcd6794-xgckf\" (UID: \"9c4278ac-5d29-4917-b9cd-88aef0414acb\") " pod="calico-system/calico-typha-7ddfcd6794-xgckf" Jun 20 19:18:39.374808 kubelet[2735]: I0620 19:18:39.374640 2735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5697ac3-7228-4536-8ff2-b01eb02a15ba-node-certs\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375420 kubelet[2735]: I0620 19:18:39.375087 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-var-run-calico\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375420 kubelet[2735]: I0620 19:18:39.375109 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klh9w\" (UniqueName: \"kubernetes.io/projected/f5697ac3-7228-4536-8ff2-b01eb02a15ba-kube-api-access-klh9w\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375420 kubelet[2735]: I0620 19:18:39.375165 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-var-lib-calico\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375420 kubelet[2735]: I0620 19:18:39.375182 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-cni-bin-dir\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375420 kubelet[2735]: I0620 19:18:39.375199 2735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-cni-log-dir\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375579 kubelet[2735]: I0620 19:18:39.375213 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-cni-net-dir\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375579 kubelet[2735]: I0620 19:18:39.375228 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-policysync\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375579 kubelet[2735]: I0620 19:18:39.375241 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5697ac3-7228-4536-8ff2-b01eb02a15ba-tigera-ca-bundle\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375579 kubelet[2735]: I0620 19:18:39.375255 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-flexvol-driver-host\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.375579 kubelet[2735]: I0620 19:18:39.375270 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-xtables-lock\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.377089 kubelet[2735]: I0620 19:18:39.375284 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5697ac3-7228-4536-8ff2-b01eb02a15ba-lib-modules\") pod \"calico-node-57qld\" (UID: \"f5697ac3-7228-4536-8ff2-b01eb02a15ba\") " pod="calico-system/calico-node-57qld" Jun 20 19:18:39.403926 kubelet[2735]: E0620 19:18:39.403605 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:39.483853 kubelet[2735]: E0620 19:18:39.483746 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.484556 kubelet[2735]: W0620 19:18:39.484487 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.484556 kubelet[2735]: E0620 19:18:39.484526 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.489147 kubelet[2735]: E0620 19:18:39.489066 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.489147 kubelet[2735]: W0620 19:18:39.489083 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.489147 kubelet[2735]: E0620 19:18:39.489098 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.490521 kubelet[2735]: E0620 19:18:39.490489 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.490521 kubelet[2735]: W0620 19:18:39.490504 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.490673 kubelet[2735]: E0620 19:18:39.490622 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.491029 kubelet[2735]: E0620 19:18:39.490999 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.491029 kubelet[2735]: W0620 19:18:39.491012 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.491149 kubelet[2735]: E0620 19:18:39.491134 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.491463 kubelet[2735]: E0620 19:18:39.491431 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.491463 kubelet[2735]: W0620 19:18:39.491446 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.491636 kubelet[2735]: E0620 19:18:39.491571 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.491883 kubelet[2735]: E0620 19:18:39.491869 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.491971 kubelet[2735]: W0620 19:18:39.491957 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.492037 kubelet[2735]: E0620 19:18:39.492023 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.492302 kubelet[2735]: E0620 19:18:39.492288 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.492492 kubelet[2735]: W0620 19:18:39.492380 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.492492 kubelet[2735]: E0620 19:18:39.492393 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.492793 kubelet[2735]: E0620 19:18:39.492727 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.492793 kubelet[2735]: W0620 19:18:39.492740 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.492793 kubelet[2735]: E0620 19:18:39.492751 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.493182 kubelet[2735]: E0620 19:18:39.493111 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.493182 kubelet[2735]: W0620 19:18:39.493126 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.493182 kubelet[2735]: E0620 19:18:39.493137 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.493569 kubelet[2735]: E0620 19:18:39.493497 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.493569 kubelet[2735]: W0620 19:18:39.493511 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.493569 kubelet[2735]: E0620 19:18:39.493522 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.493948 kubelet[2735]: E0620 19:18:39.493869 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.493948 kubelet[2735]: W0620 19:18:39.493883 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.493948 kubelet[2735]: E0620 19:18:39.493905 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.494342 kubelet[2735]: E0620 19:18:39.494259 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.494342 kubelet[2735]: W0620 19:18:39.494273 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.494342 kubelet[2735]: E0620 19:18:39.494284 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.494711 kubelet[2735]: E0620 19:18:39.494641 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.494711 kubelet[2735]: W0620 19:18:39.494654 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.494711 kubelet[2735]: E0620 19:18:39.494665 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.495089 kubelet[2735]: E0620 19:18:39.495017 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.495089 kubelet[2735]: W0620 19:18:39.495030 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.495089 kubelet[2735]: E0620 19:18:39.495041 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.495447 kubelet[2735]: E0620 19:18:39.495381 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.495447 kubelet[2735]: W0620 19:18:39.495394 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.495447 kubelet[2735]: E0620 19:18:39.495407 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.495856 kubelet[2735]: E0620 19:18:39.495737 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.495856 kubelet[2735]: W0620 19:18:39.495751 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.495856 kubelet[2735]: E0620 19:18:39.495762 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.496425 kubelet[2735]: E0620 19:18:39.496410 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.496563 kubelet[2735]: W0620 19:18:39.496499 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.496563 kubelet[2735]: E0620 19:18:39.496516 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.496905 kubelet[2735]: E0620 19:18:39.496875 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.497046 kubelet[2735]: W0620 19:18:39.496971 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.497046 kubelet[2735]: E0620 19:18:39.496991 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.497584 kubelet[2735]: E0620 19:18:39.497517 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.497584 kubelet[2735]: W0620 19:18:39.497531 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.497584 kubelet[2735]: E0620 19:18:39.497542 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.497954 kubelet[2735]: E0620 19:18:39.497871 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.497954 kubelet[2735]: W0620 19:18:39.497885 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.497954 kubelet[2735]: E0620 19:18:39.497909 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.498293 kubelet[2735]: E0620 19:18:39.498228 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.498293 kubelet[2735]: W0620 19:18:39.498242 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.498462 kubelet[2735]: E0620 19:18:39.498253 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.498689 kubelet[2735]: E0620 19:18:39.498675 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.498789 kubelet[2735]: W0620 19:18:39.498756 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.498789 kubelet[2735]: E0620 19:18:39.498771 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.499053 kubelet[2735]: E0620 19:18:39.499033 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:39.501001 containerd[1580]: time="2025-06-20T19:18:39.500946118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ddfcd6794-xgckf,Uid:9c4278ac-5d29-4917-b9cd-88aef0414acb,Namespace:calico-system,Attempt:0,}" Jun 20 19:18:39.553212 containerd[1580]: time="2025-06-20T19:18:39.553151928Z" level=info msg="connecting to shim 3eeeb9566d686fa9b59423460e57b8cb42c3e6e496c01f4356ec4f209ef302a3" address="unix:///run/containerd/s/b6bc143686205376ca613f5913b827f36f621264c4da76bce2ca8e794ef6c50f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:39.571104 containerd[1580]: time="2025-06-20T19:18:39.571054105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57qld,Uid:f5697ac3-7228-4536-8ff2-b01eb02a15ba,Namespace:calico-system,Attempt:0,}" Jun 20 19:18:39.577232 kubelet[2735]: E0620 19:18:39.577193 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.577621 kubelet[2735]: W0620 19:18:39.577360 2735 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.577621 kubelet[2735]: E0620 19:18:39.577389 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.577621 kubelet[2735]: I0620 19:18:39.577424 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdqfh\" (UniqueName: \"kubernetes.io/projected/98422ba0-fce0-437e-87ec-c2741bdfac3e-kube-api-access-kdqfh\") pod \"csi-node-driver-6jjgb\" (UID: \"98422ba0-fce0-437e-87ec-c2741bdfac3e\") " pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:18:39.577884 kubelet[2735]: E0620 19:18:39.577849 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.577884 kubelet[2735]: W0620 19:18:39.577865 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.578050 kubelet[2735]: E0620 19:18:39.577998 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.578328 kubelet[2735]: I0620 19:18:39.578265 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/98422ba0-fce0-437e-87ec-c2741bdfac3e-varrun\") pod \"csi-node-driver-6jjgb\" (UID: \"98422ba0-fce0-437e-87ec-c2741bdfac3e\") " pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:18:39.578483 kubelet[2735]: E0620 19:18:39.578467 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.578593 kubelet[2735]: W0620 19:18:39.578576 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.578743 kubelet[2735]: E0620 19:18:39.578676 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.579177 kubelet[2735]: E0620 19:18:39.579077 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.579177 kubelet[2735]: W0620 19:18:39.579134 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.579177 kubelet[2735]: E0620 19:18:39.579153 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.579624 kubelet[2735]: E0620 19:18:39.579589 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.579624 kubelet[2735]: W0620 19:18:39.579605 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.579838 kubelet[2735]: E0620 19:18:39.579743 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.580133 kubelet[2735]: E0620 19:18:39.580102 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.580133 kubelet[2735]: W0620 19:18:39.580116 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.580878 kubelet[2735]: E0620 19:18:39.580293 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.581448 kubelet[2735]: E0620 19:18:39.581429 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.581448 kubelet[2735]: W0620 19:18:39.581444 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.581653 kubelet[2735]: E0620 19:18:39.581456 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.581653 kubelet[2735]: I0620 19:18:39.581484 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422ba0-fce0-437e-87ec-c2741bdfac3e-kubelet-dir\") pod \"csi-node-driver-6jjgb\" (UID: \"98422ba0-fce0-437e-87ec-c2741bdfac3e\") " pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:18:39.582000 kubelet[2735]: E0620 19:18:39.581676 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.582000 kubelet[2735]: W0620 19:18:39.581686 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.582000 kubelet[2735]: E0620 19:18:39.581711 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.582000 kubelet[2735]: I0620 19:18:39.581728 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98422ba0-fce0-437e-87ec-c2741bdfac3e-registration-dir\") pod \"csi-node-driver-6jjgb\" (UID: \"98422ba0-fce0-437e-87ec-c2741bdfac3e\") " pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:18:39.582525 kubelet[2735]: E0620 19:18:39.582210 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.582525 kubelet[2735]: W0620 19:18:39.582228 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.582525 kubelet[2735]: E0620 19:18:39.582391 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.586191 kubelet[2735]: E0620 19:18:39.583810 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.586191 kubelet[2735]: W0620 19:18:39.583822 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.586191 kubelet[2735]: E0620 19:18:39.583843 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.586191 kubelet[2735]: E0620 19:18:39.584107 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.586191 kubelet[2735]: W0620 19:18:39.584117 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.586191 kubelet[2735]: E0620 19:18:39.584134 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.586191 kubelet[2735]: I0620 19:18:39.584155 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98422ba0-fce0-437e-87ec-c2741bdfac3e-socket-dir\") pod \"csi-node-driver-6jjgb\" (UID: \"98422ba0-fce0-437e-87ec-c2741bdfac3e\") " pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:18:39.586191 kubelet[2735]: E0620 19:18:39.584444 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.586191 kubelet[2735]: W0620 19:18:39.584457 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.584525 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.584736 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.586543 kubelet[2735]: W0620 19:18:39.584746 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.584773 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.585020 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.586543 kubelet[2735]: W0620 19:18:39.585030 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.585040 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.585371 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.586543 kubelet[2735]: W0620 19:18:39.585379 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.586543 kubelet[2735]: E0620 19:18:39.585399 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.586569 systemd[1]: Started cri-containerd-3eeeb9566d686fa9b59423460e57b8cb42c3e6e496c01f4356ec4f209ef302a3.scope - libcontainer container 3eeeb9566d686fa9b59423460e57b8cb42c3e6e496c01f4356ec4f209ef302a3. Jun 20 19:18:39.629233 containerd[1580]: time="2025-06-20T19:18:39.629164556Z" level=info msg="connecting to shim b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63" address="unix:///run/containerd/s/7297845e9420f9a9e9b9eee62b65b76efb181fd02be86fd9b2ecd266e60632cf" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:18:39.642596 containerd[1580]: time="2025-06-20T19:18:39.642534759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ddfcd6794-xgckf,Uid:9c4278ac-5d29-4917-b9cd-88aef0414acb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3eeeb9566d686fa9b59423460e57b8cb42c3e6e496c01f4356ec4f209ef302a3\"" Jun 20 19:18:39.643590 kubelet[2735]: E0620 19:18:39.643545 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:39.644964 containerd[1580]: time="2025-06-20T19:18:39.644936350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:18:39.663669 
systemd[1]: Started cri-containerd-b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63.scope - libcontainer container b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63. Jun 20 19:18:39.685830 kubelet[2735]: E0620 19:18:39.685775 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.685830 kubelet[2735]: W0620 19:18:39.685812 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.685830 kubelet[2735]: E0620 19:18:39.685832 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.686216 kubelet[2735]: E0620 19:18:39.686185 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.686216 kubelet[2735]: W0620 19:18:39.686200 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.686216 kubelet[2735]: E0620 19:18:39.686210 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.686756 kubelet[2735]: E0620 19:18:39.686726 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.686756 kubelet[2735]: W0620 19:18:39.686741 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.686756 kubelet[2735]: E0620 19:18:39.686759 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.687052 kubelet[2735]: E0620 19:18:39.687021 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.687052 kubelet[2735]: W0620 19:18:39.687039 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.687130 kubelet[2735]: E0620 19:18:39.687056 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.687346 kubelet[2735]: E0620 19:18:39.687285 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.687346 kubelet[2735]: W0620 19:18:39.687300 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.687346 kubelet[2735]: E0620 19:18:39.687332 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.687527 kubelet[2735]: E0620 19:18:39.687507 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.687527 kubelet[2735]: W0620 19:18:39.687521 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.687607 kubelet[2735]: E0620 19:18:39.687539 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.688135 kubelet[2735]: E0620 19:18:39.688065 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.688135 kubelet[2735]: W0620 19:18:39.688126 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.688370 kubelet[2735]: E0620 19:18:39.688350 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.688370 kubelet[2735]: E0620 19:18:39.688367 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.688472 kubelet[2735]: W0620 19:18:39.688378 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.688472 kubelet[2735]: E0620 19:18:39.688400 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.688613 kubelet[2735]: E0620 19:18:39.688589 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.688613 kubelet[2735]: W0620 19:18:39.688604 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.688677 kubelet[2735]: E0620 19:18:39.688632 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.688835 kubelet[2735]: E0620 19:18:39.688803 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.688835 kubelet[2735]: W0620 19:18:39.688819 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.689002 kubelet[2735]: E0620 19:18:39.688929 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.689032 kubelet[2735]: E0620 19:18:39.689022 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.689060 kubelet[2735]: W0620 19:18:39.689033 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.689060 kubelet[2735]: E0620 19:18:39.689046 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.689286 kubelet[2735]: E0620 19:18:39.689259 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.689286 kubelet[2735]: W0620 19:18:39.689278 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.689513 kubelet[2735]: E0620 19:18:39.689295 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.689749 kubelet[2735]: E0620 19:18:39.689723 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.689749 kubelet[2735]: W0620 19:18:39.689739 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.689816 kubelet[2735]: E0620 19:18:39.689763 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.690603 kubelet[2735]: E0620 19:18:39.690573 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.690674 kubelet[2735]: W0620 19:18:39.690649 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.690705 kubelet[2735]: E0620 19:18:39.690678 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.691195 kubelet[2735]: E0620 19:18:39.691153 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.691243 kubelet[2735]: W0620 19:18:39.691194 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.691269 kubelet[2735]: E0620 19:18:39.691241 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.691519 kubelet[2735]: E0620 19:18:39.691498 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.691519 kubelet[2735]: W0620 19:18:39.691517 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.691633 kubelet[2735]: E0620 19:18:39.691608 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.691843 kubelet[2735]: E0620 19:18:39.691808 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.691843 kubelet[2735]: W0620 19:18:39.691827 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.692071 kubelet[2735]: E0620 19:18:39.692039 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.692160 kubelet[2735]: E0620 19:18:39.692139 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.692160 kubelet[2735]: W0620 19:18:39.692159 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.692249 kubelet[2735]: E0620 19:18:39.692203 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.692963 kubelet[2735]: E0620 19:18:39.692936 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.692963 kubelet[2735]: W0620 19:18:39.692955 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.693038 kubelet[2735]: E0620 19:18:39.693001 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.693390 kubelet[2735]: E0620 19:18:39.693367 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.693390 kubelet[2735]: W0620 19:18:39.693384 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.693480 kubelet[2735]: E0620 19:18:39.693466 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.693634 kubelet[2735]: E0620 19:18:39.693611 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.693634 kubelet[2735]: W0620 19:18:39.693625 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.693713 kubelet[2735]: E0620 19:18:39.693666 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.693870 kubelet[2735]: E0620 19:18:39.693850 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.693870 kubelet[2735]: W0620 19:18:39.693863 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.693965 kubelet[2735]: E0620 19:18:39.693880 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.694279 kubelet[2735]: E0620 19:18:39.694248 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.694279 kubelet[2735]: W0620 19:18:39.694269 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.694512 kubelet[2735]: E0620 19:18:39.694300 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.694591 kubelet[2735]: E0620 19:18:39.694567 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.694591 kubelet[2735]: W0620 19:18:39.694586 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.694736 kubelet[2735]: E0620 19:18:39.694598 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.704542 kubelet[2735]: E0620 19:18:39.704507 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.704542 kubelet[2735]: W0620 19:18:39.704526 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.704542 kubelet[2735]: E0620 19:18:39.704543 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:39.704960 kubelet[2735]: E0620 19:18:39.704925 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:39.704960 kubelet[2735]: W0620 19:18:39.704953 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:39.705017 kubelet[2735]: E0620 19:18:39.704979 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:39.723819 containerd[1580]: time="2025-06-20T19:18:39.723760007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57qld,Uid:f5697ac3-7228-4536-8ff2-b01eb02a15ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\"" Jun 20 19:18:41.068634 kubelet[2735]: E0620 19:18:41.068566 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:43.068668 kubelet[2735]: E0620 19:18:43.068611 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:43.383526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393017983.mount: Deactivated successfully. 
Jun 20 19:18:45.068112 kubelet[2735]: E0620 19:18:45.068029 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:45.993793 containerd[1580]: time="2025-06-20T19:18:45.993706763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:45.995343 containerd[1580]: time="2025-06-20T19:18:45.995287541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:18:45.996947 containerd[1580]: time="2025-06-20T19:18:45.996906139Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:46.000347 containerd[1580]: time="2025-06-20T19:18:45.999476796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:46.000460 containerd[1580]: time="2025-06-20T19:18:46.000439234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 6.355464351s" Jun 20 19:18:46.000508 containerd[1580]: time="2025-06-20T19:18:46.000470983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:18:46.005299 containerd[1580]: time="2025-06-20T19:18:46.005209820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:18:46.013759 containerd[1580]: time="2025-06-20T19:18:46.013678078Z" level=info msg="CreateContainer within sandbox \"3eeeb9566d686fa9b59423460e57b8cb42c3e6e496c01f4356ec4f209ef302a3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:18:46.030343 containerd[1580]: time="2025-06-20T19:18:46.029695669Z" level=info msg="Container 6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:46.042274 containerd[1580]: time="2025-06-20T19:18:46.042207729Z" level=info msg="CreateContainer within sandbox \"3eeeb9566d686fa9b59423460e57b8cb42c3e6e496c01f4356ec4f209ef302a3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86\"" Jun 20 19:18:46.043021 containerd[1580]: time="2025-06-20T19:18:46.042895190Z" level=info msg="StartContainer for \"6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86\"" Jun 20 19:18:46.044360 containerd[1580]: time="2025-06-20T19:18:46.044300067Z" level=info msg="connecting to shim 6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86" address="unix:///run/containerd/s/b6bc143686205376ca613f5913b827f36f621264c4da76bce2ca8e794ef6c50f" protocol=ttrpc version=3 Jun 20 19:18:46.074518 systemd[1]: Started cri-containerd-6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86.scope - libcontainer container 6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86. 
Jun 20 19:18:46.235898 containerd[1580]: time="2025-06-20T19:18:46.235842946Z" level=info msg="StartContainer for \"6dd4342a789bce63c21d54a065e7fd39f58d9e18786f533e4a0fbf5cb7fd5d86\" returns successfully" Jun 20 19:18:47.068429 kubelet[2735]: E0620 19:18:47.068239 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:47.240810 kubelet[2735]: E0620 19:18:47.240759 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:47.246194 kubelet[2735]: E0620 19:18:47.246142 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.246194 kubelet[2735]: W0620 19:18:47.246173 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.246194 kubelet[2735]: E0620 19:18:47.246201 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.246614 kubelet[2735]: E0620 19:18:47.246597 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.246614 kubelet[2735]: W0620 19:18:47.246610 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.246683 kubelet[2735]: E0620 19:18:47.246620 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.246845 kubelet[2735]: E0620 19:18:47.246830 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.246845 kubelet[2735]: W0620 19:18:47.246842 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.246910 kubelet[2735]: E0620 19:18:47.246851 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.247109 kubelet[2735]: E0620 19:18:47.247074 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.247109 kubelet[2735]: W0620 19:18:47.247088 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.247109 kubelet[2735]: E0620 19:18:47.247097 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.247414 kubelet[2735]: E0620 19:18:47.247289 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.247414 kubelet[2735]: W0620 19:18:47.247298 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.247414 kubelet[2735]: E0620 19:18:47.247323 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.247529 kubelet[2735]: E0620 19:18:47.247513 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.247529 kubelet[2735]: W0620 19:18:47.247522 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.247529 kubelet[2735]: E0620 19:18:47.247531 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.247749 kubelet[2735]: E0620 19:18:47.247716 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.247749 kubelet[2735]: W0620 19:18:47.247728 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.247749 kubelet[2735]: E0620 19:18:47.247738 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.247947 kubelet[2735]: E0620 19:18:47.247926 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.247947 kubelet[2735]: W0620 19:18:47.247938 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.247947 kubelet[2735]: E0620 19:18:47.247947 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.248168 kubelet[2735]: E0620 19:18:47.248149 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.248168 kubelet[2735]: W0620 19:18:47.248162 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.248168 kubelet[2735]: E0620 19:18:47.248171 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.248527 kubelet[2735]: E0620 19:18:47.248493 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.248572 kubelet[2735]: W0620 19:18:47.248529 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.248572 kubelet[2735]: E0620 19:18:47.248564 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.248952 kubelet[2735]: E0620 19:18:47.248930 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.248991 kubelet[2735]: W0620 19:18:47.248950 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.248991 kubelet[2735]: E0620 19:18:47.248965 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.249212 kubelet[2735]: E0620 19:18:47.249193 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.249212 kubelet[2735]: W0620 19:18:47.249210 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.249272 kubelet[2735]: E0620 19:18:47.249223 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.249513 kubelet[2735]: E0620 19:18:47.249494 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.249513 kubelet[2735]: W0620 19:18:47.249511 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.249584 kubelet[2735]: E0620 19:18:47.249524 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.249810 kubelet[2735]: E0620 19:18:47.249779 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.249810 kubelet[2735]: W0620 19:18:47.249799 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.249880 kubelet[2735]: E0620 19:18:47.249814 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.250085 kubelet[2735]: E0620 19:18:47.250065 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.250085 kubelet[2735]: W0620 19:18:47.250081 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.250142 kubelet[2735]: E0620 19:18:47.250095 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.340391 kubelet[2735]: E0620 19:18:47.340047 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.340391 kubelet[2735]: W0620 19:18:47.340126 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.340391 kubelet[2735]: E0620 19:18:47.340151 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.340878 kubelet[2735]: E0620 19:18:47.340741 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.340878 kubelet[2735]: W0620 19:18:47.340755 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.340878 kubelet[2735]: E0620 19:18:47.340774 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.341270 kubelet[2735]: E0620 19:18:47.341117 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.341270 kubelet[2735]: W0620 19:18:47.341133 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.341270 kubelet[2735]: E0620 19:18:47.341154 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.341553 kubelet[2735]: E0620 19:18:47.341523 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.341553 kubelet[2735]: W0620 19:18:47.341540 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.341553 kubelet[2735]: E0620 19:18:47.341560 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.341890 kubelet[2735]: E0620 19:18:47.341761 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.341890 kubelet[2735]: W0620 19:18:47.341770 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.342123 kubelet[2735]: E0620 19:18:47.341952 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.342123 kubelet[2735]: W0620 19:18:47.341967 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.342123 kubelet[2735]: E0620 19:18:47.342009 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.342123 kubelet[2735]: E0620 19:18:47.342090 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.342430 kubelet[2735]: E0620 19:18:47.342143 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.342430 kubelet[2735]: W0620 19:18:47.342153 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.342430 kubelet[2735]: E0620 19:18:47.342164 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.342430 kubelet[2735]: E0620 19:18:47.342366 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.342430 kubelet[2735]: W0620 19:18:47.342375 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.342430 kubelet[2735]: E0620 19:18:47.342406 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.342672 kubelet[2735]: E0620 19:18:47.342631 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.342672 kubelet[2735]: W0620 19:18:47.342645 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.342672 kubelet[2735]: E0620 19:18:47.342660 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.343203 kubelet[2735]: E0620 19:18:47.343005 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.343203 kubelet[2735]: W0620 19:18:47.343023 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.343203 kubelet[2735]: E0620 19:18:47.343046 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.344238 kubelet[2735]: E0620 19:18:47.344210 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.344238 kubelet[2735]: W0620 19:18:47.344228 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.344335 kubelet[2735]: E0620 19:18:47.344245 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.344565 kubelet[2735]: E0620 19:18:47.344539 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.344565 kubelet[2735]: W0620 19:18:47.344553 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.344630 kubelet[2735]: E0620 19:18:47.344590 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.344806 kubelet[2735]: E0620 19:18:47.344783 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.344806 kubelet[2735]: W0620 19:18:47.344796 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.344858 kubelet[2735]: E0620 19:18:47.344821 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.344991 kubelet[2735]: E0620 19:18:47.344969 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.344991 kubelet[2735]: W0620 19:18:47.344981 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.345085 kubelet[2735]: E0620 19:18:47.345059 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.345177 kubelet[2735]: E0620 19:18:47.345161 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.345177 kubelet[2735]: W0620 19:18:47.345173 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.345231 kubelet[2735]: E0620 19:18:47.345190 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.345641 kubelet[2735]: E0620 19:18:47.345616 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.345692 kubelet[2735]: W0620 19:18:47.345648 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.345692 kubelet[2735]: E0620 19:18:47.345674 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:47.345918 kubelet[2735]: E0620 19:18:47.345902 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.345918 kubelet[2735]: W0620 19:18:47.345914 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.345973 kubelet[2735]: E0620 19:18:47.345924 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:47.346234 kubelet[2735]: E0620 19:18:47.346205 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:47.346234 kubelet[2735]: W0620 19:18:47.346223 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:47.346297 kubelet[2735]: E0620 19:18:47.346239 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.241479 kubelet[2735]: I0620 19:18:48.241442 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:18:48.241923 kubelet[2735]: E0620 19:18:48.241754 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:48.259112 kubelet[2735]: E0620 19:18:48.259077 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.259112 kubelet[2735]: W0620 19:18:48.259101 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.259112 kubelet[2735]: E0620 19:18:48.259123 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.259425 kubelet[2735]: E0620 19:18:48.259401 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.259476 kubelet[2735]: W0620 19:18:48.259428 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.259476 kubelet[2735]: E0620 19:18:48.259441 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.259725 kubelet[2735]: E0620 19:18:48.259692 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.259725 kubelet[2735]: W0620 19:18:48.259707 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.259725 kubelet[2735]: E0620 19:18:48.259721 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.259969 kubelet[2735]: E0620 19:18:48.259947 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.259969 kubelet[2735]: W0620 19:18:48.259963 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.260085 kubelet[2735]: E0620 19:18:48.259975 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.260190 kubelet[2735]: E0620 19:18:48.260167 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.260190 kubelet[2735]: W0620 19:18:48.260180 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.260264 kubelet[2735]: E0620 19:18:48.260191 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.260397 kubelet[2735]: E0620 19:18:48.260373 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.260397 kubelet[2735]: W0620 19:18:48.260386 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.260397 kubelet[2735]: E0620 19:18:48.260396 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.260592 kubelet[2735]: E0620 19:18:48.260570 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.260592 kubelet[2735]: W0620 19:18:48.260581 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.260592 kubelet[2735]: E0620 19:18:48.260588 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.260783 kubelet[2735]: E0620 19:18:48.260767 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.260783 kubelet[2735]: W0620 19:18:48.260778 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.260858 kubelet[2735]: E0620 19:18:48.260786 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.260980 kubelet[2735]: E0620 19:18:48.260964 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.260980 kubelet[2735]: W0620 19:18:48.260975 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.261053 kubelet[2735]: E0620 19:18:48.260985 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.261164 kubelet[2735]: E0620 19:18:48.261148 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.261164 kubelet[2735]: W0620 19:18:48.261159 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.261223 kubelet[2735]: E0620 19:18:48.261169 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.261374 kubelet[2735]: E0620 19:18:48.261359 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.261374 kubelet[2735]: W0620 19:18:48.261371 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.261454 kubelet[2735]: E0620 19:18:48.261382 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.261564 kubelet[2735]: E0620 19:18:48.261548 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.261564 kubelet[2735]: W0620 19:18:48.261559 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.261624 kubelet[2735]: E0620 19:18:48.261568 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.261779 kubelet[2735]: E0620 19:18:48.261754 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.261779 kubelet[2735]: W0620 19:18:48.261767 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.261779 kubelet[2735]: E0620 19:18:48.261777 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.261959 kubelet[2735]: E0620 19:18:48.261942 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.261959 kubelet[2735]: W0620 19:18:48.261953 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.262025 kubelet[2735]: E0620 19:18:48.261962 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.262141 kubelet[2735]: E0620 19:18:48.262125 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.262141 kubelet[2735]: W0620 19:18:48.262136 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.262210 kubelet[2735]: E0620 19:18:48.262145 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.349397 kubelet[2735]: E0620 19:18:48.349357 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.349397 kubelet[2735]: W0620 19:18:48.349379 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.349397 kubelet[2735]: E0620 19:18:48.349399 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.349647 kubelet[2735]: E0620 19:18:48.349629 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.349647 kubelet[2735]: W0620 19:18:48.349644 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.349740 kubelet[2735]: E0620 19:18:48.349659 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.349879 kubelet[2735]: E0620 19:18:48.349860 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.349879 kubelet[2735]: W0620 19:18:48.349872 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.349954 kubelet[2735]: E0620 19:18:48.349886 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.350097 kubelet[2735]: E0620 19:18:48.350080 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.350097 kubelet[2735]: W0620 19:18:48.350094 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.350201 kubelet[2735]: E0620 19:18:48.350113 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.350503 kubelet[2735]: E0620 19:18:48.350448 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.350503 kubelet[2735]: W0620 19:18:48.350488 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.350588 kubelet[2735]: E0620 19:18:48.350520 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.350767 kubelet[2735]: E0620 19:18:48.350744 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.350767 kubelet[2735]: W0620 19:18:48.350756 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.350843 kubelet[2735]: E0620 19:18:48.350771 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.351022 kubelet[2735]: E0620 19:18:48.351001 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.351022 kubelet[2735]: W0620 19:18:48.351014 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.351111 kubelet[2735]: E0620 19:18:48.351050 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.351222 kubelet[2735]: E0620 19:18:48.351204 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.351222 kubelet[2735]: W0620 19:18:48.351215 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.351411 kubelet[2735]: E0620 19:18:48.351240 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.351411 kubelet[2735]: E0620 19:18:48.351410 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.351475 kubelet[2735]: W0620 19:18:48.351418 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.351475 kubelet[2735]: E0620 19:18:48.351446 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.351631 kubelet[2735]: E0620 19:18:48.351613 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.351631 kubelet[2735]: W0620 19:18:48.351623 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.351721 kubelet[2735]: E0620 19:18:48.351637 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.351854 kubelet[2735]: E0620 19:18:48.351833 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.351854 kubelet[2735]: W0620 19:18:48.351848 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.351962 kubelet[2735]: E0620 19:18:48.351866 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.352070 kubelet[2735]: E0620 19:18:48.352048 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.352070 kubelet[2735]: W0620 19:18:48.352060 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.352175 kubelet[2735]: E0620 19:18:48.352078 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.352320 kubelet[2735]: E0620 19:18:48.352288 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.352320 kubelet[2735]: W0620 19:18:48.352300 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.352407 kubelet[2735]: E0620 19:18:48.352337 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.352587 kubelet[2735]: E0620 19:18:48.352567 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.352587 kubelet[2735]: W0620 19:18:48.352579 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.352682 kubelet[2735]: E0620 19:18:48.352594 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.352820 kubelet[2735]: E0620 19:18:48.352797 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.352820 kubelet[2735]: W0620 19:18:48.352815 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.352920 kubelet[2735]: E0620 19:18:48.352830 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.353062 kubelet[2735]: E0620 19:18:48.353047 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.353062 kubelet[2735]: W0620 19:18:48.353058 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.353131 kubelet[2735]: E0620 19:18:48.353085 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.353407 kubelet[2735]: E0620 19:18:48.353389 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.353407 kubelet[2735]: W0620 19:18:48.353405 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.353486 kubelet[2735]: E0620 19:18:48.353424 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:18:48.353668 kubelet[2735]: E0620 19:18:48.353655 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:18:48.353701 kubelet[2735]: W0620 19:18:48.353667 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:18:48.353701 kubelet[2735]: E0620 19:18:48.353688 2735 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:18:48.705462 containerd[1580]: time="2025-06-20T19:18:48.705388468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:48.718079 containerd[1580]: time="2025-06-20T19:18:48.718027645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:18:48.741144 containerd[1580]: time="2025-06-20T19:18:48.741097040Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:48.754187 containerd[1580]: time="2025-06-20T19:18:48.754152417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:48.754949 containerd[1580]: time="2025-06-20T19:18:48.754903978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 2.749611383s" Jun 20 19:18:48.755018 containerd[1580]: time="2025-06-20T19:18:48.754944564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:18:48.757162 containerd[1580]: time="2025-06-20T19:18:48.757122202Z" level=info msg="CreateContainer within sandbox \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:18:48.858956 containerd[1580]: time="2025-06-20T19:18:48.858891031Z" level=info msg="Container 39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:48.891808 containerd[1580]: time="2025-06-20T19:18:48.891744562Z" level=info msg="CreateContainer within sandbox \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\"" Jun 20 19:18:48.892336 containerd[1580]: time="2025-06-20T19:18:48.892280207Z" level=info msg="StartContainer for \"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\"" Jun 20 19:18:48.894155 containerd[1580]: time="2025-06-20T19:18:48.894109111Z" level=info msg="connecting to shim 39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04" address="unix:///run/containerd/s/7297845e9420f9a9e9b9eee62b65b76efb181fd02be86fd9b2ecd266e60632cf" protocol=ttrpc version=3 Jun 20 19:18:48.922489 systemd[1]: Started cri-containerd-39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04.scope - libcontainer container 
39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04. Jun 20 19:18:48.976807 systemd[1]: cri-containerd-39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04.scope: Deactivated successfully. Jun 20 19:18:48.978943 containerd[1580]: time="2025-06-20T19:18:48.978901502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\" id:\"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\" pid:3440 exited_at:{seconds:1750447128 nanos:978304902}" Jun 20 19:18:49.068709 kubelet[2735]: E0620 19:18:49.068631 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:49.172792 containerd[1580]: time="2025-06-20T19:18:49.172713194Z" level=info msg="received exit event container_id:\"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\" id:\"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\" pid:3440 exited_at:{seconds:1750447128 nanos:978304902}" Jun 20 19:18:49.175245 containerd[1580]: time="2025-06-20T19:18:49.175207247Z" level=info msg="StartContainer for \"39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04\" returns successfully" Jun 20 19:18:49.203131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39f7f8eeced8164c88495a37e2e827d8a37fd2c606eab56eee5a9f986af55a04-rootfs.mount: Deactivated successfully. 
Jun 20 19:18:49.381150 kubelet[2735]: I0620 19:18:49.381056 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ddfcd6794-xgckf" podStartSLOduration=4.0238065689999996 podStartE2EDuration="10.381035825s" podCreationTimestamp="2025-06-20 19:18:39 +0000 UTC" firstStartedPulling="2025-06-20 19:18:39.644288723 +0000 UTC m=+21.674570753" lastFinishedPulling="2025-06-20 19:18:46.001517979 +0000 UTC m=+28.031800009" observedRunningTime="2025-06-20 19:18:47.265561222 +0000 UTC m=+29.295843252" watchObservedRunningTime="2025-06-20 19:18:49.381035825 +0000 UTC m=+31.411317855" Jun 20 19:18:50.254813 containerd[1580]: time="2025-06-20T19:18:50.254290268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:18:51.068519 kubelet[2735]: E0620 19:18:51.068426 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:51.553047 kubelet[2735]: I0620 19:18:51.552976 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:18:51.553521 kubelet[2735]: E0620 19:18:51.553487 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:52.257218 kubelet[2735]: E0620 19:18:52.257146 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:18:53.068876 kubelet[2735]: E0620 19:18:53.068473 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:55.070155 kubelet[2735]: E0620 19:18:55.070055 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:55.344183 containerd[1580]: time="2025-06-20T19:18:55.344025152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:55.412172 containerd[1580]: time="2025-06-20T19:18:55.412100777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:18:55.441854 containerd[1580]: time="2025-06-20T19:18:55.441772639Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:55.476151 containerd[1580]: time="2025-06-20T19:18:55.476091634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:18:55.476933 containerd[1580]: time="2025-06-20T19:18:55.476897326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 5.222523382s" Jun 20 19:18:55.476988 containerd[1580]: time="2025-06-20T19:18:55.476935718Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:18:55.478999 containerd[1580]: time="2025-06-20T19:18:55.478967682Z" level=info msg="CreateContainer within sandbox \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:18:55.740089 containerd[1580]: time="2025-06-20T19:18:55.739937139Z" level=info msg="Container 378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:18:55.810514 containerd[1580]: time="2025-06-20T19:18:55.810431973Z" level=info msg="CreateContainer within sandbox \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\"" Jun 20 19:18:55.811224 containerd[1580]: time="2025-06-20T19:18:55.810869634Z" level=info msg="StartContainer for \"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\"" Jun 20 19:18:55.812753 containerd[1580]: time="2025-06-20T19:18:55.812710189Z" level=info msg="connecting to shim 378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88" address="unix:///run/containerd/s/7297845e9420f9a9e9b9eee62b65b76efb181fd02be86fd9b2ecd266e60632cf" protocol=ttrpc version=3 Jun 20 19:18:55.841651 systemd[1]: Started cri-containerd-378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88.scope - libcontainer container 378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88. 
Jun 20 19:18:55.907739 containerd[1580]: time="2025-06-20T19:18:55.907683651Z" level=info msg="StartContainer for \"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\" returns successfully" Jun 20 19:18:57.068628 kubelet[2735]: E0620 19:18:57.068568 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:59.068671 kubelet[2735]: E0620 19:18:59.068577 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:18:59.292816 containerd[1580]: time="2025-06-20T19:18:59.292743335Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:18:59.295335 systemd[1]: cri-containerd-378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88.scope: Deactivated successfully. Jun 20 19:18:59.295689 systemd[1]: cri-containerd-378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88.scope: Consumed 588ms CPU time, 181.9M memory peak, 4.5M read from disk, 171.2M written to disk. 
Jun 20 19:18:59.296520 containerd[1580]: time="2025-06-20T19:18:59.296461407Z" level=info msg="received exit event container_id:\"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\" id:\"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\" pid:3504 exited_at:{seconds:1750447139 nanos:296060141}" Jun 20 19:18:59.296621 containerd[1580]: time="2025-06-20T19:18:59.296471217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\" id:\"378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88\" pid:3504 exited_at:{seconds:1750447139 nanos:296060141}" Jun 20 19:18:59.307554 kubelet[2735]: I0620 19:18:59.307507 2735 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 19:18:59.322373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-378049b7b042c1494fee07c973460683ae7df4fb08d9506e8e6882ebeca90d88-rootfs.mount: Deactivated successfully. Jun 20 19:18:59.844184 systemd[1]: Created slice kubepods-besteffort-pod63e4e75a_a49f_4727_a6dc_2d7c2d187722.slice - libcontainer container kubepods-besteffort-pod63e4e75a_a49f_4727_a6dc_2d7c2d187722.slice. 
Jun 20 19:18:59.927621 kubelet[2735]: I0620 19:18:59.927539 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c8gt\" (UniqueName: \"kubernetes.io/projected/63e4e75a-a49f-4727-a6dc-2d7c2d187722-kube-api-access-2c8gt\") pod \"calico-kube-controllers-58866ffd4c-nxr8s\" (UID: \"63e4e75a-a49f-4727-a6dc-2d7c2d187722\") " pod="calico-system/calico-kube-controllers-58866ffd4c-nxr8s" Jun 20 19:18:59.927621 kubelet[2735]: I0620 19:18:59.927602 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63e4e75a-a49f-4727-a6dc-2d7c2d187722-tigera-ca-bundle\") pod \"calico-kube-controllers-58866ffd4c-nxr8s\" (UID: \"63e4e75a-a49f-4727-a6dc-2d7c2d187722\") " pod="calico-system/calico-kube-controllers-58866ffd4c-nxr8s" Jun 20 19:19:00.119889 systemd[1]: Created slice kubepods-burstable-podc7d7f4ed_f18b_44e9_aa65_bb4db200fe3c.slice - libcontainer container kubepods-burstable-podc7d7f4ed_f18b_44e9_aa65_bb4db200fe3c.slice. Jun 20 19:19:00.127155 systemd[1]: Created slice kubepods-besteffort-podea3da14f_e857_457d_b1b7_a4caf7621c08.slice - libcontainer container kubepods-besteffort-podea3da14f_e857_457d_b1b7_a4caf7621c08.slice. 
Jun 20 19:19:00.129350 kubelet[2735]: I0620 19:19:00.129167 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qs6b\" (UniqueName: \"kubernetes.io/projected/c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c-kube-api-access-4qs6b\") pod \"coredns-7c65d6cfc9-hdmdr\" (UID: \"c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c\") " pod="kube-system/coredns-7c65d6cfc9-hdmdr" Jun 20 19:19:00.129350 kubelet[2735]: I0620 19:19:00.129291 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c-config-volume\") pod \"coredns-7c65d6cfc9-hdmdr\" (UID: \"c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c\") " pod="kube-system/coredns-7c65d6cfc9-hdmdr" Jun 20 19:19:00.133876 systemd[1]: Created slice kubepods-besteffort-pod81b12d88_c6b2_47cb_a67c_8cbc122dfaf9.slice - libcontainer container kubepods-besteffort-pod81b12d88_c6b2_47cb_a67c_8cbc122dfaf9.slice. Jun 20 19:19:00.138489 systemd[1]: Created slice kubepods-besteffort-pod031ba079_1aa1_4e85_90ea_f180e62009e8.slice - libcontainer container kubepods-besteffort-pod031ba079_1aa1_4e85_90ea_f180e62009e8.slice. Jun 20 19:19:00.151547 systemd[1]: Created slice kubepods-besteffort-podd120af62_419d_4085_83c3_a999c759d842.slice - libcontainer container kubepods-besteffort-podd120af62_419d_4085_83c3_a999c759d842.slice. Jun 20 19:19:00.158192 systemd[1]: Created slice kubepods-besteffort-podb96f406f_076f_4859_a5f7_9af8e0765f82.slice - libcontainer container kubepods-besteffort-podb96f406f_076f_4859_a5f7_9af8e0765f82.slice. Jun 20 19:19:00.165188 systemd[1]: Created slice kubepods-burstable-poda5f547d6_4a58_43de_8d2b_04d7e42e1086.slice - libcontainer container kubepods-burstable-poda5f547d6_4a58_43de_8d2b_04d7e42e1086.slice. 
Jun 20 19:19:00.229863 kubelet[2735]: I0620 19:19:00.229798 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea3da14f-e857-457d-b1b7-a4caf7621c08-calico-apiserver-certs\") pod \"calico-apiserver-699c44cbf4-xj2bn\" (UID: \"ea3da14f-e857-457d-b1b7-a4caf7621c08\") " pod="calico-apiserver/calico-apiserver-699c44cbf4-xj2bn" Jun 20 19:19:00.229863 kubelet[2735]: I0620 19:19:00.229839 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-ca-bundle\") pod \"whisker-54d8f86898-ftrcc\" (UID: \"b96f406f-076f-4859-a5f7-9af8e0765f82\") " pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:00.229863 kubelet[2735]: I0620 19:19:00.229859 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsqjt\" (UniqueName: \"kubernetes.io/projected/a5f547d6-4a58-43de-8d2b-04d7e42e1086-kube-api-access-lsqjt\") pod \"coredns-7c65d6cfc9-dmrmr\" (UID: \"a5f547d6-4a58-43de-8d2b-04d7e42e1086\") " pod="kube-system/coredns-7c65d6cfc9-dmrmr" Jun 20 19:19:00.230125 kubelet[2735]: I0620 19:19:00.229883 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/031ba079-1aa1-4e85-90ea-f180e62009e8-calico-apiserver-certs\") pod \"calico-apiserver-699c44cbf4-2kwq5\" (UID: \"031ba079-1aa1-4e85-90ea-f180e62009e8\") " pod="calico-apiserver/calico-apiserver-699c44cbf4-2kwq5" Jun 20 19:19:00.230125 kubelet[2735]: I0620 19:19:00.229905 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtdr\" (UniqueName: \"kubernetes.io/projected/d120af62-419d-4085-83c3-a999c759d842-kube-api-access-tgtdr\") pod 
\"goldmane-dc7b455cb-p79lm\" (UID: \"d120af62-419d-4085-83c3-a999c759d842\") " pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:00.230125 kubelet[2735]: I0620 19:19:00.229927 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d120af62-419d-4085-83c3-a999c759d842-goldmane-ca-bundle\") pod \"goldmane-dc7b455cb-p79lm\" (UID: \"d120af62-419d-4085-83c3-a999c759d842\") " pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:00.230125 kubelet[2735]: I0620 19:19:00.229946 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d120af62-419d-4085-83c3-a999c759d842-goldmane-key-pair\") pod \"goldmane-dc7b455cb-p79lm\" (UID: \"d120af62-419d-4085-83c3-a999c759d842\") " pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:00.230125 kubelet[2735]: I0620 19:19:00.230034 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5f547d6-4a58-43de-8d2b-04d7e42e1086-config-volume\") pod \"coredns-7c65d6cfc9-dmrmr\" (UID: \"a5f547d6-4a58-43de-8d2b-04d7e42e1086\") " pod="kube-system/coredns-7c65d6cfc9-dmrmr" Jun 20 19:19:00.230284 kubelet[2735]: I0620 19:19:00.230070 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d120af62-419d-4085-83c3-a999c759d842-config\") pod \"goldmane-dc7b455cb-p79lm\" (UID: \"d120af62-419d-4085-83c3-a999c759d842\") " pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:00.230284 kubelet[2735]: I0620 19:19:00.230110 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2hm\" (UniqueName: 
\"kubernetes.io/projected/b96f406f-076f-4859-a5f7-9af8e0765f82-kube-api-access-sj2hm\") pod \"whisker-54d8f86898-ftrcc\" (UID: \"b96f406f-076f-4859-a5f7-9af8e0765f82\") " pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:00.230284 kubelet[2735]: I0620 19:19:00.230139 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/81b12d88-c6b2-47cb-a67c-8cbc122dfaf9-calico-apiserver-certs\") pod \"calico-apiserver-847d7f87d-5sjqb\" (UID: \"81b12d88-c6b2-47cb-a67c-8cbc122dfaf9\") " pod="calico-apiserver/calico-apiserver-847d7f87d-5sjqb" Jun 20 19:19:00.230284 kubelet[2735]: I0620 19:19:00.230172 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c8zv\" (UniqueName: \"kubernetes.io/projected/81b12d88-c6b2-47cb-a67c-8cbc122dfaf9-kube-api-access-5c8zv\") pod \"calico-apiserver-847d7f87d-5sjqb\" (UID: \"81b12d88-c6b2-47cb-a67c-8cbc122dfaf9\") " pod="calico-apiserver/calico-apiserver-847d7f87d-5sjqb" Jun 20 19:19:00.230284 kubelet[2735]: I0620 19:19:00.230192 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-backend-key-pair\") pod \"whisker-54d8f86898-ftrcc\" (UID: \"b96f406f-076f-4859-a5f7-9af8e0765f82\") " pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:00.230477 kubelet[2735]: I0620 19:19:00.230256 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzng\" (UniqueName: \"kubernetes.io/projected/031ba079-1aa1-4e85-90ea-f180e62009e8-kube-api-access-dlzng\") pod \"calico-apiserver-699c44cbf4-2kwq5\" (UID: \"031ba079-1aa1-4e85-90ea-f180e62009e8\") " pod="calico-apiserver/calico-apiserver-699c44cbf4-2kwq5" Jun 20 19:19:00.230477 kubelet[2735]: I0620 19:19:00.230283 
2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrz9\" (UniqueName: \"kubernetes.io/projected/ea3da14f-e857-457d-b1b7-a4caf7621c08-kube-api-access-mfrz9\") pod \"calico-apiserver-699c44cbf4-xj2bn\" (UID: \"ea3da14f-e857-457d-b1b7-a4caf7621c08\") " pod="calico-apiserver/calico-apiserver-699c44cbf4-xj2bn" Jun 20 19:19:00.423537 kubelet[2735]: E0620 19:19:00.423340 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:00.424038 containerd[1580]: time="2025-06-20T19:19:00.423935655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdmdr,Uid:c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c,Namespace:kube-system,Attempt:0,}" Jun 20 19:19:00.447187 containerd[1580]: time="2025-06-20T19:19:00.447117875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58866ffd4c-nxr8s,Uid:63e4e75a-a49f-4727-a6dc-2d7c2d187722,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:00.581708 containerd[1580]: time="2025-06-20T19:19:00.581616846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:19:00.657029 containerd[1580]: time="2025-06-20T19:19:00.656964517Z" level=error msg="Failed to destroy network for sandbox \"116721829247275e6de5405d5f958fee5c6bea66b1a1413887e03540704199db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:00.659011 containerd[1580]: time="2025-06-20T19:19:00.658966552Z" level=error msg="Failed to destroy network for sandbox \"250c2d419deafb6a98ac5dfae07cd88b9a771514c0a00dc73eea53cd203f3a14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 20 19:19:00.712690 containerd[1580]: time="2025-06-20T19:19:00.712261257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58866ffd4c-nxr8s,Uid:63e4e75a-a49f-4727-a6dc-2d7c2d187722,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"116721829247275e6de5405d5f958fee5c6bea66b1a1413887e03540704199db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:00.712690 containerd[1580]: time="2025-06-20T19:19:00.712286406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdmdr,Uid:c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"250c2d419deafb6a98ac5dfae07cd88b9a771514c0a00dc73eea53cd203f3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:00.723995 kubelet[2735]: E0620 19:19:00.723920 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116721829247275e6de5405d5f958fee5c6bea66b1a1413887e03540704199db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:00.724228 kubelet[2735]: E0620 19:19:00.724026 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116721829247275e6de5405d5f958fee5c6bea66b1a1413887e03540704199db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58866ffd4c-nxr8s" Jun 20 19:19:00.724228 kubelet[2735]: E0620 19:19:00.724053 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"116721829247275e6de5405d5f958fee5c6bea66b1a1413887e03540704199db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58866ffd4c-nxr8s" Jun 20 19:19:00.724289 kubelet[2735]: E0620 19:19:00.724129 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58866ffd4c-nxr8s_calico-system(63e4e75a-a49f-4727-a6dc-2d7c2d187722)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58866ffd4c-nxr8s_calico-system(63e4e75a-a49f-4727-a6dc-2d7c2d187722)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"116721829247275e6de5405d5f958fee5c6bea66b1a1413887e03540704199db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58866ffd4c-nxr8s" podUID="63e4e75a-a49f-4727-a6dc-2d7c2d187722" Jun 20 19:19:00.724289 kubelet[2735]: E0620 19:19:00.724261 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250c2d419deafb6a98ac5dfae07cd88b9a771514c0a00dc73eea53cd203f3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:00.724425 
kubelet[2735]: E0620 19:19:00.724329 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250c2d419deafb6a98ac5dfae07cd88b9a771514c0a00dc73eea53cd203f3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hdmdr" Jun 20 19:19:00.724425 kubelet[2735]: E0620 19:19:00.724350 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"250c2d419deafb6a98ac5dfae07cd88b9a771514c0a00dc73eea53cd203f3a14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hdmdr" Jun 20 19:19:00.724425 kubelet[2735]: E0620 19:19:00.724404 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hdmdr_kube-system(c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hdmdr_kube-system(c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"250c2d419deafb6a98ac5dfae07cd88b9a771514c0a00dc73eea53cd203f3a14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hdmdr" podUID="c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c" Jun 20 19:19:00.730347 containerd[1580]: time="2025-06-20T19:19:00.730268852Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-xj2bn,Uid:ea3da14f-e857-457d-b1b7-a4caf7621c08,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:00.737225 containerd[1580]: time="2025-06-20T19:19:00.737186301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d7f87d-5sjqb,Uid:81b12d88-c6b2-47cb-a67c-8cbc122dfaf9,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:00.749053 containerd[1580]: time="2025-06-20T19:19:00.749013022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-2kwq5,Uid:031ba079-1aa1-4e85-90ea-f180e62009e8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:00.755945 containerd[1580]: time="2025-06-20T19:19:00.755890886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-p79lm,Uid:d120af62-419d-4085-83c3-a999c759d842,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:00.762590 containerd[1580]: time="2025-06-20T19:19:00.762563170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d8f86898-ftrcc,Uid:b96f406f-076f-4859-a5f7-9af8e0765f82,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:00.767866 kubelet[2735]: E0620 19:19:00.767806 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:00.768156 containerd[1580]: time="2025-06-20T19:19:00.768126919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmrmr,Uid:a5f547d6-4a58-43de-8d2b-04d7e42e1086,Namespace:kube-system,Attempt:0,}" Jun 20 19:19:00.843029 containerd[1580]: time="2025-06-20T19:19:00.842962007Z" level=error msg="Failed to destroy network for sandbox \"a218ce1b314f3415edc5f3e86aa8d7c45fde7714eea9ac91e41780ae403ceda7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 
20 19:19:01.001587 containerd[1580]: time="2025-06-20T19:19:01.001414872Z" level=error msg="Failed to destroy network for sandbox \"a7227b1f97997c35ae7e36672fc1a24116486b79082f406856a0ada93213e15a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.004602 containerd[1580]: time="2025-06-20T19:19:01.004568821Z" level=error msg="Failed to destroy network for sandbox \"d43d1b28d965cd5db23e7a14fef9321625e3aa387ab4e53662cc073e200b88b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.073928 systemd[1]: Created slice kubepods-besteffort-pod98422ba0_fce0_437e_87ec_c2741bdfac3e.slice - libcontainer container kubepods-besteffort-pod98422ba0_fce0_437e_87ec_c2741bdfac3e.slice. Jun 20 19:19:01.076278 containerd[1580]: time="2025-06-20T19:19:01.076244015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jjgb,Uid:98422ba0-fce0-437e-87ec-c2741bdfac3e,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:01.147792 containerd[1580]: time="2025-06-20T19:19:01.147721525Z" level=error msg="Failed to destroy network for sandbox \"46e94bb1e7f64403b10e38b2f270d4c3ae5cabe2cfe245bb7462e0e262fbe750\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.280049 containerd[1580]: time="2025-06-20T19:19:01.279957028Z" level=error msg="Failed to destroy network for sandbox \"f861e13b9a215f5e2a7914123cc5c7481cf9dbce307aacfecc8e9e68dcd82584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jun 20 19:19:01.339337 containerd[1580]: time="2025-06-20T19:19:01.338824652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-xj2bn,Uid:ea3da14f-e857-457d-b1b7-a4caf7621c08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218ce1b314f3415edc5f3e86aa8d7c45fde7714eea9ac91e41780ae403ceda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.339540 kubelet[2735]: E0620 19:19:01.339078 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218ce1b314f3415edc5f3e86aa8d7c45fde7714eea9ac91e41780ae403ceda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.339540 kubelet[2735]: E0620 19:19:01.339168 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218ce1b314f3415edc5f3e86aa8d7c45fde7714eea9ac91e41780ae403ceda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c44cbf4-xj2bn" Jun 20 19:19:01.339540 kubelet[2735]: E0620 19:19:01.339191 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218ce1b314f3415edc5f3e86aa8d7c45fde7714eea9ac91e41780ae403ceda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c44cbf4-xj2bn" Jun 20 19:19:01.339928 kubelet[2735]: E0620 19:19:01.339242 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699c44cbf4-xj2bn_calico-apiserver(ea3da14f-e857-457d-b1b7-a4caf7621c08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699c44cbf4-xj2bn_calico-apiserver(ea3da14f-e857-457d-b1b7-a4caf7621c08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a218ce1b314f3415edc5f3e86aa8d7c45fde7714eea9ac91e41780ae403ceda7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699c44cbf4-xj2bn" podUID="ea3da14f-e857-457d-b1b7-a4caf7621c08" Jun 20 19:19:01.341789 systemd[1]: run-netns-cni\x2d0c40c479\x2de1e4\x2d9404\x2dd20d\x2d531b844a1596.mount: Deactivated successfully. Jun 20 19:19:01.428041 containerd[1580]: time="2025-06-20T19:19:01.427971004Z" level=error msg="Failed to destroy network for sandbox \"3982accc98e1ca094733042cd19c35bef97c171308aa1d1a9751138fb1d92a35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.430557 systemd[1]: run-netns-cni\x2d44b2c8c3\x2daec8\x2d98d3\x2dfbfd\x2d165bfb04dac8.mount: Deactivated successfully. 
Jun 20 19:19:01.469632 containerd[1580]: time="2025-06-20T19:19:01.469521899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d7f87d-5sjqb,Uid:81b12d88-c6b2-47cb-a67c-8cbc122dfaf9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7227b1f97997c35ae7e36672fc1a24116486b79082f406856a0ada93213e15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.469979 kubelet[2735]: E0620 19:19:01.469878 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7227b1f97997c35ae7e36672fc1a24116486b79082f406856a0ada93213e15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.469979 kubelet[2735]: E0620 19:19:01.469975 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7227b1f97997c35ae7e36672fc1a24116486b79082f406856a0ada93213e15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847d7f87d-5sjqb" Jun 20 19:19:01.469979 kubelet[2735]: E0620 19:19:01.469994 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7227b1f97997c35ae7e36672fc1a24116486b79082f406856a0ada93213e15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-847d7f87d-5sjqb" Jun 20 19:19:01.470286 kubelet[2735]: E0620 19:19:01.470044 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847d7f87d-5sjqb_calico-apiserver(81b12d88-c6b2-47cb-a67c-8cbc122dfaf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847d7f87d-5sjqb_calico-apiserver(81b12d88-c6b2-47cb-a67c-8cbc122dfaf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7227b1f97997c35ae7e36672fc1a24116486b79082f406856a0ada93213e15a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847d7f87d-5sjqb" podUID="81b12d88-c6b2-47cb-a67c-8cbc122dfaf9" Jun 20 19:19:01.496302 containerd[1580]: time="2025-06-20T19:19:01.496212946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-2kwq5,Uid:031ba079-1aa1-4e85-90ea-f180e62009e8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43d1b28d965cd5db23e7a14fef9321625e3aa387ab4e53662cc073e200b88b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.496650 kubelet[2735]: E0620 19:19:01.496598 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43d1b28d965cd5db23e7a14fef9321625e3aa387ab4e53662cc073e200b88b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.496729 kubelet[2735]: E0620 19:19:01.496664 2735 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43d1b28d965cd5db23e7a14fef9321625e3aa387ab4e53662cc073e200b88b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c44cbf4-2kwq5" Jun 20 19:19:01.496729 kubelet[2735]: E0620 19:19:01.496685 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43d1b28d965cd5db23e7a14fef9321625e3aa387ab4e53662cc073e200b88b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699c44cbf4-2kwq5" Jun 20 19:19:01.496792 kubelet[2735]: E0620 19:19:01.496732 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699c44cbf4-2kwq5_calico-apiserver(031ba079-1aa1-4e85-90ea-f180e62009e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699c44cbf4-2kwq5_calico-apiserver(031ba079-1aa1-4e85-90ea-f180e62009e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d43d1b28d965cd5db23e7a14fef9321625e3aa387ab4e53662cc073e200b88b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699c44cbf4-2kwq5" podUID="031ba079-1aa1-4e85-90ea-f180e62009e8" Jun 20 19:19:01.753072 containerd[1580]: time="2025-06-20T19:19:01.752874838Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-dc7b455cb-p79lm,Uid:d120af62-419d-4085-83c3-a999c759d842,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e94bb1e7f64403b10e38b2f270d4c3ae5cabe2cfe245bb7462e0e262fbe750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.753307 kubelet[2735]: E0620 19:19:01.753212 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e94bb1e7f64403b10e38b2f270d4c3ae5cabe2cfe245bb7462e0e262fbe750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.753307 kubelet[2735]: E0620 19:19:01.753296 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e94bb1e7f64403b10e38b2f270d4c3ae5cabe2cfe245bb7462e0e262fbe750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:01.753430 kubelet[2735]: E0620 19:19:01.753340 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46e94bb1e7f64403b10e38b2f270d4c3ae5cabe2cfe245bb7462e0e262fbe750\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:01.753430 kubelet[2735]: E0620 19:19:01.753414 2735 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-dc7b455cb-p79lm_calico-system(d120af62-419d-4085-83c3-a999c759d842)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-dc7b455cb-p79lm_calico-system(d120af62-419d-4085-83c3-a999c759d842)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46e94bb1e7f64403b10e38b2f270d4c3ae5cabe2cfe245bb7462e0e262fbe750\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-dc7b455cb-p79lm" podUID="d120af62-419d-4085-83c3-a999c759d842" Jun 20 19:19:01.760047 containerd[1580]: time="2025-06-20T19:19:01.759951564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d8f86898-ftrcc,Uid:b96f406f-076f-4859-a5f7-9af8e0765f82,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f861e13b9a215f5e2a7914123cc5c7481cf9dbce307aacfecc8e9e68dcd82584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.760298 kubelet[2735]: E0620 19:19:01.760261 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f861e13b9a215f5e2a7914123cc5c7481cf9dbce307aacfecc8e9e68dcd82584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.760391 kubelet[2735]: E0620 19:19:01.760351 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f861e13b9a215f5e2a7914123cc5c7481cf9dbce307aacfecc8e9e68dcd82584\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:01.760391 kubelet[2735]: E0620 19:19:01.760372 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f861e13b9a215f5e2a7914123cc5c7481cf9dbce307aacfecc8e9e68dcd82584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:01.760470 kubelet[2735]: E0620 19:19:01.760417 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54d8f86898-ftrcc_calico-system(b96f406f-076f-4859-a5f7-9af8e0765f82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54d8f86898-ftrcc_calico-system(b96f406f-076f-4859-a5f7-9af8e0765f82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f861e13b9a215f5e2a7914123cc5c7481cf9dbce307aacfecc8e9e68dcd82584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54d8f86898-ftrcc" podUID="b96f406f-076f-4859-a5f7-9af8e0765f82" Jun 20 19:19:01.776868 containerd[1580]: time="2025-06-20T19:19:01.776760070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmrmr,Uid:a5f547d6-4a58-43de-8d2b-04d7e42e1086,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3982accc98e1ca094733042cd19c35bef97c171308aa1d1a9751138fb1d92a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.777131 kubelet[2735]: E0620 19:19:01.777035 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3982accc98e1ca094733042cd19c35bef97c171308aa1d1a9751138fb1d92a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.777131 kubelet[2735]: E0620 19:19:01.777097 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3982accc98e1ca094733042cd19c35bef97c171308aa1d1a9751138fb1d92a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dmrmr" Jun 20 19:19:01.777131 kubelet[2735]: E0620 19:19:01.777123 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3982accc98e1ca094733042cd19c35bef97c171308aa1d1a9751138fb1d92a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dmrmr" Jun 20 19:19:01.777282 kubelet[2735]: E0620 19:19:01.777168 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dmrmr_kube-system(a5f547d6-4a58-43de-8d2b-04d7e42e1086)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dmrmr_kube-system(a5f547d6-4a58-43de-8d2b-04d7e42e1086)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3982accc98e1ca094733042cd19c35bef97c171308aa1d1a9751138fb1d92a35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dmrmr" podUID="a5f547d6-4a58-43de-8d2b-04d7e42e1086" Jun 20 19:19:01.836960 containerd[1580]: time="2025-06-20T19:19:01.836878143Z" level=error msg="Failed to destroy network for sandbox \"602f613276474cbe37b788cb527eac9185d168c6dd32865602cb9ef51faebe19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.839560 systemd[1]: run-netns-cni\x2da3ca819e\x2d5286\x2dfe6a\x2d3009\x2d0a9f272a4ea6.mount: Deactivated successfully. Jun 20 19:19:01.842244 containerd[1580]: time="2025-06-20T19:19:01.842177862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jjgb,Uid:98422ba0-fce0-437e-87ec-c2741bdfac3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f613276474cbe37b788cb527eac9185d168c6dd32865602cb9ef51faebe19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.842555 kubelet[2735]: E0620 19:19:01.842492 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f613276474cbe37b788cb527eac9185d168c6dd32865602cb9ef51faebe19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:01.842644 kubelet[2735]: E0620 19:19:01.842579 2735 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f613276474cbe37b788cb527eac9185d168c6dd32865602cb9ef51faebe19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:19:01.842644 kubelet[2735]: E0620 19:19:01.842608 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f613276474cbe37b788cb527eac9185d168c6dd32865602cb9ef51faebe19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:19:01.842736 kubelet[2735]: E0620 19:19:01.842670 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6jjgb_calico-system(98422ba0-fce0-437e-87ec-c2741bdfac3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6jjgb_calico-system(98422ba0-fce0-437e-87ec-c2741bdfac3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"602f613276474cbe37b788cb527eac9185d168c6dd32865602cb9ef51faebe19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:19:10.164310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217240665.mount: Deactivated successfully. 
Jun 20 19:19:11.955498 containerd[1580]: time="2025-06-20T19:19:11.954671645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:11.958050 containerd[1580]: time="2025-06-20T19:19:11.957977842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:19:11.980403 containerd[1580]: time="2025-06-20T19:19:11.980332853Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:12.064848 containerd[1580]: time="2025-06-20T19:19:12.064717973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:12.065498 containerd[1580]: time="2025-06-20T19:19:12.065445930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 11.483782103s" Jun 20 19:19:12.065498 containerd[1580]: time="2025-06-20T19:19:12.065498911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:19:12.081690 containerd[1580]: time="2025-06-20T19:19:12.081627118Z" level=info msg="CreateContainer within sandbox \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:19:12.434473 containerd[1580]: time="2025-06-20T19:19:12.434404608Z" level=info msg="Container 
c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:12.701463 containerd[1580]: time="2025-06-20T19:19:12.701257687Z" level=info msg="CreateContainer within sandbox \"b76c94bb59e4c7615e9b4f300e5e22aad56bb34f1ace110ae7b87b352b5b4a63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\"" Jun 20 19:19:12.703197 containerd[1580]: time="2025-06-20T19:19:12.703148616Z" level=info msg="StartContainer for \"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\"" Jun 20 19:19:12.705818 containerd[1580]: time="2025-06-20T19:19:12.705777971Z" level=info msg="connecting to shim c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82" address="unix:///run/containerd/s/7297845e9420f9a9e9b9eee62b65b76efb181fd02be86fd9b2ecd266e60632cf" protocol=ttrpc version=3 Jun 20 19:19:12.737077 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:49672.service - OpenSSH per-connection server daemon (10.0.0.1:49672). Jun 20 19:19:12.756572 systemd[1]: Started cri-containerd-c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82.scope - libcontainer container c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82. Jun 20 19:19:12.809917 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 49672 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:19:12.812196 sshd-session[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:19:12.818643 systemd-logind[1515]: New session 8 of user core. Jun 20 19:19:12.825662 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 19:19:13.585258 containerd[1580]: time="2025-06-20T19:19:13.585163371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-p79lm,Uid:d120af62-419d-4085-83c3-a999c759d842,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:13.588295 containerd[1580]: time="2025-06-20T19:19:13.588251893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jjgb,Uid:98422ba0-fce0-437e-87ec-c2741bdfac3e,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:13.588432 containerd[1580]: time="2025-06-20T19:19:13.588407272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d8f86898-ftrcc,Uid:b96f406f-076f-4859-a5f7-9af8e0765f82,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:13.608114 containerd[1580]: time="2025-06-20T19:19:13.607758070Z" level=info msg="StartContainer for \"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\" returns successfully" Jun 20 19:19:13.646197 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:19:13.646349 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 19:19:13.798070 containerd[1580]: time="2025-06-20T19:19:13.797993553Z" level=error msg="Failed to destroy network for sandbox \"148bba09844664467727ebe7cde55e8e894914ebe77f82441afa462a58ac69c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.801228 systemd[1]: run-netns-cni\x2ddda247cb\x2de152\x2d5619\x2d9d62\x2db9c7871310b4.mount: Deactivated successfully. 
Jun 20 19:19:13.801631 containerd[1580]: time="2025-06-20T19:19:13.801504916Z" level=error msg="Failed to destroy network for sandbox \"1c2a576cd1b5cc6446e4daa5d5e347291253123b3722569bf6e0a7a3d83325bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.806360 sshd[3883]: Connection closed by 10.0.0.1 port 49672 Jun 20 19:19:13.806625 systemd[1]: run-netns-cni\x2d3a937853\x2d9143\x2d4f63\x2d41d9\x2d6885b06f0b14.mount: Deactivated successfully. Jun 20 19:19:13.807351 sshd-session[3861]: pam_unix(sshd:session): session closed for user core Jun 20 19:19:13.813927 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:49672.service: Deactivated successfully. Jun 20 19:19:13.817051 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:19:13.819460 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:19:13.823799 systemd-logind[1515]: Removed session 8. 
Jun 20 19:19:13.829120 containerd[1580]: time="2025-06-20T19:19:13.829053013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-p79lm,Uid:d120af62-419d-4085-83c3-a999c759d842,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"148bba09844664467727ebe7cde55e8e894914ebe77f82441afa462a58ac69c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.829468 kubelet[2735]: E0620 19:19:13.829397 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148bba09844664467727ebe7cde55e8e894914ebe77f82441afa462a58ac69c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.829939 kubelet[2735]: E0620 19:19:13.829525 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148bba09844664467727ebe7cde55e8e894914ebe77f82441afa462a58ac69c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:13.829939 kubelet[2735]: E0620 19:19:13.829547 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148bba09844664467727ebe7cde55e8e894914ebe77f82441afa462a58ac69c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-dc7b455cb-p79lm" Jun 20 19:19:13.829939 kubelet[2735]: E0620 19:19:13.829613 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-dc7b455cb-p79lm_calico-system(d120af62-419d-4085-83c3-a999c759d842)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-dc7b455cb-p79lm_calico-system(d120af62-419d-4085-83c3-a999c759d842)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"148bba09844664467727ebe7cde55e8e894914ebe77f82441afa462a58ac69c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-dc7b455cb-p79lm" podUID="d120af62-419d-4085-83c3-a999c759d842" Jun 20 19:19:13.832382 containerd[1580]: time="2025-06-20T19:19:13.832285092Z" level=error msg="Failed to destroy network for sandbox \"3eea7d9eeb23ac6ed0c5e6303f826045aee00df3f8106bd63c2fc21af07632b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.838968 containerd[1580]: time="2025-06-20T19:19:13.838723920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d8f86898-ftrcc,Uid:b96f406f-076f-4859-a5f7-9af8e0765f82,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2a576cd1b5cc6446e4daa5d5e347291253123b3722569bf6e0a7a3d83325bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.839209 kubelet[2735]: E0620 19:19:13.839170 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"1c2a576cd1b5cc6446e4daa5d5e347291253123b3722569bf6e0a7a3d83325bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.839269 kubelet[2735]: E0620 19:19:13.839237 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2a576cd1b5cc6446e4daa5d5e347291253123b3722569bf6e0a7a3d83325bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:13.839269 kubelet[2735]: E0620 19:19:13.839259 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c2a576cd1b5cc6446e4daa5d5e347291253123b3722569bf6e0a7a3d83325bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d8f86898-ftrcc" Jun 20 19:19:13.840157 kubelet[2735]: E0620 19:19:13.839416 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54d8f86898-ftrcc_calico-system(b96f406f-076f-4859-a5f7-9af8e0765f82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54d8f86898-ftrcc_calico-system(b96f406f-076f-4859-a5f7-9af8e0765f82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c2a576cd1b5cc6446e4daa5d5e347291253123b3722569bf6e0a7a3d83325bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-54d8f86898-ftrcc" podUID="b96f406f-076f-4859-a5f7-9af8e0765f82" Jun 20 19:19:13.852125 containerd[1580]: time="2025-06-20T19:19:13.852023994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jjgb,Uid:98422ba0-fce0-437e-87ec-c2741bdfac3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eea7d9eeb23ac6ed0c5e6303f826045aee00df3f8106bd63c2fc21af07632b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.852462 kubelet[2735]: E0620 19:19:13.852400 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eea7d9eeb23ac6ed0c5e6303f826045aee00df3f8106bd63c2fc21af07632b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:19:13.852523 kubelet[2735]: E0620 19:19:13.852482 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eea7d9eeb23ac6ed0c5e6303f826045aee00df3f8106bd63c2fc21af07632b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:19:13.852523 kubelet[2735]: E0620 19:19:13.852503 2735 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eea7d9eeb23ac6ed0c5e6303f826045aee00df3f8106bd63c2fc21af07632b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jjgb" Jun 20 19:19:13.852610 kubelet[2735]: E0620 19:19:13.852553 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6jjgb_calico-system(98422ba0-fce0-437e-87ec-c2741bdfac3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6jjgb_calico-system(98422ba0-fce0-437e-87ec-c2741bdfac3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3eea7d9eeb23ac6ed0c5e6303f826045aee00df3f8106bd63c2fc21af07632b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6jjgb" podUID="98422ba0-fce0-437e-87ec-c2741bdfac3e" Jun 20 19:19:14.633575 systemd[1]: run-netns-cni\x2dcbec8ff0\x2d4637\x2d83a8\x2d2225\x2de6defe2a2182.mount: Deactivated successfully. 
Jun 20 19:19:14.790474 kubelet[2735]: I0620 19:19:14.790390 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-backend-key-pair\") pod \"b96f406f-076f-4859-a5f7-9af8e0765f82\" (UID: \"b96f406f-076f-4859-a5f7-9af8e0765f82\") " Jun 20 19:19:14.790474 kubelet[2735]: I0620 19:19:14.790462 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-ca-bundle\") pod \"b96f406f-076f-4859-a5f7-9af8e0765f82\" (UID: \"b96f406f-076f-4859-a5f7-9af8e0765f82\") " Jun 20 19:19:14.790474 kubelet[2735]: I0620 19:19:14.790483 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2hm\" (UniqueName: \"kubernetes.io/projected/b96f406f-076f-4859-a5f7-9af8e0765f82-kube-api-access-sj2hm\") pod \"b96f406f-076f-4859-a5f7-9af8e0765f82\" (UID: \"b96f406f-076f-4859-a5f7-9af8e0765f82\") " Jun 20 19:19:14.792061 kubelet[2735]: I0620 19:19:14.791591 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b96f406f-076f-4859-a5f7-9af8e0765f82" (UID: "b96f406f-076f-4859-a5f7-9af8e0765f82"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:19:14.797449 systemd[1]: var-lib-kubelet-pods-b96f406f\x2d076f\x2d4859\x2da5f7\x2d9af8e0765f82-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj2hm.mount: Deactivated successfully. Jun 20 19:19:14.797600 systemd[1]: var-lib-kubelet-pods-b96f406f\x2d076f\x2d4859\x2da5f7\x2d9af8e0765f82-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jun 20 19:19:14.798303 kubelet[2735]: I0620 19:19:14.798263 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96f406f-076f-4859-a5f7-9af8e0765f82-kube-api-access-sj2hm" (OuterVolumeSpecName: "kube-api-access-sj2hm") pod "b96f406f-076f-4859-a5f7-9af8e0765f82" (UID: "b96f406f-076f-4859-a5f7-9af8e0765f82"). InnerVolumeSpecName "kube-api-access-sj2hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:19:14.799620 kubelet[2735]: I0620 19:19:14.798268 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b96f406f-076f-4859-a5f7-9af8e0765f82" (UID: "b96f406f-076f-4859-a5f7-9af8e0765f82"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:19:14.821733 containerd[1580]: time="2025-06-20T19:19:14.821679937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\" id:\"43cf21c3457f619a73f253b40338e7fbed86b6ea30ca003a29848d0ac0807bfa\" pid:4029 exit_status:1 exited_at:{seconds:1750447154 nanos:821215938}" Jun 20 19:19:14.891740 kubelet[2735]: I0620 19:19:14.891517 2735 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 20 19:19:14.891740 kubelet[2735]: I0620 19:19:14.891560 2735 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96f406f-076f-4859-a5f7-9af8e0765f82-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 20 19:19:14.891740 kubelet[2735]: I0620 19:19:14.891570 2735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2hm\" 
(UniqueName: \"kubernetes.io/projected/b96f406f-076f-4859-a5f7-9af8e0765f82-kube-api-access-sj2hm\") on node \"localhost\" DevicePath \"\"" Jun 20 19:19:15.069472 kubelet[2735]: E0620 19:19:15.069065 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:15.069687 containerd[1580]: time="2025-06-20T19:19:15.069087067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-xj2bn,Uid:ea3da14f-e857-457d-b1b7-a4caf7621c08,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:15.069687 containerd[1580]: time="2025-06-20T19:19:15.069088028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58866ffd4c-nxr8s,Uid:63e4e75a-a49f-4727-a6dc-2d7c2d187722,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:15.069687 containerd[1580]: time="2025-06-20T19:19:15.069292309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdmdr,Uid:c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c,Namespace:kube-system,Attempt:0,}" Jun 20 19:19:15.069687 containerd[1580]: time="2025-06-20T19:19:15.069400467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d7f87d-5sjqb,Uid:81b12d88-c6b2-47cb-a67c-8cbc122dfaf9,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:15.579707 systemd-networkd[1467]: cali6b80b205046: Link UP Jun 20 19:19:15.580491 systemd-networkd[1467]: cali6b80b205046: Gained carrier Jun 20 19:19:15.597392 kubelet[2735]: I0620 19:19:15.594594 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-57qld" podStartSLOduration=4.253528016 podStartE2EDuration="36.594572475s" podCreationTimestamp="2025-06-20 19:18:39 +0000 UTC" firstStartedPulling="2025-06-20 19:18:39.725416679 +0000 UTC m=+21.755698709" lastFinishedPulling="2025-06-20 19:19:12.066461148 +0000 UTC m=+54.096743168" 
observedRunningTime="2025-06-20 19:19:15.088753917 +0000 UTC m=+57.119035948" watchObservedRunningTime="2025-06-20 19:19:15.594572475 +0000 UTC m=+57.624854505" Jun 20 19:19:15.598690 containerd[1580]: 2025-06-20 19:19:15.448 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:15.598690 containerd[1580]: 2025-06-20 19:19:15.464 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0 calico-apiserver-847d7f87d- calico-apiserver 81b12d88-c6b2-47cb-a67c-8cbc122dfaf9 919 0 2025-06-20 19:18:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847d7f87d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-847d7f87d-5sjqb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b80b205046 [] [] }} ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-" Jun 20 19:19:15.598690 containerd[1580]: 2025-06-20 19:19:15.464 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.598690 containerd[1580]: 2025-06-20 19:19:15.525 [INFO][4132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" HandleID="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" 
Workload="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" HandleID="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Workload="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-847d7f87d-5sjqb", "timestamp":"2025-06-20 19:19:15.525895424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.538 [INFO][4132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" host="localhost" Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.544 [INFO][4132] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.549 [INFO][4132] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.551 [INFO][4132] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.554 [INFO][4132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.598930 containerd[1580]: 2025-06-20 19:19:15.554 [INFO][4132] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" host="localhost" Jun 20 19:19:15.599215 containerd[1580]: 2025-06-20 19:19:15.556 [INFO][4132] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99 Jun 20 19:19:15.599215 containerd[1580]: 2025-06-20 19:19:15.561 [INFO][4132] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" host="localhost" Jun 20 19:19:15.599215 containerd[1580]: 2025-06-20 19:19:15.567 [INFO][4132] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" host="localhost" Jun 20 19:19:15.599215 containerd[1580]: 2025-06-20 19:19:15.567 [INFO][4132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" host="localhost" Jun 20 19:19:15.599215 containerd[1580]: 2025-06-20 19:19:15.567 [INFO][4132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:15.599215 containerd[1580]: 2025-06-20 19:19:15.567 [INFO][4132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" HandleID="k8s-pod-network.b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Workload="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.599404 containerd[1580]: 2025-06-20 19:19:15.571 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0", GenerateName:"calico-apiserver-847d7f87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"81b12d88-c6b2-47cb-a67c-8cbc122dfaf9", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d7f87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-847d7f87d-5sjqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b80b205046", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.599477 containerd[1580]: 2025-06-20 19:19:15.571 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.599477 containerd[1580]: 2025-06-20 19:19:15.571 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b80b205046 ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.599477 containerd[1580]: 2025-06-20 19:19:15.580 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.599564 containerd[1580]: 2025-06-20 19:19:15.581 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0", GenerateName:"calico-apiserver-847d7f87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"81b12d88-c6b2-47cb-a67c-8cbc122dfaf9", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d7f87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99", Pod:"calico-apiserver-847d7f87d-5sjqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b80b205046", MAC:"1a:6d:cb:2f:28:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.599630 containerd[1580]: 2025-06-20 19:19:15.594 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-5sjqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--5sjqb-eth0" Jun 20 19:19:15.630326 systemd[1]: Removed slice kubepods-besteffort-podb96f406f_076f_4859_a5f7_9af8e0765f82.slice - libcontainer container kubepods-besteffort-podb96f406f_076f_4859_a5f7_9af8e0765f82.slice. Jun 20 19:19:15.710546 systemd-networkd[1467]: calib093d0c69a0: Link UP Jun 20 19:19:15.712152 systemd-networkd[1467]: calib093d0c69a0: Gained carrier Jun 20 19:19:15.746032 systemd[1]: Created slice kubepods-besteffort-pod309ffb27_1d3e_4b78_969a_27160f0cd18e.slice - libcontainer container kubepods-besteffort-pod309ffb27_1d3e_4b78_969a_27160f0cd18e.slice. Jun 20 19:19:15.762869 containerd[1580]: time="2025-06-20T19:19:15.762177098Z" level=info msg="connecting to shim b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99" address="unix:///run/containerd/s/38d05cf6db0b480450b777392955dd9ca33f9dfb097028dd39cf47185024366a" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:15.785116 containerd[1580]: 2025-06-20 19:19:15.395 [INFO][4064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:15.785116 containerd[1580]: 2025-06-20 19:19:15.414 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0 calico-kube-controllers-58866ffd4c- calico-system 63e4e75a-a49f-4727-a6dc-2d7c2d187722 916 0 2025-06-20 19:18:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58866ffd4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58866ffd4c-nxr8s eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] calib093d0c69a0 [] [] }} ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-" Jun 20 19:19:15.785116 containerd[1580]: 2025-06-20 19:19:15.414 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.785116 containerd[1580]: 2025-06-20 19:19:15.525 [INFO][4115] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" HandleID="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Workload="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4115] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" HandleID="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Workload="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58866ffd4c-nxr8s", "timestamp":"2025-06-20 19:19:15.525740077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4115] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.568 [INFO][4115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.568 [INFO][4115] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.640 [INFO][4115] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" host="localhost" Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.648 [INFO][4115] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.656 [INFO][4115] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.660 [INFO][4115] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.664 [INFO][4115] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.786668 containerd[1580]: 2025-06-20 19:19:15.664 [INFO][4115] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" host="localhost" Jun 20 19:19:15.786921 containerd[1580]: 2025-06-20 19:19:15.668 [INFO][4115] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4 Jun 20 19:19:15.786921 containerd[1580]: 2025-06-20 19:19:15.682 [INFO][4115] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" host="localhost" Jun 20 19:19:15.786921 
containerd[1580]: 2025-06-20 19:19:15.694 [INFO][4115] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" host="localhost" Jun 20 19:19:15.786921 containerd[1580]: 2025-06-20 19:19:15.695 [INFO][4115] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" host="localhost" Jun 20 19:19:15.786921 containerd[1580]: 2025-06-20 19:19:15.695 [INFO][4115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:15.786921 containerd[1580]: 2025-06-20 19:19:15.695 [INFO][4115] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" HandleID="k8s-pod-network.5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Workload="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.787050 containerd[1580]: 2025-06-20 19:19:15.700 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0", GenerateName:"calico-kube-controllers-58866ffd4c-", Namespace:"calico-system", SelfLink:"", UID:"63e4e75a-a49f-4727-a6dc-2d7c2d187722", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58866ffd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58866ffd4c-nxr8s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib093d0c69a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.787109 containerd[1580]: 2025-06-20 19:19:15.701 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.787109 containerd[1580]: 2025-06-20 19:19:15.701 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib093d0c69a0 ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.787109 containerd[1580]: 2025-06-20 19:19:15.711 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" 
Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.787169 containerd[1580]: 2025-06-20 19:19:15.713 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0", GenerateName:"calico-kube-controllers-58866ffd4c-", Namespace:"calico-system", SelfLink:"", UID:"63e4e75a-a49f-4727-a6dc-2d7c2d187722", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58866ffd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4", Pod:"calico-kube-controllers-58866ffd4c-nxr8s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"calib093d0c69a0", MAC:"a2:7d:36:9c:4e:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.787222 containerd[1580]: 2025-06-20 19:19:15.752 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" Namespace="calico-system" Pod="calico-kube-controllers-58866ffd4c-nxr8s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58866ffd4c--nxr8s-eth0" Jun 20 19:19:15.839912 systemd[1]: Started cri-containerd-b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99.scope - libcontainer container b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99. Jun 20 19:19:15.849339 containerd[1580]: time="2025-06-20T19:19:15.849165172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\" id:\"e16f68922760340a6e5e828dbdb22f90768a58f40c9714e0ab2c2064c48ad901\" pid:4175 exit_status:1 exited_at:{seconds:1750447155 nanos:848843877}" Jun 20 19:19:15.865330 containerd[1580]: time="2025-06-20T19:19:15.865250012Z" level=info msg="connecting to shim 5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4" address="unix:///run/containerd/s/a37008523b6db5eecbbc81c658de9b0c9479200faf4f2bc09b216fa9789ae838" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:15.872016 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:15.881895 systemd-networkd[1467]: calid418acbe18d: Link UP Jun 20 19:19:15.884681 systemd-networkd[1467]: calid418acbe18d: Gained carrier Jun 20 19:19:15.902656 kubelet[2735]: I0620 19:19:15.901944 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/309ffb27-1d3e-4b78-969a-27160f0cd18e-whisker-backend-key-pair\") pod \"whisker-6788ff7d59-b7tj2\" (UID: \"309ffb27-1d3e-4b78-969a-27160f0cd18e\") " pod="calico-system/whisker-6788ff7d59-b7tj2" Jun 20 19:19:15.902656 kubelet[2735]: I0620 19:19:15.902038 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvjq\" (UniqueName: \"kubernetes.io/projected/309ffb27-1d3e-4b78-969a-27160f0cd18e-kube-api-access-9nvjq\") pod \"whisker-6788ff7d59-b7tj2\" (UID: \"309ffb27-1d3e-4b78-969a-27160f0cd18e\") " pod="calico-system/whisker-6788ff7d59-b7tj2" Jun 20 19:19:15.902656 kubelet[2735]: I0620 19:19:15.902063 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309ffb27-1d3e-4b78-969a-27160f0cd18e-whisker-ca-bundle\") pod \"whisker-6788ff7d59-b7tj2\" (UID: \"309ffb27-1d3e-4b78-969a-27160f0cd18e\") " pod="calico-system/whisker-6788ff7d59-b7tj2" Jun 20 19:19:15.908098 containerd[1580]: 2025-06-20 19:19:15.333 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:15.908098 containerd[1580]: 2025-06-20 19:19:15.409 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0 calico-apiserver-699c44cbf4- calico-apiserver ea3da14f-e857-457d-b1b7-a4caf7621c08 927 0 2025-06-20 19:18:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699c44cbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-699c44cbf4-xj2bn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid418acbe18d [] [] }} 
ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-" Jun 20 19:19:15.908098 containerd[1580]: 2025-06-20 19:19:15.409 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.908098 containerd[1580]: 2025-06-20 19:19:15.525 [INFO][4113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-699c44cbf4-xj2bn", "timestamp":"2025-06-20 19:19:15.525387341 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.695 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.695 [INFO][4113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.772 [INFO][4113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" host="localhost" Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.799 [INFO][4113] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.821 [INFO][4113] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.833 [INFO][4113] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.839 [INFO][4113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.908822 containerd[1580]: 2025-06-20 19:19:15.839 [INFO][4113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" host="localhost" Jun 20 19:19:15.909155 containerd[1580]: 2025-06-20 19:19:15.843 [INFO][4113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44 Jun 20 19:19:15.909155 containerd[1580]: 2025-06-20 19:19:15.858 [INFO][4113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" host="localhost" Jun 20 19:19:15.909155 containerd[1580]: 2025-06-20 19:19:15.870 [INFO][4113] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" host="localhost" Jun 20 19:19:15.909155 containerd[1580]: 2025-06-20 19:19:15.870 [INFO][4113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" host="localhost" Jun 20 19:19:15.909155 containerd[1580]: 2025-06-20 19:19:15.872 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:15.909155 containerd[1580]: 2025-06-20 19:19:15.872 [INFO][4113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.909281 containerd[1580]: 2025-06-20 19:19:15.878 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0", GenerateName:"calico-apiserver-699c44cbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea3da14f-e857-457d-b1b7-a4caf7621c08", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c44cbf4", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-699c44cbf4-xj2bn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid418acbe18d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.909465 containerd[1580]: 2025-06-20 19:19:15.878 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.909465 containerd[1580]: 2025-06-20 19:19:15.878 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid418acbe18d ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.909465 containerd[1580]: 2025-06-20 19:19:15.886 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.909576 containerd[1580]: 2025-06-20 
19:19:15.888 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0", GenerateName:"calico-apiserver-699c44cbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea3da14f-e857-457d-b1b7-a4caf7621c08", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c44cbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44", Pod:"calico-apiserver-699c44cbf4-xj2bn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid418acbe18d", MAC:"5a:d2:95:77:31:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.909628 containerd[1580]: 2025-06-20 19:19:15.901 [INFO][4048] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-xj2bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:15.922627 systemd[1]: Started cri-containerd-5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4.scope - libcontainer container 5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4. Jun 20 19:19:15.957392 systemd-networkd[1467]: cali2ee5a4c32a0: Link UP Jun 20 19:19:15.963037 systemd-networkd[1467]: cali2ee5a4c32a0: Gained carrier Jun 20 19:19:15.968164 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:15.971017 containerd[1580]: time="2025-06-20T19:19:15.970967742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d7f87d-5sjqb,Uid:81b12d88-c6b2-47cb-a67c-8cbc122dfaf9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99\"" Jun 20 19:19:15.991253 containerd[1580]: 2025-06-20 19:19:15.454 [INFO][4085] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:15.991253 containerd[1580]: 2025-06-20 19:19:15.474 [INFO][4085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0 coredns-7c65d6cfc9- kube-system c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c 931 0 2025-06-20 19:18:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hdmdr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ee5a4c32a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-" Jun 20 19:19:15.991253 containerd[1580]: 2025-06-20 19:19:15.474 [INFO][4085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.991253 containerd[1580]: 2025-06-20 19:19:15.525 [INFO][4138] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" HandleID="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Workload="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4138] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" HandleID="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Workload="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hdmdr", "timestamp":"2025-06-20 19:19:15.52539213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.526 [INFO][4138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.871 [INFO][4138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.871 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.883 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" host="localhost" Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.896 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.907 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.911 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.914 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:15.991580 containerd[1580]: 2025-06-20 19:19:15.915 [INFO][4138] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" host="localhost" Jun 20 19:19:15.991788 containerd[1580]: 2025-06-20 19:19:15.921 [INFO][4138] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec Jun 20 19:19:15.991788 containerd[1580]: 2025-06-20 19:19:15.926 [INFO][4138] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" host="localhost" Jun 20 19:19:15.991788 containerd[1580]: 2025-06-20 19:19:15.937 [INFO][4138] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" host="localhost" Jun 20 19:19:15.991788 containerd[1580]: 2025-06-20 19:19:15.937 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" host="localhost" Jun 20 19:19:15.991788 containerd[1580]: 2025-06-20 19:19:15.937 [INFO][4138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:15.991788 containerd[1580]: 2025-06-20 19:19:15.937 [INFO][4138] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" HandleID="k8s-pod-network.35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Workload="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.991933 containerd[1580]: 2025-06-20 19:19:15.950 [INFO][4085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hdmdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ee5a4c32a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.992021 containerd[1580]: 2025-06-20 19:19:15.951 [INFO][4085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.992021 containerd[1580]: 2025-06-20 19:19:15.952 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ee5a4c32a0 ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.992021 containerd[1580]: 2025-06-20 19:19:15.965 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.992120 containerd[1580]: 2025-06-20 19:19:15.966 [INFO][4085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec", Pod:"coredns-7c65d6cfc9-hdmdr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ee5a4c32a0", MAC:"ce:ac:57:4f:59:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:15.992120 containerd[1580]: 2025-06-20 19:19:15.980 [INFO][4085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdmdr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hdmdr-eth0" Jun 20 19:19:15.994606 containerd[1580]: time="2025-06-20T19:19:15.994307580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:19:16.025271 containerd[1580]: time="2025-06-20T19:19:16.025121989Z" level=info msg="connecting to shim 131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" address="unix:///run/containerd/s/c127c947fd8af046bbb89e9b831fd66f0c96000dc605de18ba047bbbc0280173" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:16.041833 containerd[1580]: time="2025-06-20T19:19:16.041757337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58866ffd4c-nxr8s,Uid:63e4e75a-a49f-4727-a6dc-2d7c2d187722,Namespace:calico-system,Attempt:0,} returns sandbox id \"5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4\"" Jun 20 19:19:16.042239 containerd[1580]: time="2025-06-20T19:19:16.042192100Z" level=info msg="connecting to shim 35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec" address="unix:///run/containerd/s/e78beaa8cd70e638e86db8fdc4045037cce62b17cf1fa422fc47132df0168ce7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:16.053368 containerd[1580]: time="2025-06-20T19:19:16.053172140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6788ff7d59-b7tj2,Uid:309ffb27-1d3e-4b78-969a-27160f0cd18e,Namespace:calico-system,Attempt:0,}" Jun 20 
19:19:16.072400 kubelet[2735]: E0620 19:19:16.072341 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:16.073183 containerd[1580]: time="2025-06-20T19:19:16.073128386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmrmr,Uid:a5f547d6-4a58-43de-8d2b-04d7e42e1086,Namespace:kube-system,Attempt:0,}" Jun 20 19:19:16.075590 systemd[1]: Started cri-containerd-131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44.scope - libcontainer container 131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44. Jun 20 19:19:16.076140 containerd[1580]: time="2025-06-20T19:19:16.075876286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-2kwq5,Uid:031ba079-1aa1-4e85-90ea-f180e62009e8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:16.077286 containerd[1580]: time="2025-06-20T19:19:16.077233876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\" id:\"842fd68eab6c6c08a45cc78046be0bbd097fa5876a53348f6a36307c3cdef7cb\" pid:4270 exit_status:1 exited_at:{seconds:1750447156 nanos:73619376}" Jun 20 19:19:16.081724 systemd[1]: Started cri-containerd-35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec.scope - libcontainer container 35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec. 
Jun 20 19:19:16.085050 kubelet[2735]: I0620 19:19:16.085002 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96f406f-076f-4859-a5f7-9af8e0765f82" path="/var/lib/kubelet/pods/b96f406f-076f-4859-a5f7-9af8e0765f82/volumes" Jun 20 19:19:16.097838 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:16.108491 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:16.160188 containerd[1580]: time="2025-06-20T19:19:16.160031929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdmdr,Uid:c7d7f4ed-f18b-44e9-aa65-bb4db200fe3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec\"" Jun 20 19:19:16.161032 kubelet[2735]: E0620 19:19:16.160932 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:16.174168 containerd[1580]: time="2025-06-20T19:19:16.174110541Z" level=info msg="CreateContainer within sandbox \"35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:19:16.200918 containerd[1580]: time="2025-06-20T19:19:16.200857086Z" level=info msg="Container d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:16.207895 containerd[1580]: time="2025-06-20T19:19:16.207389503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-xj2bn,Uid:ea3da14f-e857-457d-b1b7-a4caf7621c08,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\"" Jun 20 19:19:16.217262 containerd[1580]: time="2025-06-20T19:19:16.216848532Z" level=info 
msg="CreateContainer within sandbox \"35dd311b947554735053ade63cbf19942f4c074b7fd30a7eacf878f072f95dec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1\"" Jun 20 19:19:16.218484 containerd[1580]: time="2025-06-20T19:19:16.218458463Z" level=info msg="StartContainer for \"d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1\"" Jun 20 19:19:16.221205 containerd[1580]: time="2025-06-20T19:19:16.220699113Z" level=info msg="connecting to shim d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1" address="unix:///run/containerd/s/e78beaa8cd70e638e86db8fdc4045037cce62b17cf1fa422fc47132df0168ce7" protocol=ttrpc version=3 Jun 20 19:19:16.258029 systemd-networkd[1467]: calidea4e7e294d: Link UP Jun 20 19:19:16.258792 systemd-networkd[1467]: calidea4e7e294d: Gained carrier Jun 20 19:19:16.262663 systemd[1]: Started cri-containerd-d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1.scope - libcontainer container d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1. 
Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.111 [INFO][4387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.144 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6788ff7d59--b7tj2-eth0 whisker-6788ff7d59- calico-system 309ffb27-1d3e-4b78-969a-27160f0cd18e 1059 0 2025-06-20 19:19:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6788ff7d59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6788ff7d59-b7tj2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidea4e7e294d [] [] }} ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.145 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.192 [INFO][4451] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" HandleID="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Workload="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.192 [INFO][4451] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" 
HandleID="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Workload="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005164b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6788ff7d59-b7tj2", "timestamp":"2025-06-20 19:19:16.192003065 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.192 [INFO][4451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.192 [INFO][4451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.192 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.202 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.209 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.215 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.218 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.222 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.223 
[INFO][4451] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.225 [INFO][4451] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7 Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.232 [INFO][4451] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.239 [INFO][4451] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.240 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" host="localhost" Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.240 [INFO][4451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:19:16.279535 containerd[1580]: 2025-06-20 19:19:16.240 [INFO][4451] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" HandleID="k8s-pod-network.19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Workload="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.280232 containerd[1580]: 2025-06-20 19:19:16.247 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6788ff7d59--b7tj2-eth0", GenerateName:"whisker-6788ff7d59-", Namespace:"calico-system", SelfLink:"", UID:"309ffb27-1d3e-4b78-969a-27160f0cd18e", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 19, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6788ff7d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6788ff7d59-b7tj2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidea4e7e294d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:16.280232 containerd[1580]: 2025-06-20 19:19:16.248 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.280232 containerd[1580]: 2025-06-20 19:19:16.249 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidea4e7e294d ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.280232 containerd[1580]: 2025-06-20 19:19:16.263 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.280232 containerd[1580]: 2025-06-20 19:19:16.264 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6788ff7d59--b7tj2-eth0", GenerateName:"whisker-6788ff7d59-", Namespace:"calico-system", SelfLink:"", UID:"309ffb27-1d3e-4b78-969a-27160f0cd18e", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 19, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6788ff7d59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7", Pod:"whisker-6788ff7d59-b7tj2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidea4e7e294d", MAC:"2e:2e:84:34:8d:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:16.280232 containerd[1580]: 2025-06-20 19:19:16.275 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" Namespace="calico-system" Pod="whisker-6788ff7d59-b7tj2" WorkloadEndpoint="localhost-k8s-whisker--6788ff7d59--b7tj2-eth0" Jun 20 19:19:16.305741 containerd[1580]: time="2025-06-20T19:19:16.305671439Z" level=info msg="connecting to shim 19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7" address="unix:///run/containerd/s/fc9a4388df588c7e26a349cb83badb4ab8b4ab75e27cc67d31e7847e5e3e2621" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:16.338766 containerd[1580]: time="2025-06-20T19:19:16.338679000Z" level=info msg="StartContainer for \"d665b695b01394a9d480f8716b84883df452db2a8c763a15870787288814aaa1\" returns successfully" Jun 20 19:19:16.345569 systemd[1]: Started 
cri-containerd-19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7.scope - libcontainer container 19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7. Jun 20 19:19:16.364419 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:16.415678 containerd[1580]: time="2025-06-20T19:19:16.415601124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6788ff7d59-b7tj2,Uid:309ffb27-1d3e-4b78-969a-27160f0cd18e,Namespace:calico-system,Attempt:0,} returns sandbox id \"19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7\"" Jun 20 19:19:16.459350 systemd-networkd[1467]: calib9da04a25ba: Link UP Jun 20 19:19:16.459648 systemd-networkd[1467]: calib9da04a25ba: Gained carrier Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.167 [INFO][4420] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.188 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0 coredns-7c65d6cfc9- kube-system a5f547d6-4a58-43de-8d2b-04d7e42e1086 930 0 2025-06-20 19:18:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dmrmr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9da04a25ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.188 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.246 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" HandleID="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Workload="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.246 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" HandleID="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Workload="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dmrmr", "timestamp":"2025-06-20 19:19:16.246628674 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.247 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.247 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.247 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.304 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.312 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.320 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.328 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.331 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.331 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.334 [INFO][4469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199 Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.363 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.437 [INFO][4469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.437 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" host="localhost" Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.437 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:16.491608 containerd[1580]: 2025-06-20 19:19:16.437 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" HandleID="k8s-pod-network.36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Workload="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.493786 containerd[1580]: 2025-06-20 19:19:16.445 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a5f547d6-4a58-43de-8d2b-04d7e42e1086", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dmrmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9da04a25ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:16.493786 containerd[1580]: 2025-06-20 19:19:16.446 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.493786 containerd[1580]: 2025-06-20 19:19:16.446 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9da04a25ba ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.493786 containerd[1580]: 2025-06-20 19:19:16.461 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.493786 containerd[1580]: 2025-06-20 19:19:16.462 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a5f547d6-4a58-43de-8d2b-04d7e42e1086", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199", Pod:"coredns-7c65d6cfc9-dmrmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9da04a25ba", MAC:"de:2f:d2:a3:e3:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:16.493786 containerd[1580]: 2025-06-20 19:19:16.487 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dmrmr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dmrmr-eth0" Jun 20 19:19:16.564055 containerd[1580]: time="2025-06-20T19:19:16.563933590Z" level=info msg="connecting to shim 36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199" address="unix:///run/containerd/s/eb279f0c5c91611597522b501286294191cafb837844b16b7e4a2b0be31a1602" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:16.569998 systemd-networkd[1467]: cali87fe69c508e: Link UP Jun 20 19:19:16.571528 systemd-networkd[1467]: cali87fe69c508e: Gained carrier Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.194 [INFO][4436] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.209 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0 calico-apiserver-699c44cbf4- calico-apiserver 031ba079-1aa1-4e85-90ea-f180e62009e8 932 0 2025-06-20 19:18:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699c44cbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-699c44cbf4-2kwq5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87fe69c508e 
[] [] }} ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.209 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.263 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.263 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005852c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-699c44cbf4-2kwq5", "timestamp":"2025-06-20 19:19:16.263452061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.263 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.438 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.439 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.456 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.490 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.500 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.504 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.512 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.512 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.515 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48 Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.531 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.542 [INFO][4476] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.545 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" host="localhost" Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.545 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:16.608462 containerd[1580]: 2025-06-20 19:19:16.545 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.609655 containerd[1580]: 2025-06-20 19:19:16.554 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0", GenerateName:"calico-apiserver-699c44cbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"031ba079-1aa1-4e85-90ea-f180e62009e8", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c44cbf4", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-699c44cbf4-2kwq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87fe69c508e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:16.609655 containerd[1580]: 2025-06-20 19:19:16.557 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.609655 containerd[1580]: 2025-06-20 19:19:16.558 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87fe69c508e ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.609655 containerd[1580]: 2025-06-20 19:19:16.571 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.609655 containerd[1580]: 2025-06-20 
19:19:16.571 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0", GenerateName:"calico-apiserver-699c44cbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"031ba079-1aa1-4e85-90ea-f180e62009e8", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699c44cbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48", Pod:"calico-apiserver-699c44cbf4-2kwq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87fe69c508e", MAC:"2e:43:ba:8c:05:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:16.609655 containerd[1580]: 2025-06-20 19:19:16.588 [INFO][4436] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Namespace="calico-apiserver" Pod="calico-apiserver-699c44cbf4-2kwq5" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:16.636672 kubelet[2735]: E0620 19:19:16.635365 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:16.681515 systemd[1]: Started cri-containerd-36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199.scope - libcontainer container 36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199. Jun 20 19:19:16.706595 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:16.829968 kubelet[2735]: I0620 19:19:16.829719 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hdmdr" podStartSLOduration=54.829693233 podStartE2EDuration="54.829693233s" podCreationTimestamp="2025-06-20 19:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:19:16.828147124 +0000 UTC m=+58.858429164" watchObservedRunningTime="2025-06-20 19:19:16.829693233 +0000 UTC m=+58.859975264" Jun 20 19:19:16.880112 containerd[1580]: time="2025-06-20T19:19:16.879996047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmrmr,Uid:a5f547d6-4a58-43de-8d2b-04d7e42e1086,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199\"" Jun 20 19:19:16.882103 kubelet[2735]: E0620 19:19:16.882072 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jun 20 19:19:16.889743 containerd[1580]: time="2025-06-20T19:19:16.889555868Z" level=info msg="CreateContainer within sandbox \"36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:19:16.949512 systemd-networkd[1467]: calib093d0c69a0: Gained IPv6LL Jun 20 19:19:17.206661 systemd-networkd[1467]: cali2ee5a4c32a0: Gained IPv6LL Jun 20 19:19:17.207046 systemd-networkd[1467]: cali6b80b205046: Gained IPv6LL Jun 20 19:19:17.230509 systemd-networkd[1467]: vxlan.calico: Link UP Jun 20 19:19:17.230521 systemd-networkd[1467]: vxlan.calico: Gained carrier Jun 20 19:19:17.333344 containerd[1580]: time="2025-06-20T19:19:17.330525826Z" level=info msg="Container 0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:17.337073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811990988.mount: Deactivated successfully. Jun 20 19:19:17.389528 containerd[1580]: time="2025-06-20T19:19:17.389474562Z" level=info msg="connecting to shim 28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" address="unix:///run/containerd/s/637ee73906e4c4966fdbd81f6da34c0d01c27ca8c6e490be95f7483549951bb0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:17.431776 systemd[1]: Started cri-containerd-28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48.scope - libcontainer container 28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48. 
Jun 20 19:19:17.437714 containerd[1580]: time="2025-06-20T19:19:17.437666295Z" level=info msg="CreateContainer within sandbox \"36b5a2d6bdacff6dfe26f2b9f686bc49cca4a457533c12e95767c074a9948199\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c\"" Jun 20 19:19:17.445615 containerd[1580]: time="2025-06-20T19:19:17.445425833Z" level=info msg="StartContainer for \"0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c\"" Jun 20 19:19:17.446548 containerd[1580]: time="2025-06-20T19:19:17.446505138Z" level=info msg="connecting to shim 0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c" address="unix:///run/containerd/s/eb279f0c5c91611597522b501286294191cafb837844b16b7e4a2b0be31a1602" protocol=ttrpc version=3 Jun 20 19:19:17.457171 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:17.485559 systemd[1]: Started cri-containerd-0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c.scope - libcontainer container 0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c. 
Jun 20 19:19:17.589594 systemd-networkd[1467]: calidea4e7e294d: Gained IPv6LL Jun 20 19:19:17.845470 systemd-networkd[1467]: cali87fe69c508e: Gained IPv6LL Jun 20 19:19:17.909539 systemd-networkd[1467]: calid418acbe18d: Gained IPv6LL Jun 20 19:19:17.924524 containerd[1580]: time="2025-06-20T19:19:17.924453352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699c44cbf4-2kwq5,Uid:031ba079-1aa1-4e85-90ea-f180e62009e8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\"" Jun 20 19:19:17.926199 containerd[1580]: time="2025-06-20T19:19:17.926040909Z" level=info msg="StartContainer for \"0bb4ce4d3642bb61a57687b3a37a126a6f324cb8b1059fde45492c5468d2775c\" returns successfully" Jun 20 19:19:17.928073 kubelet[2735]: E0620 19:19:17.928017 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:18.038489 systemd-networkd[1467]: calib9da04a25ba: Gained IPv6LL Jun 20 19:19:18.485537 systemd-networkd[1467]: vxlan.calico: Gained IPv6LL Jun 20 19:19:18.820575 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:58216.service - OpenSSH per-connection server daemon (10.0.0.1:58216). Jun 20 19:19:18.900887 sshd[4932]: Accepted publickey for core from 10.0.0.1 port 58216 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:19:18.902598 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:19:18.907696 systemd-logind[1515]: New session 9 of user core. Jun 20 19:19:18.922566 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 20 19:19:18.930987 kubelet[2735]: E0620 19:19:18.930952 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:18.930987 kubelet[2735]: E0620 19:19:18.930967 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:18.999724 kubelet[2735]: I0620 19:19:18.999637 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dmrmr" podStartSLOduration=56.999618629 podStartE2EDuration="56.999618629s" podCreationTimestamp="2025-06-20 19:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:19:18.998784244 +0000 UTC m=+61.029066274" watchObservedRunningTime="2025-06-20 19:19:18.999618629 +0000 UTC m=+61.029900659" Jun 20 19:19:19.073881 sshd[4936]: Connection closed by 10.0.0.1 port 58216 Jun 20 19:19:19.074119 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Jun 20 19:19:19.078788 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:58216.service: Deactivated successfully. Jun 20 19:19:19.081251 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:19:19.082236 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:19:19.083602 systemd-logind[1515]: Removed session 9. 
Jun 20 19:19:20.278251 containerd[1580]: time="2025-06-20T19:19:20.278143001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:20.279140 containerd[1580]: time="2025-06-20T19:19:20.279060764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:19:20.280620 containerd[1580]: time="2025-06-20T19:19:20.280573434Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:20.283498 containerd[1580]: time="2025-06-20T19:19:20.283395867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:20.284285 containerd[1580]: time="2025-06-20T19:19:20.284251231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 4.289878205s" Jun 20 19:19:20.284285 containerd[1580]: time="2025-06-20T19:19:20.284283543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:19:20.285201 containerd[1580]: time="2025-06-20T19:19:20.285163895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:19:20.291955 containerd[1580]: time="2025-06-20T19:19:20.291902319Z" level=info msg="CreateContainer within sandbox 
\"b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:19:20.303556 containerd[1580]: time="2025-06-20T19:19:20.303480572Z" level=info msg="Container 28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:20.356732 containerd[1580]: time="2025-06-20T19:19:20.356659167Z" level=info msg="CreateContainer within sandbox \"b99c0f7db435cace70377976514a1383fa35fdb5ec1d0fb83f22a5bb108b4a99\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2\"" Jun 20 19:19:20.357337 containerd[1580]: time="2025-06-20T19:19:20.357287668Z" level=info msg="StartContainer for \"28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2\"" Jun 20 19:19:20.358696 containerd[1580]: time="2025-06-20T19:19:20.358623861Z" level=info msg="connecting to shim 28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2" address="unix:///run/containerd/s/38d05cf6db0b480450b777392955dd9ca33f9dfb097028dd39cf47185024366a" protocol=ttrpc version=3 Jun 20 19:19:20.407532 systemd[1]: Started cri-containerd-28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2.scope - libcontainer container 28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2. 
Jun 20 19:19:20.465171 containerd[1580]: time="2025-06-20T19:19:20.465124035Z" level=info msg="StartContainer for \"28a9080a2cb9d308babced270d5eec5c83853ad45569350c405073a9d86093b2\" returns successfully" Jun 20 19:19:20.846963 kubelet[2735]: E0620 19:19:20.846880 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:20.952529 kubelet[2735]: E0620 19:19:20.952473 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:21.031585 kubelet[2735]: I0620 19:19:21.031387 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-847d7f87d-5sjqb" podStartSLOduration=41.74025248 podStartE2EDuration="46.031363844s" podCreationTimestamp="2025-06-20 19:18:35 +0000 UTC" firstStartedPulling="2025-06-20 19:19:15.993962348 +0000 UTC m=+58.024244378" lastFinishedPulling="2025-06-20 19:19:20.285073712 +0000 UTC m=+62.315355742" observedRunningTime="2025-06-20 19:19:21.014140511 +0000 UTC m=+63.044422541" watchObservedRunningTime="2025-06-20 19:19:21.031363844 +0000 UTC m=+63.061645874" Jun 20 19:19:21.953561 kubelet[2735]: I0620 19:19:21.953510 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:19:24.090521 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:52932.service - OpenSSH per-connection server daemon (10.0.0.1:52932). Jun 20 19:19:24.306660 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 52932 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:19:24.308261 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:19:24.313045 systemd-logind[1515]: New session 10 of user core. 
Jun 20 19:19:24.319486 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:19:24.592706 sshd[5014]: Connection closed by 10.0.0.1 port 52932 Jun 20 19:19:24.593085 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Jun 20 19:19:24.597925 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:52932.service: Deactivated successfully. Jun 20 19:19:24.600052 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:19:24.601036 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:19:24.602814 systemd-logind[1515]: Removed session 10. Jun 20 19:19:25.758688 containerd[1580]: time="2025-06-20T19:19:25.758605626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:25.759771 containerd[1580]: time="2025-06-20T19:19:25.759711654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:19:25.761268 containerd[1580]: time="2025-06-20T19:19:25.761234127Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:25.763601 containerd[1580]: time="2025-06-20T19:19:25.763547238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:25.764215 containerd[1580]: time="2025-06-20T19:19:25.764175044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 5.478981373s" Jun 20 19:19:25.764261 containerd[1580]: time="2025-06-20T19:19:25.764213197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:19:25.765269 containerd[1580]: time="2025-06-20T19:19:25.765238241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:19:25.775301 containerd[1580]: time="2025-06-20T19:19:25.775231053Z" level=info msg="CreateContainer within sandbox \"5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:19:25.799017 containerd[1580]: time="2025-06-20T19:19:25.798935235Z" level=info msg="Container d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:25.812650 containerd[1580]: time="2025-06-20T19:19:25.812562827Z" level=info msg="CreateContainer within sandbox \"5394a68598b6d3f968e683e3ded8a2662497db940d7121f2aaba48c30519e6b4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\"" Jun 20 19:19:25.813559 containerd[1580]: time="2025-06-20T19:19:25.813527706Z" level=info msg="StartContainer for \"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\"" Jun 20 19:19:25.815292 containerd[1580]: time="2025-06-20T19:19:25.815240241Z" level=info msg="connecting to shim d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664" address="unix:///run/containerd/s/a37008523b6db5eecbbc81c658de9b0c9479200faf4f2bc09b216fa9789ae838" protocol=ttrpc version=3 Jun 20 19:19:25.851600 systemd[1]: Started 
cri-containerd-d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664.scope - libcontainer container d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664. Jun 20 19:19:26.355144 containerd[1580]: time="2025-06-20T19:19:26.355090525Z" level=info msg="StartContainer for \"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\" returns successfully" Jun 20 19:19:26.835734 containerd[1580]: time="2025-06-20T19:19:26.835665945Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:26.871196 containerd[1580]: time="2025-06-20T19:19:26.871117107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:19:26.873847 containerd[1580]: time="2025-06-20T19:19:26.873794170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 1.108521172s" Jun 20 19:19:26.873847 containerd[1580]: time="2025-06-20T19:19:26.873843904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:19:26.874943 containerd[1580]: time="2025-06-20T19:19:26.874917810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:19:26.876866 containerd[1580]: time="2025-06-20T19:19:26.876821318Z" level=info msg="CreateContainer within sandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:19:26.934494 containerd[1580]: time="2025-06-20T19:19:26.934424697Z" 
level=info msg="Container 9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:27.019769 containerd[1580]: time="2025-06-20T19:19:27.019693086Z" level=info msg="CreateContainer within sandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\"" Jun 20 19:19:27.020686 containerd[1580]: time="2025-06-20T19:19:27.020350749Z" level=info msg="StartContainer for \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\"" Jun 20 19:19:27.021767 containerd[1580]: time="2025-06-20T19:19:27.021708716Z" level=info msg="connecting to shim 9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c" address="unix:///run/containerd/s/c127c947fd8af046bbb89e9b831fd66f0c96000dc605de18ba047bbbc0280173" protocol=ttrpc version=3 Jun 20 19:19:27.052667 systemd[1]: Started cri-containerd-9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c.scope - libcontainer container 9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c. 
Jun 20 19:19:27.206692 containerd[1580]: time="2025-06-20T19:19:27.206495223Z" level=info msg="StartContainer for \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" returns successfully" Jun 20 19:19:27.409196 containerd[1580]: time="2025-06-20T19:19:27.409132738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\" id:\"c99f7ae2a64c6e6fcd3622a99f2f055586644f09495f9d729e74022e82381e9f\" pid:5124 exited_at:{seconds:1750447167 nanos:408604642}" Jun 20 19:19:27.714740 kubelet[2735]: I0620 19:19:27.714651 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58866ffd4c-nxr8s" podStartSLOduration=38.994003255 podStartE2EDuration="48.71463155s" podCreationTimestamp="2025-06-20 19:18:39 +0000 UTC" firstStartedPulling="2025-06-20 19:19:16.044356934 +0000 UTC m=+58.074638964" lastFinishedPulling="2025-06-20 19:19:25.764985229 +0000 UTC m=+67.795267259" observedRunningTime="2025-06-20 19:19:27.714193164 +0000 UTC m=+69.744475204" watchObservedRunningTime="2025-06-20 19:19:27.71463155 +0000 UTC m=+69.744913580" Jun 20 19:19:28.068927 containerd[1580]: time="2025-06-20T19:19:28.068845496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jjgb,Uid:98422ba0-fce0-437e-87ec-c2741bdfac3e,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:28.069451 containerd[1580]: time="2025-06-20T19:19:28.069158593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-p79lm,Uid:d120af62-419d-4085-83c3-a999c759d842,Namespace:calico-system,Attempt:0,}" Jun 20 19:19:28.314531 kubelet[2735]: I0620 19:19:28.314433 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699c44cbf4-xj2bn" podStartSLOduration=43.649589322 podStartE2EDuration="54.314285259s" podCreationTimestamp="2025-06-20 19:18:34 +0000 UTC" 
firstStartedPulling="2025-06-20 19:19:16.210075996 +0000 UTC m=+58.240358026" lastFinishedPulling="2025-06-20 19:19:26.874771933 +0000 UTC m=+68.905053963" observedRunningTime="2025-06-20 19:19:28.313257492 +0000 UTC m=+70.343539522" watchObservedRunningTime="2025-06-20 19:19:28.314285259 +0000 UTC m=+70.344567279" Jun 20 19:19:28.360768 kubelet[2735]: I0620 19:19:28.360633 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:19:28.820627 systemd-networkd[1467]: cali5e986173693: Link UP Jun 20 19:19:28.822094 systemd-networkd[1467]: cali5e986173693: Gained carrier Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.585 [INFO][5135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6jjgb-eth0 csi-node-driver- calico-system 98422ba0-fce0-437e-87ec-c2741bdfac3e 779 0 2025-06-20 19:18:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:896496fb5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6jjgb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5e986173693 [] [] }} ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.610 [INFO][5135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.753 [INFO][5150] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" HandleID="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Workload="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.754 [INFO][5150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" HandleID="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Workload="localhost-k8s-csi--node--driver--6jjgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6jjgb", "timestamp":"2025-06-20 19:19:28.753850093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.754 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.754 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.754 [INFO][5150] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.765 [INFO][5150] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.771 [INFO][5150] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.776 [INFO][5150] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.782 [INFO][5150] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.788 [INFO][5150] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.788 [INFO][5150] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.791 [INFO][5150] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.798 [INFO][5150] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.812 [INFO][5150] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.812 [INFO][5150] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" host="localhost" Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.812 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:28.843690 containerd[1580]: 2025-06-20 19:19:28.812 [INFO][5150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" HandleID="k8s-pod-network.fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Workload="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:28.845348 containerd[1580]: 2025-06-20 19:19:28.817 [INFO][5135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6jjgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98422ba0-fce0-437e-87ec-c2741bdfac3e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6jjgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e986173693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:28.845348 containerd[1580]: 2025-06-20 19:19:28.817 [INFO][5135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:28.845348 containerd[1580]: 2025-06-20 19:19:28.817 [INFO][5135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e986173693 ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:28.845348 containerd[1580]: 2025-06-20 19:19:28.821 [INFO][5135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:28.845348 containerd[1580]: 2025-06-20 19:19:28.821 [INFO][5135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" 
Namespace="calico-system" Pod="csi-node-driver-6jjgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6jjgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98422ba0-fce0-437e-87ec-c2741bdfac3e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc", Pod:"csi-node-driver-6jjgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e986173693", MAC:"42:95:6e:62:5d:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:28.845348 containerd[1580]: 2025-06-20 19:19:28.838 [INFO][5135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" Namespace="calico-system" Pod="csi-node-driver-6jjgb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6jjgb-eth0" Jun 20 19:19:29.068784 kubelet[2735]: E0620 19:19:29.068717 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:29.604867 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:52940.service - OpenSSH per-connection server daemon (10.0.0.1:52940). Jun 20 19:19:29.672575 sshd[5200]: Accepted publickey for core from 10.0.0.1 port 52940 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:19:29.674638 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:19:29.687902 systemd-logind[1515]: New session 11 of user core. Jun 20 19:19:29.696446 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:19:29.767810 systemd-networkd[1467]: cali8594799c845: Link UP Jun 20 19:19:29.768457 systemd-networkd[1467]: cali8594799c845: Gained carrier Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.769 [INFO][5156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--dc7b455cb--p79lm-eth0 goldmane-dc7b455cb- calico-system d120af62-419d-4085-83c3-a999c759d842 924 0 2025-06-20 19:18:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:dc7b455cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-dc7b455cb-p79lm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8594799c845 [] [] }} ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.769 [INFO][5156] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.819 [INFO][5174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" HandleID="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Workload="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.820 [INFO][5174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" HandleID="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Workload="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-dc7b455cb-p79lm", "timestamp":"2025-06-20 19:19:28.819485947 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.821 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.821 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.821 [INFO][5174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.867 [INFO][5174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:28.891 [INFO][5174] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.162 [INFO][5174] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.390 [INFO][5174] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.392 [INFO][5174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.392 [INFO][5174] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.472 [INFO][5174] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.597 [INFO][5174] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.760 [INFO][5174] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.760 [INFO][5174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" host="localhost" Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.760 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:29.945628 containerd[1580]: 2025-06-20 19:19:29.760 [INFO][5174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" HandleID="k8s-pod-network.ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Workload="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.957934 containerd[1580]: 2025-06-20 19:19:29.764 [INFO][5156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--dc7b455cb--p79lm-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"d120af62-419d-4085-83c3-a999c759d842", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-dc7b455cb-p79lm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8594799c845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:29.957934 containerd[1580]: 2025-06-20 19:19:29.765 [INFO][5156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.957934 containerd[1580]: 2025-06-20 19:19:29.765 [INFO][5156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8594799c845 ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.957934 containerd[1580]: 2025-06-20 19:19:29.768 [INFO][5156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.957934 containerd[1580]: 2025-06-20 19:19:29.770 [INFO][5156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" 
WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--dc7b455cb--p79lm-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"d120af62-419d-4085-83c3-a999c759d842", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 18, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c", Pod:"goldmane-dc7b455cb-p79lm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8594799c845", MAC:"5e:da:52:4c:eb:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:29.957934 containerd[1580]: 2025-06-20 19:19:29.937 [INFO][5156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" Namespace="calico-system" Pod="goldmane-dc7b455cb-p79lm" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--p79lm-eth0" Jun 20 19:19:29.984428 containerd[1580]: time="2025-06-20T19:19:29.984372776Z" level=info msg="connecting to shim 
fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc" address="unix:///run/containerd/s/ff8ac1136328ab53e9d613201a0f507d457a06abf04778d07fd6c94c989e1f37" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:30.005476 sshd[5203]: Connection closed by 10.0.0.1 port 52940 Jun 20 19:19:30.004522 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Jun 20 19:19:30.009832 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:52940.service: Deactivated successfully. Jun 20 19:19:30.012056 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:19:30.015084 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:19:30.023494 systemd[1]: Started cri-containerd-fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc.scope - libcontainer container fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc. Jun 20 19:19:30.024750 systemd-logind[1515]: Removed session 11. Jun 20 19:19:30.038554 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:30.208406 containerd[1580]: time="2025-06-20T19:19:30.208219659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jjgb,Uid:98422ba0-fce0-437e-87ec-c2741bdfac3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc\"" Jun 20 19:19:30.389492 systemd-networkd[1467]: cali5e986173693: Gained IPv6LL Jun 20 19:19:30.488399 containerd[1580]: time="2025-06-20T19:19:30.488145701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\" id:\"2076ac9d577e80b5b09c8dd1cf244294926238bce494a353d17765159183da75\" pid:5284 exited_at:{seconds:1750447170 nanos:487918308}" Jun 20 19:19:30.985666 containerd[1580]: time="2025-06-20T19:19:30.985531470Z" level=info msg="connecting to shim 
ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c" address="unix:///run/containerd/s/7d6621eb1b58a731d3061461c052f54b9e309066a30b96d30c081299277a1037" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:31.022850 systemd[1]: Started cri-containerd-ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c.scope - libcontainer container ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c. Jun 20 19:19:31.076975 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:31.118273 containerd[1580]: time="2025-06-20T19:19:31.118017087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-p79lm,Uid:d120af62-419d-4085-83c3-a999c759d842,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c\"" Jun 20 19:19:31.463548 kubelet[2735]: I0620 19:19:31.463492 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:19:31.478790 systemd-networkd[1467]: cali8594799c845: Gained IPv6LL Jun 20 19:19:32.068414 kubelet[2735]: E0620 19:19:32.068339 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:19:32.124191 systemd[1]: Created slice kubepods-besteffort-pod37bd4181_038e_408c_b558_de113393f32c.slice - libcontainer container kubepods-besteffort-pod37bd4181_038e_408c_b558_de113393f32c.slice. 
Jun 20 19:19:32.197114 containerd[1580]: time="2025-06-20T19:19:32.197040214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:32.198069 containerd[1580]: time="2025-06-20T19:19:32.198005740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:19:32.199602 containerd[1580]: time="2025-06-20T19:19:32.199533716Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:32.202221 containerd[1580]: time="2025-06-20T19:19:32.202175089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:32.202842 containerd[1580]: time="2025-06-20T19:19:32.202797633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 5.32774732s" Jun 20 19:19:32.202842 containerd[1580]: time="2025-06-20T19:19:32.202835104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:19:32.213173 containerd[1580]: time="2025-06-20T19:19:32.212923189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:19:32.214619 containerd[1580]: time="2025-06-20T19:19:32.214577715Z" level=info msg="CreateContainer within sandbox \"19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7\" for 
container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:19:32.225375 containerd[1580]: time="2025-06-20T19:19:32.225292272Z" level=info msg="Container 398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:32.236643 containerd[1580]: time="2025-06-20T19:19:32.236575971Z" level=info msg="CreateContainer within sandbox \"19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b\"" Jun 20 19:19:32.237452 containerd[1580]: time="2025-06-20T19:19:32.237412392Z" level=info msg="StartContainer for \"398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b\"" Jun 20 19:19:32.239411 containerd[1580]: time="2025-06-20T19:19:32.239301995Z" level=info msg="connecting to shim 398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b" address="unix:///run/containerd/s/fc9a4388df588c7e26a349cb83badb4ab8b4ab75e27cc67d31e7847e5e3e2621" protocol=ttrpc version=3 Jun 20 19:19:32.281640 systemd[1]: Started cri-containerd-398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b.scope - libcontainer container 398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b. 
Jun 20 19:19:32.310194 kubelet[2735]: I0620 19:19:32.310095 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gq29\" (UniqueName: \"kubernetes.io/projected/37bd4181-038e-408c-b558-de113393f32c-kube-api-access-2gq29\") pod \"calico-apiserver-847d7f87d-mfvld\" (UID: \"37bd4181-038e-408c-b558-de113393f32c\") " pod="calico-apiserver/calico-apiserver-847d7f87d-mfvld" Jun 20 19:19:32.310194 kubelet[2735]: I0620 19:19:32.310181 2735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37bd4181-038e-408c-b558-de113393f32c-calico-apiserver-certs\") pod \"calico-apiserver-847d7f87d-mfvld\" (UID: \"37bd4181-038e-408c-b558-de113393f32c\") " pod="calico-apiserver/calico-apiserver-847d7f87d-mfvld" Jun 20 19:19:32.431275 containerd[1580]: time="2025-06-20T19:19:32.431110296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d7f87d-mfvld,Uid:37bd4181-038e-408c-b558-de113393f32c,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:19:32.507273 containerd[1580]: time="2025-06-20T19:19:32.507217944Z" level=info msg="StartContainer for \"398636e66e88ac5d75302f75d7a0be5d1fc272494ebea245a0a86addc0f4255b\" returns successfully" Jun 20 19:19:32.818397 containerd[1580]: time="2025-06-20T19:19:32.818297440Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:32.819647 containerd[1580]: time="2025-06-20T19:19:32.819618632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:19:32.835771 containerd[1580]: time="2025-06-20T19:19:32.835706838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 622.740586ms" Jun 20 19:19:32.835771 containerd[1580]: time="2025-06-20T19:19:32.835767452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:19:32.838574 containerd[1580]: time="2025-06-20T19:19:32.837515848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:19:32.838741 containerd[1580]: time="2025-06-20T19:19:32.838503326Z" level=info msg="CreateContainer within sandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:19:32.853427 containerd[1580]: time="2025-06-20T19:19:32.853359809Z" level=info msg="Container 23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:32.858867 systemd-networkd[1467]: cali9ffea44bc34: Link UP Jun 20 19:19:32.861577 systemd-networkd[1467]: cali9ffea44bc34: Gained carrier Jun 20 19:19:32.870283 containerd[1580]: time="2025-06-20T19:19:32.870131876Z" level=info msg="CreateContainer within sandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\"" Jun 20 19:19:32.872358 containerd[1580]: time="2025-06-20T19:19:32.871443650Z" level=info msg="StartContainer for \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\"" Jun 20 19:19:32.880240 containerd[1580]: time="2025-06-20T19:19:32.879380003Z" level=info msg="connecting to shim 23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9" 
address="unix:///run/containerd/s/637ee73906e4c4966fdbd81f6da34c0d01c27ca8c6e490be95f7483549951bb0" protocol=ttrpc version=3 Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.760 [INFO][5383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0 calico-apiserver-847d7f87d- calico-apiserver 37bd4181-038e-408c-b558-de113393f32c 1242 0 2025-06-20 19:19:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847d7f87d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-847d7f87d-mfvld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ffea44bc34 [] [] }} ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.765 [INFO][5383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.810 [INFO][5397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" HandleID="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Workload="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.810 [INFO][5397] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" HandleID="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Workload="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000486540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-847d7f87d-mfvld", "timestamp":"2025-06-20 19:19:32.810608326 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.810 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.811 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.811 [INFO][5397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.818 [INFO][5397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.824 [INFO][5397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.828 [INFO][5397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.831 [INFO][5397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.833 [INFO][5397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.833 [INFO][5397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.835 [INFO][5397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51 Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.841 [INFO][5397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.851 [INFO][5397] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.851 [INFO][5397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" host="localhost" Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.851 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:19:32.889612 containerd[1580]: 2025-06-20 19:19:32.851 [INFO][5397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" HandleID="k8s-pod-network.5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Workload="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.890502 containerd[1580]: 2025-06-20 19:19:32.855 [INFO][5383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0", GenerateName:"calico-apiserver-847d7f87d-", Namespace:"calico-apiserver", SelfLink:"", UID:"37bd4181-038e-408c-b558-de113393f32c", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d7f87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-847d7f87d-mfvld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ffea44bc34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:32.890502 containerd[1580]: 2025-06-20 19:19:32.856 [INFO][5383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.890502 containerd[1580]: 2025-06-20 19:19:32.856 [INFO][5383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ffea44bc34 ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.890502 containerd[1580]: 2025-06-20 19:19:32.862 [INFO][5383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.890502 containerd[1580]: 2025-06-20 19:19:32.863 [INFO][5383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0", GenerateName:"calico-apiserver-847d7f87d-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"37bd4181-038e-408c-b558-de113393f32c", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 19, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847d7f87d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51", Pod:"calico-apiserver-847d7f87d-mfvld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ffea44bc34", MAC:"06:0b:5b:78:e3:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:19:32.890502 containerd[1580]: 2025-06-20 19:19:32.878 [INFO][5383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" Namespace="calico-apiserver" Pod="calico-apiserver-847d7f87d-mfvld" WorkloadEndpoint="localhost-k8s-calico--apiserver--847d7f87d--mfvld-eth0" Jun 20 19:19:32.919054 systemd[1]: Started cri-containerd-23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9.scope - libcontainer container 23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9. 
Jun 20 19:19:32.933917 containerd[1580]: time="2025-06-20T19:19:32.933847587Z" level=info msg="connecting to shim 5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51" address="unix:///run/containerd/s/59707c600a25f95525643360126aa48cd8aa74ae50a364c4c9e5ba5018b03f4e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:19:32.972596 systemd[1]: Started cri-containerd-5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51.scope - libcontainer container 5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51. Jun 20 19:19:32.991191 containerd[1580]: time="2025-06-20T19:19:32.991044802Z" level=info msg="StartContainer for \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" returns successfully" Jun 20 19:19:32.992395 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:19:33.048237 containerd[1580]: time="2025-06-20T19:19:33.048111259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847d7f87d-mfvld,Uid:37bd4181-038e-408c-b558-de113393f32c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51\"" Jun 20 19:19:33.051636 containerd[1580]: time="2025-06-20T19:19:33.051610021Z" level=info msg="CreateContainer within sandbox \"5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:19:33.062984 containerd[1580]: time="2025-06-20T19:19:33.062925023Z" level=info msg="Container 206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:33.076481 containerd[1580]: time="2025-06-20T19:19:33.075427742Z" level=info msg="CreateContainer within sandbox \"5043cc55c8ff814ec2524d7bc9908d21c3c4691255f57222ebef355e783ecc51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18\"" Jun 20 19:19:33.076481 containerd[1580]: time="2025-06-20T19:19:33.076239666Z" level=info msg="StartContainer for \"206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18\"" Jun 20 19:19:33.077697 containerd[1580]: time="2025-06-20T19:19:33.077632193Z" level=info msg="connecting to shim 206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18" address="unix:///run/containerd/s/59707c600a25f95525643360126aa48cd8aa74ae50a364c4c9e5ba5018b03f4e" protocol=ttrpc version=3 Jun 20 19:19:33.108133 systemd[1]: Started cri-containerd-206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18.scope - libcontainer container 206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18. Jun 20 19:19:33.176472 containerd[1580]: time="2025-06-20T19:19:33.176413942Z" level=info msg="StartContainer for \"206a23ac1d6031d107939aed393de1e594e1bf9df41c2c381c3e744f36c21e18\" returns successfully" Jun 20 19:19:33.540602 kubelet[2735]: I0620 19:19:33.540485 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699c44cbf4-2kwq5" podStartSLOduration=44.629738383 podStartE2EDuration="59.540463645s" podCreationTimestamp="2025-06-20 19:18:34 +0000 UTC" firstStartedPulling="2025-06-20 19:19:17.926134307 +0000 UTC m=+59.956416338" lastFinishedPulling="2025-06-20 19:19:32.83685957 +0000 UTC m=+74.867141600" observedRunningTime="2025-06-20 19:19:33.539581327 +0000 UTC m=+75.569863367" watchObservedRunningTime="2025-06-20 19:19:33.540463645 +0000 UTC m=+75.570745685" Jun 20 19:19:33.549227 containerd[1580]: time="2025-06-20T19:19:33.549164757Z" level=info msg="StopContainer for \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" with timeout 30 (s)" Jun 20 19:19:33.561377 containerd[1580]: time="2025-06-20T19:19:33.561303775Z" level=info msg="Stop container \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" with 
signal terminated" Jun 20 19:19:33.584538 systemd[1]: cri-containerd-23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9.scope: Deactivated successfully. Jun 20 19:19:33.586274 containerd[1580]: time="2025-06-20T19:19:33.586193384Z" level=info msg="received exit event container_id:\"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" id:\"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" pid:5432 exit_status:1 exited_at:{seconds:1750447173 nanos:585356303}" Jun 20 19:19:33.587258 containerd[1580]: time="2025-06-20T19:19:33.587200719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" id:\"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" pid:5432 exit_status:1 exited_at:{seconds:1750447173 nanos:585356303}" Jun 20 19:19:33.625289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9-rootfs.mount: Deactivated successfully. Jun 20 19:19:34.485565 systemd-networkd[1467]: cali9ffea44bc34: Gained IPv6LL Jun 20 19:19:34.787883 containerd[1580]: time="2025-06-20T19:19:34.787774101Z" level=info msg="StopContainer for \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" returns successfully" Jun 20 19:19:34.788451 containerd[1580]: time="2025-06-20T19:19:34.788416192Z" level=info msg="StopPodSandbox for \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\"" Jun 20 19:19:34.794406 containerd[1580]: time="2025-06-20T19:19:34.794364557Z" level=info msg="Container to stop \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:19:34.802381 systemd[1]: cri-containerd-28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48.scope: Deactivated successfully. 
Jun 20 19:19:34.803838 containerd[1580]: time="2025-06-20T19:19:34.803799799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" id:\"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" pid:4835 exit_status:137 exited_at:{seconds:1750447174 nanos:803536378}" Jun 20 19:19:34.838855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48-rootfs.mount: Deactivated successfully. Jun 20 19:19:35.017925 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:46480.service - OpenSSH per-connection server daemon (10.0.0.1:46480). Jun 20 19:19:35.060198 containerd[1580]: time="2025-06-20T19:19:35.060117367Z" level=info msg="shim disconnected" id=28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48 namespace=k8s.io Jun 20 19:19:35.060198 containerd[1580]: time="2025-06-20T19:19:35.060174505Z" level=warning msg="cleaning up after shim disconnected" id=28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48 namespace=k8s.io Jun 20 19:19:35.062758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48-shm.mount: Deactivated successfully. 
Jun 20 19:19:35.084366 containerd[1580]: time="2025-06-20T19:19:35.060186128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:19:35.089103 containerd[1580]: time="2025-06-20T19:19:35.089011105Z" level=info msg="received exit event sandbox_id:\"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" exit_status:137 exited_at:{seconds:1750447174 nanos:803536378}" Jun 20 19:19:35.127084 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 46480 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE Jun 20 19:19:35.131481 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:19:35.138185 systemd-logind[1515]: New session 12 of user core. Jun 20 19:19:35.145808 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:19:35.226571 kubelet[2735]: I0620 19:19:35.224987 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-847d7f87d-mfvld" podStartSLOduration=4.224968896 podStartE2EDuration="4.224968896s" podCreationTimestamp="2025-06-20 19:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:19:33.568516717 +0000 UTC m=+75.598798747" watchObservedRunningTime="2025-06-20 19:19:35.224968896 +0000 UTC m=+77.255250926" Jun 20 19:19:35.230488 systemd-networkd[1467]: cali87fe69c508e: Link DOWN Jun 20 19:19:35.230504 systemd-networkd[1467]: cali87fe69c508e: Lost carrier Jun 20 19:19:35.336442 sshd[5619]: Connection closed by 10.0.0.1 port 46480 Jun 20 19:19:35.336856 sshd-session[5586]: pam_unix(sshd:session): session closed for user core Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.227 [INFO][5606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.228 [INFO][5606] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" iface="eth0" netns="/var/run/netns/cni-6efe7cf2-3fee-f834-2221-bd51173b60f2" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.228 [INFO][5606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" iface="eth0" netns="/var/run/netns/cni-6efe7cf2-3fee-f834-2221-bd51173b60f2" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.236 [INFO][5606] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" after=7.968741ms iface="eth0" netns="/var/run/netns/cni-6efe7cf2-3fee-f834-2221-bd51173b60f2" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.236 [INFO][5606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.236 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.289 [INFO][5630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.290 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.290 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.329 [INFO][5630] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.330 [INFO][5630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0" Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.332 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:35.338938 containerd[1580]: 2025-06-20 19:19:35.335 [INFO][5606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Jun 20 19:19:35.345582 systemd[1]: run-netns-cni\x2d6efe7cf2\x2d3fee\x2df834\x2d2221\x2dbd51173b60f2.mount: Deactivated successfully. Jun 20 19:19:35.347231 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:46480.service: Deactivated successfully. Jun 20 19:19:35.350734 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:19:35.353348 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:19:35.356027 systemd-logind[1515]: Removed session 12. 
Jun 20 19:19:35.359041 containerd[1580]: time="2025-06-20T19:19:35.358953216Z" level=info msg="TearDown network for sandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" successfully" Jun 20 19:19:35.359041 containerd[1580]: time="2025-06-20T19:19:35.359026736Z" level=info msg="StopPodSandbox for \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" returns successfully" Jun 20 19:19:35.432144 kubelet[2735]: I0620 19:19:35.432074 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzng\" (UniqueName: \"kubernetes.io/projected/031ba079-1aa1-4e85-90ea-f180e62009e8-kube-api-access-dlzng\") pod \"031ba079-1aa1-4e85-90ea-f180e62009e8\" (UID: \"031ba079-1aa1-4e85-90ea-f180e62009e8\") " Jun 20 19:19:35.432144 kubelet[2735]: I0620 19:19:35.432126 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/031ba079-1aa1-4e85-90ea-f180e62009e8-calico-apiserver-certs\") pod \"031ba079-1aa1-4e85-90ea-f180e62009e8\" (UID: \"031ba079-1aa1-4e85-90ea-f180e62009e8\") " Jun 20 19:19:35.436840 kubelet[2735]: I0620 19:19:35.436774 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/031ba079-1aa1-4e85-90ea-f180e62009e8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "031ba079-1aa1-4e85-90ea-f180e62009e8" (UID: "031ba079-1aa1-4e85-90ea-f180e62009e8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:19:35.437012 kubelet[2735]: I0620 19:19:35.436927 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031ba079-1aa1-4e85-90ea-f180e62009e8-kube-api-access-dlzng" (OuterVolumeSpecName: "kube-api-access-dlzng") pod "031ba079-1aa1-4e85-90ea-f180e62009e8" (UID: "031ba079-1aa1-4e85-90ea-f180e62009e8"). 
InnerVolumeSpecName "kube-api-access-dlzng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:19:35.439491 systemd[1]: var-lib-kubelet-pods-031ba079\x2d1aa1\x2d4e85\x2d90ea\x2df180e62009e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlzng.mount: Deactivated successfully. Jun 20 19:19:35.439637 systemd[1]: var-lib-kubelet-pods-031ba079\x2d1aa1\x2d4e85\x2d90ea\x2df180e62009e8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jun 20 19:19:35.533161 kubelet[2735]: I0620 19:19:35.533058 2735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzng\" (UniqueName: \"kubernetes.io/projected/031ba079-1aa1-4e85-90ea-f180e62009e8-kube-api-access-dlzng\") on node \"localhost\" DevicePath \"\"" Jun 20 19:19:35.533161 kubelet[2735]: I0620 19:19:35.533099 2735 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/031ba079-1aa1-4e85-90ea-f180e62009e8-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jun 20 19:19:35.794094 kubelet[2735]: I0620 19:19:35.794054 2735 scope.go:117] "RemoveContainer" containerID="23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9" Jun 20 19:19:35.797783 containerd[1580]: time="2025-06-20T19:19:35.796424384Z" level=info msg="RemoveContainer for \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\"" Jun 20 19:19:35.800884 systemd[1]: Removed slice kubepods-besteffort-pod031ba079_1aa1_4e85_90ea_f180e62009e8.slice - libcontainer container kubepods-besteffort-pod031ba079_1aa1_4e85_90ea_f180e62009e8.slice. 
Jun 20 19:19:35.808390 containerd[1580]: time="2025-06-20T19:19:35.808337409Z" level=info msg="RemoveContainer for \"23df460b0f4d1aab39a97f077d70c3eef1d49f17bb64a205daf2ca59f08129b9\" returns successfully" Jun 20 19:19:36.013347 kubelet[2735]: I0620 19:19:36.012659 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:19:36.014250 containerd[1580]: time="2025-06-20T19:19:36.014149756Z" level=info msg="StopContainer for \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" with timeout 30 (s)" Jun 20 19:19:36.016790 containerd[1580]: time="2025-06-20T19:19:36.016697956Z" level=info msg="Stop container \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" with signal terminated" Jun 20 19:19:36.040805 systemd[1]: cri-containerd-9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c.scope: Deactivated successfully. Jun 20 19:19:36.044019 containerd[1580]: time="2025-06-20T19:19:36.043960891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" id:\"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" pid:5088 exit_status:1 exited_at:{seconds:1750447176 nanos:43534872}" Jun 20 19:19:36.044400 containerd[1580]: time="2025-06-20T19:19:36.044186449Z" level=info msg="received exit event container_id:\"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" id:\"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" pid:5088 exit_status:1 exited_at:{seconds:1750447176 nanos:43534872}" Jun 20 19:19:36.072013 kubelet[2735]: I0620 19:19:36.071955 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031ba079-1aa1-4e85-90ea-f180e62009e8" path="/var/lib/kubelet/pods/031ba079-1aa1-4e85-90ea-f180e62009e8/volumes" Jun 20 19:19:36.078396 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c-rootfs.mount: Deactivated successfully. Jun 20 19:19:36.839034 containerd[1580]: time="2025-06-20T19:19:36.838981150Z" level=info msg="StopContainer for \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" returns successfully" Jun 20 19:19:36.839910 containerd[1580]: time="2025-06-20T19:19:36.839361583Z" level=info msg="StopPodSandbox for \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\"" Jun 20 19:19:36.839910 containerd[1580]: time="2025-06-20T19:19:36.839424924Z" level=info msg="Container to stop \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:19:36.848396 systemd[1]: cri-containerd-131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44.scope: Deactivated successfully. Jun 20 19:19:36.849598 containerd[1580]: time="2025-06-20T19:19:36.849511032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" id:\"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" pid:4398 exit_status:137 exited_at:{seconds:1750447176 nanos:849114719}" Jun 20 19:19:36.881026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44-rootfs.mount: Deactivated successfully. 
Jun 20 19:19:37.318673 containerd[1580]: time="2025-06-20T19:19:37.318513489Z" level=info msg="shim disconnected" id=131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44 namespace=k8s.io Jun 20 19:19:37.318673 containerd[1580]: time="2025-06-20T19:19:37.318552874Z" level=warning msg="cleaning up after shim disconnected" id=131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44 namespace=k8s.io Jun 20 19:19:37.318673 containerd[1580]: time="2025-06-20T19:19:37.318562542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:19:37.335181 containerd[1580]: time="2025-06-20T19:19:37.332798966Z" level=info msg="received exit event sandbox_id:\"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" exit_status:137 exited_at:{seconds:1750447176 nanos:849114719}" Jun 20 19:19:37.336064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44-shm.mount: Deactivated successfully. Jun 20 19:19:37.546410 containerd[1580]: time="2025-06-20T19:19:37.546302224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:37.673465 containerd[1580]: time="2025-06-20T19:19:37.673289975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:19:37.770536 systemd-networkd[1467]: calid418acbe18d: Link DOWN Jun 20 19:19:37.770549 systemd-networkd[1467]: calid418acbe18d: Lost carrier Jun 20 19:19:37.781816 containerd[1580]: time="2025-06-20T19:19:37.781745915Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:37.845497 containerd[1580]: time="2025-06-20T19:19:37.845357329Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:19:37.846216 containerd[1580]: time="2025-06-20T19:19:37.846160685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 5.007581364s" Jun 20 19:19:37.846479 containerd[1580]: time="2025-06-20T19:19:37.846459613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:19:37.849482 kubelet[2735]: I0620 19:19:37.848922 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Jun 20 19:19:37.850539 containerd[1580]: time="2025-06-20T19:19:37.850495696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:19:37.852080 containerd[1580]: time="2025-06-20T19:19:37.852057743Z" level=info msg="CreateContainer within sandbox \"fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:19:38.157264 containerd[1580]: time="2025-06-20T19:19:38.157208444Z" level=info msg="Container 3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.767 [INFO][5730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.768 [INFO][5730] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" iface="eth0" netns="/var/run/netns/cni-14fbc6ce-2a94-e5bc-001a-7843088896f0" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.769 [INFO][5730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" iface="eth0" netns="/var/run/netns/cni-14fbc6ce-2a94-e5bc-001a-7843088896f0" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.777 [INFO][5730] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" after=8.399009ms iface="eth0" netns="/var/run/netns/cni-14fbc6ce-2a94-e5bc-001a-7843088896f0" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.777 [INFO][5730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.777 [INFO][5730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.804 [INFO][5747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.804 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:37.805 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:38.316 [INFO][5747] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:38.316 [INFO][5747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0" Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:38.319 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:19:38.330245 containerd[1580]: 2025-06-20 19:19:38.324 [INFO][5730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Jun 20 19:19:38.333385 containerd[1580]: time="2025-06-20T19:19:38.331291105Z" level=info msg="TearDown network for sandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" successfully" Jun 20 19:19:38.333385 containerd[1580]: time="2025-06-20T19:19:38.333379501Z" level=info msg="StopPodSandbox for \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" returns successfully" Jun 20 19:19:38.334695 systemd[1]: run-netns-cni\x2d14fbc6ce\x2d2a94\x2de5bc\x2d001a\x2d7843088896f0.mount: Deactivated successfully. 
Jun 20 19:19:38.339118 containerd[1580]: time="2025-06-20T19:19:38.338982567Z" level=info msg="CreateContainer within sandbox \"fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2\"" Jun 20 19:19:38.340092 containerd[1580]: time="2025-06-20T19:19:38.340035145Z" level=info msg="StartContainer for \"3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2\"" Jun 20 19:19:38.342939 containerd[1580]: time="2025-06-20T19:19:38.342899965Z" level=info msg="connecting to shim 3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2" address="unix:///run/containerd/s/ff8ac1136328ab53e9d613201a0f507d457a06abf04778d07fd6c94c989e1f37" protocol=ttrpc version=3 Jun 20 19:19:38.354944 kubelet[2735]: I0620 19:19:38.354373 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfrz9\" (UniqueName: \"kubernetes.io/projected/ea3da14f-e857-457d-b1b7-a4caf7621c08-kube-api-access-mfrz9\") pod \"ea3da14f-e857-457d-b1b7-a4caf7621c08\" (UID: \"ea3da14f-e857-457d-b1b7-a4caf7621c08\") " Jun 20 19:19:38.354944 kubelet[2735]: I0620 19:19:38.354463 2735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea3da14f-e857-457d-b1b7-a4caf7621c08-calico-apiserver-certs\") pod \"ea3da14f-e857-457d-b1b7-a4caf7621c08\" (UID: \"ea3da14f-e857-457d-b1b7-a4caf7621c08\") " Jun 20 19:19:38.363889 systemd[1]: var-lib-kubelet-pods-ea3da14f\x2de857\x2d457d\x2db1b7\x2da4caf7621c08-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmfrz9.mount: Deactivated successfully. Jun 20 19:19:38.364902 systemd[1]: var-lib-kubelet-pods-ea3da14f\x2de857\x2d457d\x2db1b7\x2da4caf7621c08-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jun 20 19:19:38.365455 kubelet[2735]: I0620 19:19:38.364951 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3da14f-e857-457d-b1b7-a4caf7621c08-kube-api-access-mfrz9" (OuterVolumeSpecName: "kube-api-access-mfrz9") pod "ea3da14f-e857-457d-b1b7-a4caf7621c08" (UID: "ea3da14f-e857-457d-b1b7-a4caf7621c08"). InnerVolumeSpecName "kube-api-access-mfrz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:19:38.368343 kubelet[2735]: I0620 19:19:38.366531 2735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3da14f-e857-457d-b1b7-a4caf7621c08-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ea3da14f-e857-457d-b1b7-a4caf7621c08" (UID: "ea3da14f-e857-457d-b1b7-a4caf7621c08"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:19:38.389134 systemd[1]: Started cri-containerd-3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2.scope - libcontainer container 3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2. 
Jun 20 19:19:38.455457 kubelet[2735]: I0620 19:19:38.455059 2735 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea3da14f-e857-457d-b1b7-a4caf7621c08-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jun 20 19:19:38.455457 kubelet[2735]: I0620 19:19:38.455129 2735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfrz9\" (UniqueName: \"kubernetes.io/projected/ea3da14f-e857-457d-b1b7-a4caf7621c08-kube-api-access-mfrz9\") on node \"localhost\" DevicePath \"\""
Jun 20 19:19:38.465949 containerd[1580]: time="2025-06-20T19:19:38.465470024Z" level=info msg="StartContainer for \"3e27f807f717807d4fd4b6d522a396d663cc3ad16a688cfda18fbf980a5beec2\" returns successfully"
Jun 20 19:19:38.864251 systemd[1]: Removed slice kubepods-besteffort-podea3da14f_e857_457d_b1b7_a4caf7621c08.slice - libcontainer container kubepods-besteffort-podea3da14f_e857_457d_b1b7_a4caf7621c08.slice.
Jun 20 19:19:40.084980 kubelet[2735]: I0620 19:19:40.084923 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3da14f-e857-457d-b1b7-a4caf7621c08" path="/var/lib/kubelet/pods/ea3da14f-e857-457d-b1b7-a4caf7621c08/volumes"
Jun 20 19:19:40.356471 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:46482.service - OpenSSH per-connection server daemon (10.0.0.1:46482).
Jun 20 19:19:40.442183 sshd[5796]: Accepted publickey for core from 10.0.0.1 port 46482 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:19:40.446724 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:19:40.460391 systemd-logind[1515]: New session 13 of user core.
Jun 20 19:19:40.465621 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:19:40.648202 sshd[5798]: Connection closed by 10.0.0.1 port 46482
Jun 20 19:19:40.650173 sshd-session[5796]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:40.661462 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:46482.service: Deactivated successfully.
Jun 20 19:19:40.665790 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:19:40.671552 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:19:40.675512 systemd-logind[1515]: Removed session 13.
Jun 20 19:19:40.677626 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:46496.service - OpenSSH per-connection server daemon (10.0.0.1:46496).
Jun 20 19:19:40.739504 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 46496 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:19:40.743173 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:19:40.749829 systemd-logind[1515]: New session 14 of user core.
Jun 20 19:19:40.757523 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:19:40.982982 sshd[5815]: Connection closed by 10.0.0.1 port 46496
Jun 20 19:19:40.984066 sshd-session[5813]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:41.000672 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:46496.service: Deactivated successfully.
Jun 20 19:19:41.005121 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:19:41.007259 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:19:41.010183 systemd-logind[1515]: Removed session 14.
Jun 20 19:19:41.013093 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:46500.service - OpenSSH per-connection server daemon (10.0.0.1:46500).
Jun 20 19:19:41.032484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630739376.mount: Deactivated successfully.
Jun 20 19:19:41.074802 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 46500 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:19:41.077089 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:19:41.083284 systemd-logind[1515]: New session 15 of user core.
Jun 20 19:19:41.100655 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:19:41.233985 sshd[5828]: Connection closed by 10.0.0.1 port 46500
Jun 20 19:19:41.234305 sshd-session[5826]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:41.240248 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:46500.service: Deactivated successfully.
Jun 20 19:19:41.243179 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:19:41.244213 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:19:41.246390 systemd-logind[1515]: Removed session 15.
Jun 20 19:19:42.426134 containerd[1580]: time="2025-06-20T19:19:42.426039671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:42.427912 containerd[1580]: time="2025-06-20T19:19:42.427865734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249"
Jun 20 19:19:42.429740 containerd[1580]: time="2025-06-20T19:19:42.429673653Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:42.432407 containerd[1580]: time="2025-06-20T19:19:42.432371519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:42.433553 containerd[1580]: time="2025-06-20T19:19:42.433487906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 4.582821245s"
Jun 20 19:19:42.433553 containerd[1580]: time="2025-06-20T19:19:42.433543722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\""
Jun 20 19:19:42.434874 containerd[1580]: time="2025-06-20T19:19:42.434819902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\""
Jun 20 19:19:42.438566 containerd[1580]: time="2025-06-20T19:19:42.437185629Z" level=info msg="CreateContainer within sandbox \"ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jun 20 19:19:42.450693 containerd[1580]: time="2025-06-20T19:19:42.450606782Z" level=info msg="Container 8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:19:42.461953 containerd[1580]: time="2025-06-20T19:19:42.461907275Z" level=info msg="CreateContainer within sandbox \"ab9df286e73b9658dd80af1c70251951090bf2a02d491e5fb57578bcec3faa2c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\""
Jun 20 19:19:42.463181 containerd[1580]: time="2025-06-20T19:19:42.463119834Z" level=info msg="StartContainer for \"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\""
Jun 20 19:19:42.465287 containerd[1580]: time="2025-06-20T19:19:42.465237750Z" level=info msg="connecting to shim 8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66" address="unix:///run/containerd/s/7d6621eb1b58a731d3061461c052f54b9e309066a30b96d30c081299277a1037" protocol=ttrpc version=3
Jun 20 19:19:42.493533 systemd[1]: Started cri-containerd-8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66.scope - libcontainer container 8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66.
Jun 20 19:19:42.557024 containerd[1580]: time="2025-06-20T19:19:42.556959594Z" level=info msg="StartContainer for \"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\" returns successfully"
Jun 20 19:19:42.959132 containerd[1580]: time="2025-06-20T19:19:42.959077101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\" id:\"0ec4c3c828f75d83ccc338bb5b00e3574da02d70cc90307adf9fd94d3de419f3\" pid:5897 exit_status:1 exited_at:{seconds:1750447182 nanos:958635123}"
Jun 20 19:19:43.971298 containerd[1580]: time="2025-06-20T19:19:43.971236215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\" id:\"19ede4667d155c48a353100a129940ca2eaac25eede975d63ca29891bba902a8\" pid:5923 exit_status:1 exited_at:{seconds:1750447183 nanos:970825066}"
Jun 20 19:19:44.610272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929725692.mount: Deactivated successfully.
Jun 20 19:19:45.580397 containerd[1580]: time="2025-06-20T19:19:45.580189133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:45.581436 containerd[1580]: time="2025-06-20T19:19:45.581373107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345"
Jun 20 19:19:45.601023 containerd[1580]: time="2025-06-20T19:19:45.600932428Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:45.601522 containerd[1580]: time="2025-06-20T19:19:45.601484102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 3.166607102s"
Jun 20 19:19:45.601627 containerd[1580]: time="2025-06-20T19:19:45.601528266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\""
Jun 20 19:19:45.602448 containerd[1580]: time="2025-06-20T19:19:45.602397033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:45.604661 containerd[1580]: time="2025-06-20T19:19:45.604256798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\""
Jun 20 19:19:45.605572 containerd[1580]: time="2025-06-20T19:19:45.605516124Z" level=info msg="CreateContainer within sandbox \"19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jun 20 19:19:45.829906 containerd[1580]: time="2025-06-20T19:19:45.829109561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\" id:\"8fc337f17491572dff5b59cab5a26405be16437f47b1e01bab260951fe2215ac\" pid:5954 exited_at:{seconds:1750447185 nanos:828233020}"
Jun 20 19:19:45.869737 containerd[1580]: time="2025-06-20T19:19:45.868533700Z" level=info msg="Container b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:19:46.247778 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:48120.service - OpenSSH per-connection server daemon (10.0.0.1:48120).
Jun 20 19:19:46.936414 containerd[1580]: time="2025-06-20T19:19:46.936352642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\" id:\"cbb72e5e723253ad344a9928095fd96ac188097e1a38fe4639174e4d9dc6b4be\" pid:5977 exited_at:{seconds:1750447186 nanos:935805124}"
Jun 20 19:19:47.105496 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 48120 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:19:47.107380 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:19:47.115375 kubelet[2735]: I0620 19:19:47.115244 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-dc7b455cb-p79lm" podStartSLOduration=57.800768654 podStartE2EDuration="1m9.115225888s" podCreationTimestamp="2025-06-20 19:18:38 +0000 UTC" firstStartedPulling="2025-06-20 19:19:31.120098076 +0000 UTC m=+73.150380106" lastFinishedPulling="2025-06-20 19:19:42.43455531 +0000 UTC m=+84.464837340" observedRunningTime="2025-06-20 19:19:42.88726183 +0000 UTC m=+84.917543861" watchObservedRunningTime="2025-06-20 19:19:47.115225888 +0000 UTC m=+89.145507908"
Jun 20 19:19:47.116130 systemd-logind[1515]: New session 16 of user core.
Jun 20 19:19:47.126454 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:19:47.605733 sshd[5994]: Connection closed by 10.0.0.1 port 48120
Jun 20 19:19:47.606103 sshd-session[5991]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:47.610493 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:48120.service: Deactivated successfully.
Jun 20 19:19:47.612726 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:19:47.613679 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:19:47.614982 systemd-logind[1515]: Removed session 16.
Jun 20 19:19:48.612899 containerd[1580]: time="2025-06-20T19:19:48.612826401Z" level=info msg="CreateContainer within sandbox \"19a82e8b99096cb2a4ffb88a25886e654973c4730e20c1fa168935aff20a46b7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5\""
Jun 20 19:19:48.613783 containerd[1580]: time="2025-06-20T19:19:48.613733028Z" level=info msg="StartContainer for \"b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5\""
Jun 20 19:19:48.614954 containerd[1580]: time="2025-06-20T19:19:48.614922341Z" level=info msg="connecting to shim b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5" address="unix:///run/containerd/s/fc9a4388df588c7e26a349cb83badb4ab8b4ab75e27cc67d31e7847e5e3e2621" protocol=ttrpc version=3
Jun 20 19:19:48.691470 systemd[1]: Started cri-containerd-b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5.scope - libcontainer container b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5.
Jun 20 19:19:49.265643 containerd[1580]: time="2025-06-20T19:19:49.265597664Z" level=info msg="StartContainer for \"b4010d4baf53087eb671dac6729bc5f81114532de971e1d4c048960a296d50c5\" returns successfully"
Jun 20 19:19:50.068535 kubelet[2735]: E0620 19:19:50.068472 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:19:50.866942 kubelet[2735]: I0620 19:19:50.866611 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6788ff7d59-b7tj2" podStartSLOduration=6.679717278 podStartE2EDuration="35.866595311s" podCreationTimestamp="2025-06-20 19:19:15 +0000 UTC" firstStartedPulling="2025-06-20 19:19:16.417203021 +0000 UTC m=+58.447485051" lastFinishedPulling="2025-06-20 19:19:45.604081024 +0000 UTC m=+87.634363084" observedRunningTime="2025-06-20 19:19:50.866242784 +0000 UTC m=+92.896524814" watchObservedRunningTime="2025-06-20 19:19:50.866595311 +0000 UTC m=+92.896877341"
Jun 20 19:19:52.157342 containerd[1580]: time="2025-06-20T19:19:52.157271016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:52.158362 containerd[1580]: time="2025-06-20T19:19:52.158330791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633"
Jun 20 19:19:52.159628 containerd[1580]: time="2025-06-20T19:19:52.159566320Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:52.161740 containerd[1580]: time="2025-06-20T19:19:52.161696451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:19:52.162402 containerd[1580]: time="2025-06-20T19:19:52.162354538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 6.55806602s"
Jun 20 19:19:52.162498 containerd[1580]: time="2025-06-20T19:19:52.162455839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\""
Jun 20 19:19:52.164769 containerd[1580]: time="2025-06-20T19:19:52.164733870Z" level=info msg="CreateContainer within sandbox \"fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 20 19:19:52.175981 containerd[1580]: time="2025-06-20T19:19:52.174915552Z" level=info msg="Container ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:19:52.186656 containerd[1580]: time="2025-06-20T19:19:52.186606159Z" level=info msg="CreateContainer within sandbox \"fd02f5807fd57c69d1952b3636a0f26e25409e19d5f938846ff0ec65d367cdcc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c\""
Jun 20 19:19:52.187592 containerd[1580]: time="2025-06-20T19:19:52.187344317Z" level=info msg="StartContainer for \"ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c\""
Jun 20 19:19:52.189029 containerd[1580]: time="2025-06-20T19:19:52.189000561Z" level=info msg="connecting to shim ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c" address="unix:///run/containerd/s/ff8ac1136328ab53e9d613201a0f507d457a06abf04778d07fd6c94c989e1f37" protocol=ttrpc version=3
Jun 20 19:19:52.237960 systemd[1]: Started cri-containerd-ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c.scope - libcontainer container ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c.
Jun 20 19:19:52.300092 containerd[1580]: time="2025-06-20T19:19:52.300033205Z" level=info msg="StartContainer for \"ec2f61322f216fce410d096f540449ef6885e21f0ace036978a13b0b222bd71c\" returns successfully"
Jun 20 19:19:52.625070 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:48132.service - OpenSSH per-connection server daemon (10.0.0.1:48132).
Jun 20 19:19:52.707960 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 48132 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:19:52.709913 sshd-session[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:19:52.714947 systemd-logind[1515]: New session 17 of user core.
Jun 20 19:19:52.726461 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:19:52.999755 sshd[6089]: Connection closed by 10.0.0.1 port 48132
Jun 20 19:19:53.000072 sshd-session[6087]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:53.004953 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:48132.service: Deactivated successfully.
Jun 20 19:19:53.007414 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:19:53.010204 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:19:53.011895 systemd-logind[1515]: Removed session 17.
Jun 20 19:19:53.069140 kubelet[2735]: E0620 19:19:53.069099 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:19:53.272102 kubelet[2735]: I0620 19:19:53.272041 2735 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 20 19:19:53.272102 kubelet[2735]: I0620 19:19:53.272082 2735 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 20 19:19:53.455783 kubelet[2735]: I0620 19:19:53.455630 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6jjgb" podStartSLOduration=52.502155228 podStartE2EDuration="1m14.455612587s" podCreationTimestamp="2025-06-20 19:18:39 +0000 UTC" firstStartedPulling="2025-06-20 19:19:30.209820175 +0000 UTC m=+72.240102205" lastFinishedPulling="2025-06-20 19:19:52.163277534 +0000 UTC m=+94.193559564" observedRunningTime="2025-06-20 19:19:53.455255141 +0000 UTC m=+95.485537171" watchObservedRunningTime="2025-06-20 19:19:53.455612587 +0000 UTC m=+95.485894617"
Jun 20 19:19:58.013261 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:44532.service - OpenSSH per-connection server daemon (10.0.0.1:44532).
Jun 20 19:19:58.068923 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 44532 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:19:58.070876 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:19:58.075833 systemd-logind[1515]: New session 18 of user core.
Jun 20 19:19:58.089472 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:19:58.226622 sshd[6115]: Connection closed by 10.0.0.1 port 44532
Jun 20 19:19:58.227106 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Jun 20 19:19:58.232439 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:44532.service: Deactivated successfully.
Jun 20 19:19:58.235073 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:19:58.236282 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:19:58.238327 systemd-logind[1515]: Removed session 18.
Jun 20 19:20:00.521733 containerd[1580]: time="2025-06-20T19:20:00.521511158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\" id:\"a4e435c165ed41c4c7e73257dcbf2605107cb0311a27b19a64b7f24b06972552\" pid:6141 exited_at:{seconds:1750447200 nanos:521135958}"
Jun 20 19:20:00.865505 containerd[1580]: time="2025-06-20T19:20:00.865338604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\" id:\"dc355850309d056b157d86538f1f2daf406a7a63109fe1609bc100c194415906\" pid:6163 exited_at:{seconds:1750447200 nanos:864981349}"
Jun 20 19:20:03.240257 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:44542.service - OpenSSH per-connection server daemon (10.0.0.1:44542).
Jun 20 19:20:03.421885 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 44542 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:03.423848 sshd-session[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:03.431412 systemd-logind[1515]: New session 19 of user core.
Jun 20 19:20:03.437635 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:20:03.641899 sshd[6180]: Connection closed by 10.0.0.1 port 44542
Jun 20 19:20:03.642295 sshd-session[6178]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:03.656431 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:44542.service: Deactivated successfully.
Jun 20 19:20:03.658966 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:20:03.660152 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:20:03.664481 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:47024.service - OpenSSH per-connection server daemon (10.0.0.1:47024).
Jun 20 19:20:03.665599 systemd-logind[1515]: Removed session 19.
Jun 20 19:20:03.728989 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 47024 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:03.730860 sshd-session[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:03.736108 systemd-logind[1515]: New session 20 of user core.
Jun 20 19:20:03.743465 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:20:04.148561 sshd[6196]: Connection closed by 10.0.0.1 port 47024
Jun 20 19:20:04.148990 sshd-session[6194]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:04.165033 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:47024.service: Deactivated successfully.
Jun 20 19:20:04.167734 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:20:04.169041 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:20:04.172936 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:47032.service - OpenSSH per-connection server daemon (10.0.0.1:47032).
Jun 20 19:20:04.174220 systemd-logind[1515]: Removed session 20.
Jun 20 19:20:04.267706 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 47032 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:04.269881 sshd-session[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:04.275889 systemd-logind[1515]: New session 21 of user core.
Jun 20 19:20:04.283481 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:20:07.068264 kubelet[2735]: E0620 19:20:07.068201 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:20:07.269292 sshd[6209]: Connection closed by 10.0.0.1 port 47032
Jun 20 19:20:07.270174 sshd-session[6207]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:07.282449 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:47042.service - OpenSSH per-connection server daemon (10.0.0.1:47042).
Jun 20 19:20:07.283407 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:47032.service: Deactivated successfully.
Jun 20 19:20:07.287166 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:20:07.287619 systemd[1]: session-21.scope: Consumed 709ms CPU time, 73.1M memory peak.
Jun 20 19:20:07.289421 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:20:07.296672 systemd-logind[1515]: Removed session 21.
Jun 20 19:20:07.341469 sshd[6245]: Accepted publickey for core from 10.0.0.1 port 47042 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:07.343730 sshd-session[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:07.350036 systemd-logind[1515]: New session 22 of user core.
Jun 20 19:20:07.356511 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:20:08.049707 sshd[6250]: Connection closed by 10.0.0.1 port 47042
Jun 20 19:20:08.050690 sshd-session[6245]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:08.064124 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:47042.service: Deactivated successfully.
Jun 20 19:20:08.068518 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:20:08.072029 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:20:08.077399 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:47058.service - OpenSSH per-connection server daemon (10.0.0.1:47058).
Jun 20 19:20:08.079232 systemd-logind[1515]: Removed session 22.
Jun 20 19:20:08.159754 sshd[6262]: Accepted publickey for core from 10.0.0.1 port 47058 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:08.162129 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:08.168521 systemd-logind[1515]: New session 23 of user core.
Jun 20 19:20:08.180662 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:20:08.335613 sshd[6264]: Connection closed by 10.0.0.1 port 47058
Jun 20 19:20:08.335695 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:08.341104 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:47058.service: Deactivated successfully.
Jun 20 19:20:08.344112 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:20:08.345106 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:20:08.347057 systemd-logind[1515]: Removed session 23.
Jun 20 19:20:13.348834 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:47062.service - OpenSSH per-connection server daemon (10.0.0.1:47062).
Jun 20 19:20:13.396334 sshd[6277]: Accepted publickey for core from 10.0.0.1 port 47062 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:13.398188 sshd-session[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:13.402769 systemd-logind[1515]: New session 24 of user core.
Jun 20 19:20:13.413489 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:20:13.529616 sshd[6279]: Connection closed by 10.0.0.1 port 47062
Jun 20 19:20:13.529959 sshd-session[6277]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:13.534434 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:47062.service: Deactivated successfully.
Jun 20 19:20:13.537194 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:20:13.538374 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:20:13.540178 systemd-logind[1515]: Removed session 24.
Jun 20 19:20:15.991899 containerd[1580]: time="2025-06-20T19:20:15.991837489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6231958d836144c850a2b1446750fd8bb97505e135fb288109436d3cfa9ac82\" id:\"d44abddbb823df12517d437ee03c832b0339e3cd19d5d3c83f18b287764337fb\" pid:6303 exited_at:{seconds:1750447215 nanos:991533705}"
Jun 20 19:20:17.901300 containerd[1580]: time="2025-06-20T19:20:17.901254414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\" id:\"39583b87b366c77965a8414ce07fb2071fff7d7dd5796f09b2c3df8cee4385af\" pid:6332 exited_at:{seconds:1750447217 nanos:901063063}"
Jun 20 19:20:18.055906 kubelet[2735]: I0620 19:20:18.055860 2735 scope.go:117] "RemoveContainer" containerID="9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c"
Jun 20 19:20:18.072908 containerd[1580]: time="2025-06-20T19:20:18.072860701Z" level=info msg="RemoveContainer for \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\""
Jun 20 19:20:18.090850 containerd[1580]: time="2025-06-20T19:20:18.090760969Z" level=info msg="RemoveContainer for \"9b9023be986767c3e0a289146028eb68b0cba66c4dfe6742b1390d9cd7fab06c\" returns successfully"
Jun 20 19:20:18.092558 containerd[1580]: time="2025-06-20T19:20:18.092528254Z" level=info msg="StopPodSandbox for \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\""
Jun 20 19:20:18.547208 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:46032.service - OpenSSH per-connection server daemon (10.0.0.1:46032).
Jun 20 19:20:18.639511 sshd[6368]: Accepted publickey for core from 10.0.0.1 port 46032 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:18.641899 sshd-session[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:18.647690 systemd-logind[1515]: New session 25 of user core.
Jun 20 19:20:18.655813 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.451 [WARNING][6354] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.453 [INFO][6354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.453 [INFO][6354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" iface="eth0" netns=""
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.453 [INFO][6354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.453 [INFO][6354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.738 [INFO][6362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.740 [INFO][6362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.740 [INFO][6362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.748 [WARNING][6362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.748 [INFO][6362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.750 [INFO][6362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:20:18.759147 containerd[1580]: 2025-06-20 19:20:18.756 [INFO][6354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.770862 containerd[1580]: time="2025-06-20T19:20:18.770800791Z" level=info msg="TearDown network for sandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" successfully"
Jun 20 19:20:18.771073 containerd[1580]: time="2025-06-20T19:20:18.771056834Z" level=info msg="StopPodSandbox for \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" returns successfully"
Jun 20 19:20:18.800752 containerd[1580]: time="2025-06-20T19:20:18.800603510Z" level=info msg="RemovePodSandbox for \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\""
Jun 20 19:20:18.810321 containerd[1580]: time="2025-06-20T19:20:18.810048202Z" level=info msg="Forcibly stopping sandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\""
Jun 20 19:20:18.839200 sshd[6370]: Connection closed by 10.0.0.1 port 46032
Jun 20 19:20:18.840113 sshd-session[6368]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:18.846420 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:46032.service: Deactivated successfully.
Jun 20 19:20:18.848966 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:20:18.850899 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:20:18.853685 systemd-logind[1515]: Removed session 25.
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.871 [WARNING][6393] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.871 [INFO][6393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.871 [INFO][6393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" iface="eth0" netns=""
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.871 [INFO][6393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.871 [INFO][6393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.896 [INFO][6404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.897 [INFO][6404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.897 [INFO][6404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.904 [WARNING][6404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.904 [INFO][6404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" HandleID="k8s-pod-network.131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44" Workload="localhost-k8s-calico--apiserver--699c44cbf4--xj2bn-eth0"
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.906 [INFO][6404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:20:18.912485 containerd[1580]: 2025-06-20 19:20:18.908 [INFO][6393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44"
Jun 20 19:20:18.913393 containerd[1580]: time="2025-06-20T19:20:18.912526329Z" level=info msg="TearDown network for sandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" successfully"
Jun 20 19:20:18.945163 containerd[1580]: time="2025-06-20T19:20:18.945084216Z" level=info msg="Ensure that sandbox 131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44 in task-service has been cleanup successfully"
Jun 20 19:20:19.081108 containerd[1580]: time="2025-06-20T19:20:19.080916873Z" level=info msg="RemovePodSandbox \"131e0cd71d8d686a5ac058632de2a2608a370255817fc4a233658aaabed74d44\" returns successfully"
Jun 20 19:20:19.089727 containerd[1580]: time="2025-06-20T19:20:19.089671391Z" level=info msg="StopPodSandbox for \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\""
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.131 [WARNING][6421] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.132 [INFO][6421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.132 [INFO][6421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" iface="eth0" netns=""
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.132 [INFO][6421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.132 [INFO][6421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.158 [INFO][6430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.159 [INFO][6430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.159 [INFO][6430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.167 [WARNING][6430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.167 [INFO][6430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.169 [INFO][6430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:20:19.175412 containerd[1580]: 2025-06-20 19:20:19.172 [INFO][6421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.175877 containerd[1580]: time="2025-06-20T19:20:19.175472016Z" level=info msg="TearDown network for sandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" successfully"
Jun 20 19:20:19.175877 containerd[1580]: time="2025-06-20T19:20:19.175513714Z" level=info msg="StopPodSandbox for \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" returns successfully"
Jun 20 19:20:19.176894 containerd[1580]: time="2025-06-20T19:20:19.176846249Z" level=info msg="RemovePodSandbox for \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\""
Jun 20 19:20:19.176946 containerd[1580]: time="2025-06-20T19:20:19.176902505Z" level=info msg="Forcibly stopping sandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\""
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.218 [WARNING][6449] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" WorkloadEndpoint="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.219 [INFO][6449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.219 [INFO][6449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" iface="eth0" netns=""
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.219 [INFO][6449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.219 [INFO][6449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.265 [INFO][6458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.271 [INFO][6458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.273 [INFO][6458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.280 [WARNING][6458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.281 [INFO][6458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" HandleID="k8s-pod-network.28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48" Workload="localhost-k8s-calico--apiserver--699c44cbf4--2kwq5-eth0"
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.282 [INFO][6458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:20:19.291365 containerd[1580]: 2025-06-20 19:20:19.285 [INFO][6449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48"
Jun 20 19:20:19.291365 containerd[1580]: time="2025-06-20T19:20:19.289013073Z" level=info msg="TearDown network for sandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" successfully"
Jun 20 19:20:19.292190 containerd[1580]: time="2025-06-20T19:20:19.292132610Z" level=info msg="Ensure that sandbox 28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48 in task-service has been cleanup successfully"
Jun 20 19:20:19.295987 containerd[1580]: time="2025-06-20T19:20:19.295951646Z" level=info msg="RemovePodSandbox \"28f9f8e17da9f0f68332c583ff89fd03130747a0da4ada3ec06f15d208e9db48\" returns successfully"
Jun 20 19:20:23.856274 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:53470.service - OpenSSH per-connection server daemon (10.0.0.1:53470).
Jun 20 19:20:23.910909 sshd[6467]: Accepted publickey for core from 10.0.0.1 port 53470 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:23.912927 sshd-session[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:23.918612 systemd-logind[1515]: New session 26 of user core.
Jun 20 19:20:23.925577 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:20:24.069720 kubelet[2735]: E0620 19:20:24.069666 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:20:24.108797 sshd[6469]: Connection closed by 10.0.0.1 port 53470
Jun 20 19:20:24.109089 sshd-session[6467]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:24.114673 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:53470.service: Deactivated successfully.
Jun 20 19:20:24.117347 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:20:24.118548 systemd-logind[1515]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:20:24.120903 systemd-logind[1515]: Removed session 26.
Jun 20 19:20:29.128741 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:53472.service - OpenSSH per-connection server daemon (10.0.0.1:53472).
Jun 20 19:20:29.192627 sshd[6484]: Accepted publickey for core from 10.0.0.1 port 53472 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:29.194811 sshd-session[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:29.201247 systemd-logind[1515]: New session 27 of user core.
Jun 20 19:20:29.218651 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 19:20:29.352935 sshd[6487]: Connection closed by 10.0.0.1 port 53472
Jun 20 19:20:29.353478 sshd-session[6484]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:29.358547 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:53472.service: Deactivated successfully.
Jun 20 19:20:29.360798 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 19:20:29.361801 systemd-logind[1515]: Session 27 logged out. Waiting for processes to exit.
Jun 20 19:20:29.364019 systemd-logind[1515]: Removed session 27.
Jun 20 19:20:30.535207 containerd[1580]: time="2025-06-20T19:20:30.535142376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0f7ee5e4dca25cb9b4adfa21b9a140296c06ce37e56e72d92d135708bc81664\" id:\"4379997a3a8f123d7c8edf4cc2688bc091c9141d6cf4431087fa2b4425f85978\" pid:6511 exited_at:{seconds:1750447230 nanos:533958724}"
Jun 20 19:20:30.882626 containerd[1580]: time="2025-06-20T19:20:30.882488482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d7d65440b42929f7e8043d3ae9a9012a323eb004fac7b3965ac9c67b56f3e66\" id:\"a9cfbf1e321b49acb139ab28c231d927147b5f729dd7e82e9cd0ac0ef617cbf6\" pid:6534 exited_at:{seconds:1750447230 nanos:882087776}"
Jun 20 19:20:34.068631 kubelet[2735]: E0620 19:20:34.068593 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:20:34.069135 kubelet[2735]: E0620 19:20:34.068742 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:20:34.370027 systemd[1]: Started sshd@27-10.0.0.38:22-10.0.0.1:53136.service - OpenSSH per-connection server daemon (10.0.0.1:53136).
Jun 20 19:20:34.436244 sshd[6547]: Accepted publickey for core from 10.0.0.1 port 53136 ssh2: RSA SHA256:YoQ4GiRtY5Hu7FaS/OnNYeCnsR+r8YS2g6Qh7XD/NPE
Jun 20 19:20:34.438153 sshd-session[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:20:34.443972 systemd-logind[1515]: New session 28 of user core.
Jun 20 19:20:34.453745 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 19:20:34.598757 sshd[6550]: Connection closed by 10.0.0.1 port 53136
Jun 20 19:20:34.599116 sshd-session[6547]: pam_unix(sshd:session): session closed for user core
Jun 20 19:20:34.604103 systemd[1]: sshd@27-10.0.0.38:22-10.0.0.1:53136.service: Deactivated successfully.
Jun 20 19:20:34.606689 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 19:20:34.607759 systemd-logind[1515]: Session 28 logged out. Waiting for processes to exit.
Jun 20 19:20:34.609539 systemd-logind[1515]: Removed session 28.