May 15 12:12:57.848141 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 12:12:57.848171 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:12:57.848183 kernel: BIOS-provided physical RAM map:
May 15 12:12:57.848191 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 12:12:57.848200 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 12:12:57.848208 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 12:12:57.848219 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 15 12:12:57.848230 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 15 12:12:57.848239 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 12:12:57.848247 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 12:12:57.848256 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 12:12:57.848264 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 12:12:57.848273 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 12:12:57.848282 kernel: NX (Execute Disable) protection: active
May 15 12:12:57.848296 kernel: APIC: Static calls initialized
May 15 12:12:57.848305 kernel: SMBIOS 2.8 present.
May 15 12:12:57.848315 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 15 12:12:57.848324 kernel: DMI: Memory slots populated: 1/1
May 15 12:12:57.848333 kernel: Hypervisor detected: KVM
May 15 12:12:57.848342 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 12:12:57.848351 kernel: kvm-clock: using sched offset of 3421055041 cycles
May 15 12:12:57.848361 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 12:12:57.848371 kernel: tsc: Detected 2794.748 MHz processor
May 15 12:12:57.848383 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 12:12:57.848393 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 12:12:57.848403 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 15 12:12:57.848413 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 12:12:57.848423 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 12:12:57.848432 kernel: Using GB pages for direct mapping
May 15 12:12:57.848442 kernel: ACPI: Early table checksum verification disabled
May 15 12:12:57.848451 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 15 12:12:57.848461 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848484 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848494 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848504 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 15 12:12:57.848514 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848523 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848532 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848542 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:12:57.848567 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 15 12:12:57.848586 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 15 12:12:57.848596 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 15 12:12:57.848606 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 15 12:12:57.848616 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 15 12:12:57.848626 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 15 12:12:57.848636 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 15 12:12:57.848649 kernel: No NUMA configuration found
May 15 12:12:57.848659 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 15 12:12:57.848669 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 15 12:12:57.848679 kernel: Zone ranges:
May 15 12:12:57.848689 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
May 15 12:12:57.848699 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdbfff]
May 15 12:12:57.848709 kernel:   Normal   empty
May 15 12:12:57.848719 kernel:   Device   empty
May 15 12:12:57.848729 kernel: Movable zone start for each node
May 15 12:12:57.848738 kernel: Early memory node ranges
May 15 12:12:57.848752 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
May 15 12:12:57.848762 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 15 12:12:57.848771 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 15 12:12:57.848781 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 12:12:57.848791 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 12:12:57.848801 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 15 12:12:57.848811 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 12:12:57.848820 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 12:12:57.848830 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 12:12:57.848842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 12:12:57.848852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 12:12:57.848862 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 12:12:57.848872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 12:12:57.848882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 12:12:57.848892 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 12:12:57.848902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 12:12:57.848912 kernel: TSC deadline timer available
May 15 12:12:57.848922 kernel: CPU topo: Max. logical packages:   1
May 15 12:12:57.848934 kernel: CPU topo: Max. logical dies:       1
May 15 12:12:57.848944 kernel: CPU topo: Max. dies per package:   1
May 15 12:12:57.848954 kernel: CPU topo: Max. threads per core:   1
May 15 12:12:57.848964 kernel: CPU topo: Num. cores per package:  4
May 15 12:12:57.848974 kernel: CPU topo: Num. threads per package: 4
May 15 12:12:57.848984 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 15 12:12:57.848993 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 12:12:57.849002 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 12:12:57.849012 kernel: kvm-guest: setup PV sched yield
May 15 12:12:57.849022 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 12:12:57.849035 kernel: Booting paravirtualized kernel on KVM
May 15 12:12:57.849045 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 12:12:57.849055 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 15 12:12:57.849065 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 15 12:12:57.849075 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 15 12:12:57.849084 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 12:12:57.849094 kernel: kvm-guest: PV spinlocks enabled
May 15 12:12:57.849104 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 12:12:57.849115 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:12:57.849129 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 12:12:57.849139 kernel: random: crng init done
May 15 12:12:57.849149 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 12:12:57.849159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 12:12:57.849169 kernel: Fallback order for Node 0: 0
May 15 12:12:57.849179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 15 12:12:57.849189 kernel: Policy zone: DMA32
May 15 12:12:57.849199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 12:12:57.849213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 12:12:57.849223 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 12:12:57.849233 kernel: ftrace: allocated 157 pages with 5 groups
May 15 12:12:57.849243 kernel: Dynamic Preempt: voluntary
May 15 12:12:57.849253 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 12:12:57.849264 kernel: rcu:     RCU event tracing is enabled.
May 15 12:12:57.849275 kernel: rcu:     RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 12:12:57.849285 kernel:  Trampoline variant of Tasks RCU enabled.
May 15 12:12:57.849295 kernel:  Rude variant of Tasks RCU enabled.
May 15 12:12:57.849308 kernel:  Tracing variant of Tasks RCU enabled.
May 15 12:12:57.849318 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 12:12:57.849329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 12:12:57.849339 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 12:12:57.849349 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 12:12:57.849359 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 12:12:57.849369 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 12:12:57.849379 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 12:12:57.849401 kernel: Console: colour VGA+ 80x25
May 15 12:12:57.849412 kernel: printk: legacy console [ttyS0] enabled
May 15 12:12:57.849422 kernel: ACPI: Core revision 20240827
May 15 12:12:57.849433 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 12:12:57.849446 kernel: APIC: Switch to symmetric I/O mode setup
May 15 12:12:57.849456 kernel: x2apic enabled
May 15 12:12:57.849476 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 12:12:57.849487 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 12:12:57.849498 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 12:12:57.849511 kernel: kvm-guest: setup PV IPIs
May 15 12:12:57.849521 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 12:12:57.849532 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 15 12:12:57.849543 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 15 12:12:57.849570 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 12:12:57.849581 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 12:12:57.849592 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 12:12:57.849602 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 12:12:57.849616 kernel: Spectre V2 : Mitigation: Retpolines
May 15 12:12:57.849626 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 12:12:57.849633 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 12:12:57.849641 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 12:12:57.849649 kernel: RETBleed: Mitigation: untrained return thunk
May 15 12:12:57.849657 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 12:12:57.849664 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 12:12:57.849672 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 12:12:57.849681 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 12:12:57.849691 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 12:12:57.849699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 12:12:57.849706 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 12:12:57.849714 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 12:12:57.849722 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
May 15 12:12:57.849729 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 15 12:12:57.849737 kernel: Freeing SMP alternatives memory: 32K
May 15 12:12:57.849745 kernel: pid_max: default: 32768 minimum: 301
May 15 12:12:57.849754 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 12:12:57.849762 kernel: landlock: Up and running.
May 15 12:12:57.849770 kernel: SELinux:  Initializing.
May 15 12:12:57.849777 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:12:57.849785 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:12:57.849793 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 12:12:57.849801 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 12:12:57.849808 kernel: ... version:                0
May 15 12:12:57.849816 kernel: ... bit width:              48
May 15 12:12:57.849825 kernel: ... generic registers:      6
May 15 12:12:57.849833 kernel: ... value mask:             0000ffffffffffff
May 15 12:12:57.849841 kernel: ... max period:             00007fffffffffff
May 15 12:12:57.849848 kernel: ... fixed-purpose events:   0
May 15 12:12:57.849856 kernel: ... event mask:             000000000000003f
May 15 12:12:57.849863 kernel: signal: max sigframe size: 1776
May 15 12:12:57.849871 kernel: rcu: Hierarchical SRCU implementation.
May 15 12:12:57.849879 kernel: rcu:     Max phase no-delay instances is 400.
May 15 12:12:57.849886 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 12:12:57.849894 kernel: smp: Bringing up secondary CPUs ...
May 15 12:12:57.849904 kernel: smpboot: x86: Booting SMP configuration:
May 15 12:12:57.849911 kernel: .... node  #0, CPUs:      #1 #2 #3
May 15 12:12:57.849921 kernel: smp: Brought up 1 node, 4 CPUs
May 15 12:12:57.849931 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 15 12:12:57.849942 kernel: Memory: 2428908K/2571752K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 136904K reserved, 0K cma-reserved)
May 15 12:12:57.849951 kernel: devtmpfs: initialized
May 15 12:12:57.849958 kernel: x86/mm: Memory block size: 128MB
May 15 12:12:57.849966 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 12:12:57.849974 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 12:12:57.849984 kernel: pinctrl core: initialized pinctrl subsystem
May 15 12:12:57.849992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 12:12:57.850000 kernel: audit: initializing netlink subsys (disabled)
May 15 12:12:57.850007 kernel: audit: type=2000 audit(1747311174.841:1): state=initialized audit_enabled=0 res=1
May 15 12:12:57.850015 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 12:12:57.850022 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 12:12:57.850030 kernel: cpuidle: using governor menu
May 15 12:12:57.850038 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 12:12:57.850045 kernel: dca service started, version 1.12.1
May 15 12:12:57.850055 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 15 12:12:57.850063 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 12:12:57.850071 kernel: PCI: Using configuration type 1 for base access
May 15 12:12:57.850078 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 12:12:57.850086 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 12:12:57.850094 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 12:12:57.850101 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 12:12:57.850109 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 12:12:57.850118 kernel: ACPI: Added _OSI(Module Device)
May 15 12:12:57.850126 kernel: ACPI: Added _OSI(Processor Device)
May 15 12:12:57.850134 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 12:12:57.850141 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 12:12:57.850149 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 12:12:57.850156 kernel: ACPI: Interpreter enabled
May 15 12:12:57.850164 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 12:12:57.850171 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 12:12:57.850179 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 12:12:57.850187 kernel: PCI: Using E820 reservations for host bridge windows
May 15 12:12:57.850196 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 12:12:57.850204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 12:12:57.850391 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 12:12:57.850592 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 12:12:57.850824 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 12:12:57.850840 kernel: PCI host bridge to bus 0000:00
May 15 12:12:57.850986 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
May 15 12:12:57.851119 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
May 15 12:12:57.851248 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 12:12:57.851376 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 15 12:12:57.851518 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 12:12:57.851674 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 15 12:12:57.851804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 12:12:57.851967 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 15 12:12:57.852157 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 15 12:12:57.852320 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 15 12:12:57.852513 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 15 12:12:57.852707 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 15 12:12:57.852849 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 12:12:57.853004 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 15 12:12:57.853158 kernel: pci 0000:00:02.0: BAR 0 [io  0xc0c0-0xc0df]
May 15 12:12:57.853308 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 15 12:12:57.853456 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 12:12:57.853682 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 15 12:12:57.853841 kernel: pci 0000:00:03.0: BAR 0 [io  0xc000-0xc07f]
May 15 12:12:57.853992 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 15 12:12:57.854144 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 12:12:57.854345 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 12:12:57.854518 kernel: pci 0000:00:04.0: BAR 0 [io  0xc0e0-0xc0ff]
May 15 12:12:57.854695 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 15 12:12:57.854846 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 15 12:12:57.854999 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 15 12:12:57.855157 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 15 12:12:57.855345 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 12:12:57.855520 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 15 12:12:57.855723 kernel: pci 0000:00:1f.2: BAR 4 [io  0xc100-0xc11f]
May 15 12:12:57.855872 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 15 12:12:57.856024 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 15 12:12:57.856165 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
May 15 12:12:57.856179 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 12:12:57.856195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 12:12:57.856205 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 12:12:57.856216 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 12:12:57.856226 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 12:12:57.856237 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 12:12:57.856247 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 12:12:57.856257 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 12:12:57.856268 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 12:12:57.856278 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 12:12:57.856291 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 12:12:57.856301 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 12:12:57.856311 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 12:12:57.856321 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 12:12:57.856332 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 12:12:57.856342 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 12:12:57.856352 kernel: iommu: Default domain type: Translated
May 15 12:12:57.856363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 12:12:57.856373 kernel: PCI: Using ACPI for IRQ routing
May 15 12:12:57.856386 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 12:12:57.856396 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 12:12:57.856406 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 15 12:12:57.856572 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 12:12:57.856745 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 12:12:57.856882 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 12:12:57.856896 kernel: vgaarb: loaded
May 15 12:12:57.856906 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 12:12:57.856921 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 12:12:57.856931 kernel: clocksource: Switched to clocksource kvm-clock
May 15 12:12:57.856941 kernel: VFS: Disk quotas dquot_6.6.0
May 15 12:12:57.856952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 12:12:57.856962 kernel: pnp: PnP ACPI init
May 15 12:12:57.857108 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 12:12:57.857124 kernel: pnp: PnP ACPI: found 6 devices
May 15 12:12:57.857134 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 12:12:57.857148 kernel: NET: Registered PF_INET protocol family
May 15 12:12:57.857159 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 12:12:57.857169 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 12:12:57.857179 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 12:12:57.857190 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 12:12:57.857201 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 12:12:57.857211 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 12:12:57.857222 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:12:57.857233 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:12:57.857246 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 12:12:57.857256 kernel: NET: Registered PF_XDP protocol family
May 15 12:12:57.857390 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
May 15 12:12:57.857526 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
May 15 12:12:57.857678 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 12:12:57.857810 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 15 12:12:57.857940 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 12:12:57.858071 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 15 12:12:57.858090 kernel: PCI: CLS 0 bytes, default 64
May 15 12:12:57.858101 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 15 12:12:57.858112 kernel: Initialise system trusted keyrings
May 15 12:12:57.858123 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 12:12:57.858133 kernel: Key type asymmetric registered
May 15 12:12:57.858144 kernel: Asymmetric key parser 'x509' registered
May 15 12:12:57.858154 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 12:12:57.858165 kernel: io scheduler mq-deadline registered
May 15 12:12:57.858176 kernel: io scheduler kyber registered
May 15 12:12:57.858186 kernel: io scheduler bfq registered
May 15 12:12:57.858199 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 12:12:57.858210 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 12:12:57.858221 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 12:12:57.858232 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 12:12:57.858242 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 12:12:57.858253 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 12:12:57.858263 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 12:12:57.858274 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 12:12:57.858284 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 12:12:57.858438 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 12:12:57.858454 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 12:12:57.858626 kernel: rtc_cmos 00:04: registered as rtc0
May 15 12:12:57.858764 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T12:12:57 UTC (1747311177)
May 15 12:12:57.858899 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 12:12:57.858913 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 12:12:57.858924 kernel: NET: Registered PF_INET6 protocol family
May 15 12:12:57.858938 kernel: Segment Routing with IPv6
May 15 12:12:57.858949 kernel: In-situ OAM (IOAM) with IPv6
May 15 12:12:57.858959 kernel: NET: Registered PF_PACKET protocol family
May 15 12:12:57.858970 kernel: Key type dns_resolver registered
May 15 12:12:57.858980 kernel: IPI shorthand broadcast: enabled
May 15 12:12:57.858991 kernel: sched_clock: Marking stable (3082004957, 130528338)->(3442955341, -230422046)
May 15 12:12:57.859001 kernel: registered taskstats version 1
May 15 12:12:57.859012 kernel: Loading compiled-in X.509 certificates
May 15 12:12:57.859023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 12:12:57.859036 kernel: Demotion targets for Node 0: null
May 15 12:12:57.859046 kernel: Key type .fscrypt registered
May 15 12:12:57.859057 kernel: Key type fscrypt-provisioning registered
May 15 12:12:57.859067 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 12:12:57.859078 kernel: ima: Allocated hash algorithm: sha1
May 15 12:12:57.859089 kernel: ima: No architecture policies found
May 15 12:12:57.859099 kernel: clk: Disabling unused clocks
May 15 12:12:57.859109 kernel: Warning: unable to open an initial console.
May 15 12:12:57.859120 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 12:12:57.859133 kernel: Write protecting the kernel read-only data: 24576k
May 15 12:12:57.859144 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 12:12:57.859155 kernel: Run /init as init process
May 15 12:12:57.859165 kernel:   with arguments:
May 15 12:12:57.859175 kernel:     /init
May 15 12:12:57.859186 kernel:   with environment:
May 15 12:12:57.859196 kernel:     HOME=/
May 15 12:12:57.859206 kernel:     TERM=linux
May 15 12:12:57.859216 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 12:12:57.859235 systemd[1]: Successfully made /usr/ read-only.
May 15 12:12:57.859263 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 12:12:57.859277 systemd[1]: Detected virtualization kvm.
May 15 12:12:57.859289 systemd[1]: Detected architecture x86-64.
May 15 12:12:57.859300 systemd[1]: Running in initrd.
May 15 12:12:57.859314 systemd[1]: No hostname configured, using default hostname.
May 15 12:12:57.859326 systemd[1]: Hostname set to .
May 15 12:12:57.859337 systemd[1]: Initializing machine ID from VM UUID.
May 15 12:12:57.859349 systemd[1]: Queued start job for default target initrd.target.
May 15 12:12:57.859360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:12:57.859372 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:12:57.859385 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 12:12:57.859396 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 12:12:57.859411 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 12:12:57.859423 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 12:12:57.859436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 12:12:57.859448 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 12:12:57.859460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:12:57.859480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 12:12:57.859491 systemd[1]: Reached target paths.target - Path Units.
May 15 12:12:57.859506 systemd[1]: Reached target slices.target - Slice Units.
May 15 12:12:57.859517 systemd[1]: Reached target swap.target - Swaps.
May 15 12:12:57.859529 systemd[1]: Reached target timers.target - Timer Units.
May 15 12:12:57.859541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 12:12:57.859575 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 12:12:57.859587 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 12:12:57.859599 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 12:12:57.859611 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:12:57.859622 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:12:57.859637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:12:57.859649 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:12:57.859660 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 12:12:57.859672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:12:57.859686 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 12:12:57.859701 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 12:12:57.859713 systemd[1]: Starting systemd-fsck-usr.service... May 15 12:12:57.859725 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:12:57.859736 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:12:57.859748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:12:57.859759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 12:12:57.859774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:12:57.859786 systemd[1]: Finished systemd-fsck-usr.service. May 15 12:12:57.859798 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 12:12:57.859836 systemd-journald[220]: Collecting audit messages is disabled. May 15 12:12:57.859868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 15 12:12:57.859880 systemd-journald[220]: Journal started May 15 12:12:57.859908 systemd-journald[220]: Runtime Journal (/run/log/journal/ff6eaf94f1294658ad87a56214944e9f) is 6M, max 48.6M, 42.5M free. May 15 12:12:57.852716 systemd-modules-load[221]: Inserted module 'overlay' May 15 12:12:57.895185 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 12:12:57.895204 kernel: Bridge firewalling registered May 15 12:12:57.881576 systemd-modules-load[221]: Inserted module 'br_netfilter' May 15 12:12:57.897856 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:12:57.898271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:12:57.900603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:12:57.905829 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 12:12:57.908992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:12:57.919277 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:12:57.920101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:12:57.930877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:12:57.931198 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 12:12:57.931198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:12:57.936192 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:12:57.938179 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:12:57.949146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 15 12:12:57.952748 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 12:12:57.981728 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 12:12:57.991451 systemd-resolved[259]: Positive Trust Anchors: May 15 12:12:57.991476 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:12:57.991507 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:12:57.994111 systemd-resolved[259]: Defaulting to hostname 'linux'. May 15 12:12:57.995319 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:12:58.001302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:12:58.088610 kernel: SCSI subsystem initialized May 15 12:12:58.101596 kernel: Loading iSCSI transport class v2.0-870. 
May 15 12:12:58.111588 kernel: iscsi: registered transport (tcp) May 15 12:12:58.132601 kernel: iscsi: registered transport (qla4xxx) May 15 12:12:58.132683 kernel: QLogic iSCSI HBA Driver May 15 12:12:58.151950 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:12:58.175964 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:12:58.176349 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:12:58.237784 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 12:12:58.242418 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 12:12:58.299579 kernel: raid6: avx2x4 gen() 27350 MB/s May 15 12:12:58.316581 kernel: raid6: avx2x2 gen() 29503 MB/s May 15 12:12:58.333677 kernel: raid6: avx2x1 gen() 24795 MB/s May 15 12:12:58.333696 kernel: raid6: using algorithm avx2x2 gen() 29503 MB/s May 15 12:12:58.351668 kernel: raid6: .... xor() 18106 MB/s, rmw enabled May 15 12:12:58.351682 kernel: raid6: using avx2x2 recovery algorithm May 15 12:12:58.371575 kernel: xor: automatically using best checksumming function avx May 15 12:12:58.535589 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 12:12:58.544056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 12:12:58.545727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:12:58.575067 systemd-udevd[474]: Using default interface naming scheme 'v255'. May 15 12:12:58.580206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:12:58.582307 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 12:12:58.615747 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation May 15 12:12:58.646461 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 15 12:12:58.648929 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:12:58.727096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:12:58.732751 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 12:12:58.758581 kernel: cryptd: max_cpu_qlen set to 1000 May 15 12:12:58.780318 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 15 12:12:58.780375 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 15 12:12:58.842941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 12:12:58.843132 kernel: AES CTR mode by8 optimization enabled May 15 12:12:58.843155 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 12:12:58.843176 kernel: GPT:9289727 != 19775487 May 15 12:12:58.843193 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 12:12:58.843213 kernel: GPT:9289727 != 19775487 May 15 12:12:58.843234 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 12:12:58.843251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:12:58.815311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:12:58.815467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:12:58.841508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:12:58.847942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:12:58.851109 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:12:58.852603 kernel: libata version 3.00 loaded. 
May 15 12:12:58.860372 kernel: ahci 0000:00:1f.2: version 3.0 May 15 12:12:58.892668 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 12:12:58.892685 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 15 12:12:58.892831 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 15 12:12:58.892962 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 12:12:58.893091 kernel: scsi host0: ahci May 15 12:12:58.893242 kernel: scsi host1: ahci May 15 12:12:58.893377 kernel: scsi host2: ahci May 15 12:12:58.893852 kernel: scsi host3: ahci May 15 12:12:58.893991 kernel: scsi host4: ahci May 15 12:12:58.894128 kernel: scsi host5: ahci May 15 12:12:58.894259 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 May 15 12:12:58.894270 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 May 15 12:12:58.894281 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 May 15 12:12:58.894291 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 May 15 12:12:58.894304 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 May 15 12:12:58.894315 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 May 15 12:12:58.893706 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 12:12:58.924576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:12:58.941252 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 12:12:58.956363 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 12:12:58.959654 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 15 12:12:58.970468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 12:12:58.973716 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 12:12:59.011335 disk-uuid[635]: Primary Header is updated. May 15 12:12:59.011335 disk-uuid[635]: Secondary Entries is updated. May 15 12:12:59.011335 disk-uuid[635]: Secondary Header is updated. May 15 12:12:59.015597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:12:59.019588 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:12:59.204739 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 12:12:59.204820 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 12:12:59.204831 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 12:12:59.204841 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 12:12:59.205580 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 12:12:59.206591 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 12:12:59.207584 kernel: ata3.00: applying bridge limits May 15 12:12:59.208588 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 12:12:59.208609 kernel: ata3.00: configured for UDMA/100 May 15 12:12:59.209585 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 12:12:59.253150 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 12:12:59.265605 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 12:12:59.265624 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 12:12:59.614642 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 12:12:59.617686 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:12:59.617800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:12:59.621944 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 15 12:12:59.625601 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 12:12:59.662248 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 12:13:00.022613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 12:13:00.023185 disk-uuid[636]: The operation has completed successfully. May 15 12:13:00.057813 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 12:13:00.057984 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 12:13:00.111729 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 12:13:00.139432 sh[665]: Success May 15 12:13:00.160679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 12:13:00.160777 kernel: device-mapper: uevent: version 1.0.3 May 15 12:13:00.160794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 12:13:00.171591 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 15 12:13:00.204720 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 12:13:00.207380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 12:13:00.227716 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 15 12:13:00.239058 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 12:13:00.239134 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (677) May 15 12:13:00.239600 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004 May 15 12:13:00.241734 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 12:13:00.241756 kernel: BTRFS info (device dm-0): using free-space-tree May 15 12:13:00.247062 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 12:13:00.247759 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 12:13:00.249100 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 12:13:00.250101 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 12:13:00.253190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 12:13:00.279618 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (712) May 15 12:13:00.282202 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:13:00.282237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:13:00.282249 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:13:00.290586 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:13:00.292345 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 12:13:00.295147 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 15 12:13:00.378416 ignition[755]: Ignition 2.21.0 May 15 12:13:00.378430 ignition[755]: Stage: fetch-offline May 15 12:13:00.378460 ignition[755]: no configs at "/usr/lib/ignition/base.d" May 15 12:13:00.378469 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:13:00.378547 ignition[755]: parsed url from cmdline: "" May 15 12:13:00.378551 ignition[755]: no config URL provided May 15 12:13:00.378571 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" May 15 12:13:00.378579 ignition[755]: no config at "/usr/lib/ignition/user.ign" May 15 12:13:00.378598 ignition[755]: op(1): [started] loading QEMU firmware config module May 15 12:13:00.378602 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 12:13:00.385167 ignition[755]: op(1): [finished] loading QEMU firmware config module May 15 12:13:00.388910 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:13:00.390934 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:13:00.432537 ignition[755]: parsing config with SHA512: 87d923793fff85655bacf729cba3bb12091673af502d7357790d400a3bd1683e7609f85e358fb71599fae8b700008b31814934c5b1eea15c84512a243a7d706c May 15 12:13:00.432999 systemd-networkd[855]: lo: Link UP May 15 12:13:00.433009 systemd-networkd[855]: lo: Gained carrier May 15 12:13:00.434486 systemd-networkd[855]: Enumeration completed May 15 12:13:00.435100 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:13:00.435104 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:13:00.439197 ignition[755]: fetch-offline: fetch-offline passed May 15 12:13:00.435370 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 15 12:13:00.439260 ignition[755]: Ignition finished successfully May 15 12:13:00.436124 systemd-networkd[855]: eth0: Link UP May 15 12:13:00.436127 systemd-networkd[855]: eth0: Gained carrier May 15 12:13:00.436135 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:13:00.438297 systemd[1]: Reached target network.target - Network. May 15 12:13:00.438885 unknown[755]: fetched base config from "system" May 15 12:13:00.438892 unknown[755]: fetched user config from "qemu" May 15 12:13:00.443092 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:13:00.445218 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 12:13:00.446071 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 12:13:00.452633 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 12:13:00.483353 ignition[859]: Ignition 2.21.0 May 15 12:13:00.483371 ignition[859]: Stage: kargs May 15 12:13:00.483799 ignition[859]: no configs at "/usr/lib/ignition/base.d" May 15 12:13:00.483815 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:13:00.487093 ignition[859]: kargs: kargs passed May 15 12:13:00.487155 ignition[859]: Ignition finished successfully May 15 12:13:00.491604 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 12:13:00.492826 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 15 12:13:00.540787 ignition[868]: Ignition 2.21.0 May 15 12:13:00.540801 ignition[868]: Stage: disks May 15 12:13:00.540965 ignition[868]: no configs at "/usr/lib/ignition/base.d" May 15 12:13:00.540978 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:13:00.543996 ignition[868]: disks: disks passed May 15 12:13:00.544047 ignition[868]: Ignition finished successfully May 15 12:13:00.548900 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 12:13:00.550214 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 12:13:00.552146 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 12:13:00.552376 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:13:00.552884 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:13:00.553210 systemd[1]: Reached target basic.target - Basic System. May 15 12:13:00.554674 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 12:13:00.585567 systemd-resolved[259]: Detected conflict on linux IN A 10.0.0.15 May 15 12:13:00.585582 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. May 15 12:13:00.587136 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 15 12:13:00.596457 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 12:13:00.600098 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 12:13:00.722612 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none. May 15 12:13:00.723751 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 12:13:00.725300 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 12:13:00.728013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 12:13:00.729007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 12:13:00.730879 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 12:13:00.730931 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 12:13:00.730963 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:13:00.752213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 12:13:00.755143 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 12:13:00.759026 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (886) May 15 12:13:00.760578 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:13:00.760594 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:13:00.762611 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:13:00.766423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:13:00.798910 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory May 15 12:13:00.804569 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory May 15 12:13:00.808421 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory May 15 12:13:00.813545 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory May 15 12:13:00.899513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 12:13:00.901878 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 12:13:00.903868 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 15 12:13:00.928606 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:13:00.941192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 12:13:00.954103 ignition[1001]: INFO : Ignition 2.21.0 May 15 12:13:00.954103 ignition[1001]: INFO : Stage: mount May 15 12:13:00.955814 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:13:00.955814 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:13:00.958022 ignition[1001]: INFO : mount: mount passed May 15 12:13:00.958022 ignition[1001]: INFO : Ignition finished successfully May 15 12:13:00.959550 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 12:13:00.961872 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 12:13:01.237586 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 12:13:01.239268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:13:01.266161 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1013) May 15 12:13:01.266201 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:13:01.266212 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:13:01.267649 kernel: BTRFS info (device vda6): using free-space-tree May 15 12:13:01.271332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 12:13:01.304515 ignition[1030]: INFO : Ignition 2.21.0 May 15 12:13:01.304515 ignition[1030]: INFO : Stage: files May 15 12:13:01.306354 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:13:01.306354 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 12:13:01.309158 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping May 15 12:13:01.309158 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 12:13:01.309158 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 12:13:01.313723 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 12:13:01.313723 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 12:13:01.313723 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 12:13:01.313723 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 12:13:01.311761 unknown[1030]: wrote ssh authorized keys file for user: core May 15 12:13:01.321572 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 12:13:01.360911 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 12:13:01.541816 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 12:13:01.541816 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:13:01.546586 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:13:01.561773 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:13:01.561773 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:13:01.561773 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:13:01.561773 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:13:01.561773 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:13:01.561773 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 15 12:13:02.099024 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 12:13:02.237020 systemd-networkd[855]: eth0: Gained IPv6LL May 15 12:13:03.085262 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:13:03.085262 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 12:13:03.089600 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:13:03.096231 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:13:03.096231 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 12:13:03.096231 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 12:13:03.101149 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 12:13:03.101149 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 12:13:03.104989 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 12:13:03.104989 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 12:13:03.189398 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 12:13:03.195304 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for 
"coreos-metadata.service" May 15 12:13:03.196950 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 12:13:03.196950 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 12:13:03.196950 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 12:13:03.196950 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 12:13:03.196950 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 12:13:03.196950 ignition[1030]: INFO : files: files passed May 15 12:13:03.196950 ignition[1030]: INFO : Ignition finished successfully May 15 12:13:03.208802 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 12:13:03.211110 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 12:13:03.213111 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 12:13:03.225718 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 12:13:03.225872 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 12:13:03.229801 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory May 15 12:13:03.232864 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:13:03.232864 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 12:13:03.236197 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:13:03.239444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 15 12:13:03.239793 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 12:13:03.243965 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 12:13:03.309183 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 12:13:03.309342 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 12:13:03.311768 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 12:13:03.314035 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 12:13:03.316182 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 12:13:03.317147 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 12:13:03.345820 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 12:13:03.349893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 12:13:03.381903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 12:13:03.384351 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:13:03.385670 systemd[1]: Stopped target timers.target - Timer Units.
May 15 12:13:03.387919 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 12:13:03.388056 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 12:13:03.391905 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 12:13:03.392066 systemd[1]: Stopped target basic.target - Basic System.
May 15 12:13:03.393994 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 12:13:03.395854 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 12:13:03.396186 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 12:13:03.396504 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 15 12:13:03.402687 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 12:13:03.404767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 12:13:03.406973 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 12:13:03.408191 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 12:13:03.410110 systemd[1]: Stopped target swap.target - Swaps.
May 15 12:13:03.410420 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 12:13:03.410535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 12:13:03.413899 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 12:13:03.414262 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:13:03.414581 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 12:13:03.414696 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:13:03.420225 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 12:13:03.420340 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 12:13:03.424491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 12:13:03.424620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 12:13:03.425587 systemd[1]: Stopped target paths.target - Path Units.
May 15 12:13:03.427550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 12:13:03.427657 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:13:03.429402 systemd[1]: Stopped target slices.target - Slice Units.
May 15 12:13:03.429907 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 12:13:03.434165 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 12:13:03.434256 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 12:13:03.435254 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 12:13:03.435340 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 12:13:03.437633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 12:13:03.437778 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 12:13:03.439383 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 12:13:03.439485 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 12:13:03.442349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 12:13:03.443042 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 12:13:03.443168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:13:03.447200 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 12:13:03.449172 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 12:13:03.449308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:13:03.450612 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 12:13:03.450716 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 12:13:03.466161 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 12:13:03.467269 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 12:13:03.483030 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 12:13:03.488875 ignition[1086]: INFO : Ignition 2.21.0
May 15 12:13:03.488875 ignition[1086]: INFO : Stage: umount
May 15 12:13:03.491784 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 12:13:03.491784 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 12:13:03.494188 ignition[1086]: INFO : umount: umount passed
May 15 12:13:03.494188 ignition[1086]: INFO : Ignition finished successfully
May 15 12:13:03.495753 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 12:13:03.495892 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 12:13:03.497915 systemd[1]: Stopped target network.target - Network.
May 15 12:13:03.499287 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 12:13:03.499366 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 12:13:03.501491 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 12:13:03.501539 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 12:13:03.502506 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 12:13:03.502577 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 12:13:03.504742 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 12:13:03.504792 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 12:13:03.507041 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 12:13:03.509302 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 12:13:03.517689 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 12:13:03.517843 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 12:13:03.521711 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 15 12:13:03.521982 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 12:13:03.522119 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 12:13:03.526178 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 15 12:13:03.526946 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 15 12:13:03.527582 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 12:13:03.527649 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:13:03.531682 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 12:13:03.532629 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 12:13:03.532693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 12:13:03.534153 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 12:13:03.534228 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 12:13:03.537438 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 12:13:03.537510 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 12:13:03.537995 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 12:13:03.538070 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:13:03.542890 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:13:03.548793 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 12:13:03.548881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 15 12:13:03.563301 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 12:13:03.567707 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 12:13:03.568133 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 12:13:03.568185 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 12:13:03.571375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 12:13:03.571413 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:13:03.573449 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 12:13:03.573499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 12:13:03.577604 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 12:13:03.577671 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 12:13:03.579151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 12:13:03.579229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 12:13:03.585528 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 12:13:03.586726 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 15 12:13:03.586778 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:13:03.590273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 12:13:03.590344 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:13:03.594035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 12:13:03.594094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:13:03.599017 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 15 12:13:03.599078 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 12:13:03.599123 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 12:13:03.599496 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 12:13:03.603692 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 12:13:03.612615 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 12:13:03.612723 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 12:13:03.654328 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 12:13:03.654465 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 12:13:03.656603 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 12:13:03.657226 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 12:13:03.657281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 12:13:03.659933 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 12:13:03.686136 systemd[1]: Switching root.
May 15 12:13:03.729231 systemd-journald[220]: Journal stopped
May 15 12:13:05.164086 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 15 12:13:05.164151 kernel: SELinux: policy capability network_peer_controls=1
May 15 12:13:05.164172 kernel: SELinux: policy capability open_perms=1
May 15 12:13:05.164184 kernel: SELinux: policy capability extended_socket_class=1
May 15 12:13:05.164194 kernel: SELinux: policy capability always_check_network=0
May 15 12:13:05.164205 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 12:13:05.164217 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 12:13:05.164233 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 12:13:05.164244 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 12:13:05.164255 kernel: SELinux: policy capability userspace_initial_context=0
May 15 12:13:05.164275 kernel: audit: type=1403 audit(1747311184.204:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 12:13:05.164294 systemd[1]: Successfully loaded SELinux policy in 54.128ms.
May 15 12:13:05.164314 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.291ms.
May 15 12:13:05.164331 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 12:13:05.164344 systemd[1]: Detected virtualization kvm.
May 15 12:13:05.164355 systemd[1]: Detected architecture x86-64.
May 15 12:13:05.164367 systemd[1]: Detected first boot.
May 15 12:13:05.164379 systemd[1]: Initializing machine ID from VM UUID.
May 15 12:13:05.164391 zram_generator::config[1131]: No configuration found.
May 15 12:13:05.164406 kernel: Guest personality initialized and is inactive
May 15 12:13:05.164417 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 15 12:13:05.164428 kernel: Initialized host personality
May 15 12:13:05.164439 kernel: NET: Registered PF_VSOCK protocol family
May 15 12:13:05.164454 systemd[1]: Populated /etc with preset unit settings.
May 15 12:13:05.164467 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 12:13:05.164478 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 12:13:05.164491 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 12:13:05.164502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 12:13:05.164516 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 12:13:05.164528 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 12:13:05.164540 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 12:13:05.164565 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 12:13:05.164577 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 12:13:05.164589 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 12:13:05.164604 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 12:13:05.164621 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 12:13:05.164635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:13:05.164651 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:13:05.164662 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 12:13:05.164674 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 12:13:05.164686 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 12:13:05.164698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 12:13:05.164710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 12:13:05.164723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:13:05.164737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 12:13:05.164749 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 12:13:05.164760 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 12:13:05.164772 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 12:13:05.164785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 12:13:05.164797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:13:05.164809 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 12:13:05.164821 systemd[1]: Reached target slices.target - Slice Units.
May 15 12:13:05.164833 systemd[1]: Reached target swap.target - Swaps.
May 15 12:13:05.164845 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 12:13:05.164859 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 12:13:05.164871 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 12:13:05.164883 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:13:05.164895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 12:13:05.164907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:13:05.164919 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 12:13:05.164931 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 12:13:05.164943 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 12:13:05.164955 systemd[1]: Mounting media.mount - External Media Directory...
May 15 12:13:05.164969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:13:05.164981 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 12:13:05.164993 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 12:13:05.165004 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 12:13:05.165017 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 12:13:05.165029 systemd[1]: Reached target machines.target - Containers.
May 15 12:13:05.165041 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 12:13:05.165053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 12:13:05.165067 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 12:13:05.165079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 12:13:05.165091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 12:13:05.165103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 12:13:05.165115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 12:13:05.165127 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 12:13:05.165139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 12:13:05.165152 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 12:13:05.165166 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 12:13:05.165178 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 12:13:05.165190 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 12:13:05.165201 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 12:13:05.165214 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 12:13:05.165226 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 12:13:05.165237 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 12:13:05.165249 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 12:13:05.165261 kernel: loop: module loaded
May 15 12:13:05.165282 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 12:13:05.165295 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 12:13:05.165307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 12:13:05.165319 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 12:13:05.165331 systemd[1]: Stopped verity-setup.service.
May 15 12:13:05.165345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:13:05.165357 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 12:13:05.165369 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 12:13:05.165382 systemd[1]: Mounted media.mount - External Media Directory.
May 15 12:13:05.165396 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 12:13:05.165409 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 12:13:05.165421 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 12:13:05.165447 kernel: ACPI: bus type drm_connector registered
May 15 12:13:05.165459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:13:05.165470 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 12:13:05.165482 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 12:13:05.165493 kernel: fuse: init (API version 7.41)
May 15 12:13:05.165505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 12:13:05.165520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 12:13:05.165537 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 12:13:05.165667 systemd-journald[1195]: Collecting audit messages is disabled.
May 15 12:13:05.165693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 12:13:05.165706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 12:13:05.165719 systemd-journald[1195]: Journal started
May 15 12:13:05.165740 systemd-journald[1195]: Runtime Journal (/run/log/journal/ff6eaf94f1294658ad87a56214944e9f) is 6M, max 48.6M, 42.5M free.
May 15 12:13:04.771843 systemd[1]: Queued start job for default target multi-user.target.
May 15 12:13:04.796815 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 12:13:04.797409 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 12:13:05.167438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 12:13:05.171588 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 12:13:05.172615 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 12:13:05.174099 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 12:13:05.174325 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 12:13:05.175684 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 12:13:05.175885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 12:13:05.177278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 12:13:05.178768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:13:05.180432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 12:13:05.181952 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 12:13:05.196011 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 12:13:05.198717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 12:13:05.200981 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 12:13:05.202367 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 12:13:05.202398 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 12:13:05.204575 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 12:13:05.212681 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 12:13:05.214769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 12:13:05.216811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 12:13:05.219628 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 12:13:05.221128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 12:13:05.223801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 12:13:05.225099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 12:13:05.226327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 12:13:05.228926 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 12:13:05.232526 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 12:13:05.235736 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 12:13:05.337685 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 12:13:05.354584 kernel: loop0: detected capacity change from 0 to 113872
May 15 12:13:05.354686 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 12:13:05.356203 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 12:13:05.361715 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 12:13:05.363016 systemd-journald[1195]: Time spent on flushing to /var/log/journal/ff6eaf94f1294658ad87a56214944e9f is 14.883ms for 986 entries.
May 15 12:13:05.363016 systemd-journald[1195]: System Journal (/var/log/journal/ff6eaf94f1294658ad87a56214944e9f) is 8M, max 195.6M, 187.6M free.
May 15 12:13:05.384253 systemd-journald[1195]: Received client request to flush runtime journal.
May 15 12:13:05.384310 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 12:13:05.364536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:13:05.370970 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 12:13:05.387111 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 12:13:05.391030 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 12:13:05.395253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 12:13:05.404581 kernel: loop1: detected capacity change from 0 to 146240
May 15 12:13:05.417393 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 12:13:05.432646 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
May 15 12:13:05.432665 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
May 15 12:13:05.439478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:13:05.442577 kernel: loop2: detected capacity change from 0 to 218376
May 15 12:13:05.496690 kernel: loop3: detected capacity change from 0 to 113872
May 15 12:13:05.507638 kernel: loop4: detected capacity change from 0 to 146240
May 15 12:13:05.526595 kernel: loop5: detected capacity change from 0 to 218376
May 15 12:13:05.533025 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 12:13:05.533596 (sd-merge)[1273]: Merged extensions into '/usr'.
May 15 12:13:05.540987 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 12:13:05.541006 systemd[1]: Reloading...
May 15 12:13:05.624644 zram_generator::config[1297]: No configuration found.
May 15 12:13:05.760234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 12:13:05.857863 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 12:13:05.858168 systemd[1]: Reloading finished in 316 ms.
May 15 12:13:05.903656 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 12:13:05.916672 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 12:13:05.918641 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 12:13:05.950840 systemd[1]: Starting ensure-sysext.service...
May 15 12:13:05.953412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 12:13:05.965948 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 15 12:13:05.965975 systemd[1]: Reloading...
May 15 12:13:05.994619 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 15 12:13:05.995115 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 15 12:13:05.995415 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 12:13:05.995701 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 12:13:05.996601 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 12:13:05.996857 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 15 12:13:05.996932 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 15 12:13:06.001062 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 15 12:13:06.001075 systemd-tmpfiles[1337]: Skipping /boot
May 15 12:13:06.014209 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 15 12:13:06.014875 systemd-tmpfiles[1337]: Skipping /boot
May 15 12:13:06.063598 zram_generator::config[1367]: No configuration found.
May 15 12:13:06.179801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 12:13:06.275411 systemd[1]: Reloading finished in 308 ms.
May 15 12:13:06.302722 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 12:13:06.333512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:13:06.346323 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 12:13:06.349692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 12:13:06.371764 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 12:13:06.377515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 12:13:06.383402 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:13:06.388729 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 12:13:06.393114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 12:13:06.393305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 12:13:06.400439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:13:06.404949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:13:06.409262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:13:06.410756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:13:06.410869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:13:06.416309 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 12:13:06.480138 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:13:06.483628 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 12:13:06.486458 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 12:13:06.488717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:13:06.495621 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:13:06.496805 augenrules[1432]: No rules May 15 12:13:06.498080 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:13:06.498408 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:13:06.501158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:13:06.501464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:13:06.503607 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:13:06.503821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 15 12:13:06.514426 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:13:06.514968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:13:06.517326 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 12:13:06.518720 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:13:06.518739 systemd-udevd[1411]: Using default interface naming scheme 'v255'. May 15 12:13:06.524217 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 12:13:06.532297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:13:06.536830 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:13:06.543856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:13:06.553748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:13:06.556918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:13:06.560706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:13:06.562856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:13:06.564315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 12:13:06.564435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:13:06.564588 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:13:06.564676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:13:06.567745 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 12:13:06.570080 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 12:13:06.572964 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:13:06.575299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:13:06.576528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:13:06.578727 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:13:06.579174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:13:06.584697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:13:06.587097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:13:06.589368 systemd[1]: Finished ensure-sysext.service. May 15 12:13:06.604082 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:13:06.605669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:13:06.610691 augenrules[1445]: /sbin/augenrules: No change May 15 12:13:06.624408 augenrules[1504]: No rules May 15 12:13:06.625845 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 15 12:13:06.627705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:13:06.627779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:13:06.632747 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 12:13:06.634694 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:13:06.636623 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:13:06.657121 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 12:13:06.702767 systemd-resolved[1407]: Positive Trust Anchors: May 15 12:13:06.702788 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:13:06.702832 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:13:06.713140 systemd-resolved[1407]: Defaulting to hostname 'linux'. May 15 12:13:06.718198 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:13:06.725404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:13:06.760852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 15 12:13:06.764152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 12:13:06.772599 kernel: mousedev: PS/2 mouse device common for all mice May 15 12:13:06.791628 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 12:13:06.795575 kernel: ACPI: button: Power Button [PWRF] May 15 12:13:06.798927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 12:13:06.830381 systemd-networkd[1509]: lo: Link UP May 15 12:13:06.830396 systemd-networkd[1509]: lo: Gained carrier May 15 12:13:06.832122 systemd-networkd[1509]: Enumeration completed May 15 12:13:06.832234 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:13:06.833586 systemd[1]: Reached target network.target - Network. May 15 12:13:06.834928 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:13:06.834941 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:13:06.836811 systemd-networkd[1509]: eth0: Link UP May 15 12:13:06.836930 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 12:13:06.840336 systemd-networkd[1509]: eth0: Gained carrier May 15 12:13:06.840374 systemd-networkd[1509]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:13:06.840588 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 12:13:06.842043 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 12:13:06.842339 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 15 12:13:06.848125 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 12:13:06.850040 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:13:06.851302 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 12:13:06.852612 systemd-networkd[1509]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 12:13:06.852616 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 12:13:06.853761 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. May 15 12:13:06.853946 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 12:13:07.515052 systemd-resolved[1407]: Clock change detected. Flushing caches. May 15 12:13:07.515156 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 12:13:07.515201 systemd-timesyncd[1510]: Initial clock synchronization to Thu 2025-05-15 12:13:07.515008 UTC. May 15 12:13:07.515455 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 12:13:07.516839 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 12:13:07.516868 systemd[1]: Reached target paths.target - Path Units. May 15 12:13:07.517923 systemd[1]: Reached target time-set.target - System Time Set. May 15 12:13:07.519290 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 12:13:07.520613 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 12:13:07.522029 systemd[1]: Reached target timers.target - Timer Units. May 15 12:13:07.524771 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
May 15 12:13:07.528202 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 12:13:07.536030 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 12:13:07.537534 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 12:13:07.538835 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 12:13:07.582485 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 12:13:07.585116 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 12:13:07.590859 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 12:13:07.592569 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 12:13:07.596493 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:13:07.597671 systemd[1]: Reached target basic.target - Basic System. May 15 12:13:07.598997 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 12:13:07.599029 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 12:13:07.600350 systemd[1]: Starting containerd.service - containerd container runtime... May 15 12:13:07.603930 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 12:13:07.606032 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 12:13:07.615916 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 12:13:07.618893 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 12:13:07.620021 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 15 12:13:07.621923 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 12:13:07.627828 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 12:13:07.633758 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 12:13:07.635940 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 12:13:07.638265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 12:13:07.645782 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache May 15 12:13:07.645797 oslogin_cache_refresh[1552]: Refreshing passwd entry cache May 15 12:13:07.646745 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 12:13:07.648805 jq[1550]: false May 15 12:13:07.649149 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 12:13:07.649751 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 12:13:07.653674 systemd[1]: Starting update-engine.service - Update Engine... May 15 12:13:07.659556 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting May 15 12:13:07.659556 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:13:07.659523 oslogin_cache_refresh[1552]: Failure getting users, quitting May 15 12:13:07.659667 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache May 15 12:13:07.659545 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
May 15 12:13:07.659598 oslogin_cache_refresh[1552]: Refreshing group entry cache May 15 12:13:07.667447 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting May 15 12:13:07.667447 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:13:07.667440 oslogin_cache_refresh[1552]: Failure getting groups, quitting May 15 12:13:07.667451 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:13:07.677857 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 12:13:07.691161 extend-filesystems[1551]: Found loop3 May 15 12:13:07.691161 extend-filesystems[1551]: Found loop4 May 15 12:13:07.691161 extend-filesystems[1551]: Found loop5 May 15 12:13:07.691161 extend-filesystems[1551]: Found sr0 May 15 12:13:07.691161 extend-filesystems[1551]: Found vda May 15 12:13:07.691161 extend-filesystems[1551]: Found vda1 May 15 12:13:07.691161 extend-filesystems[1551]: Found vda2 May 15 12:13:07.691161 extend-filesystems[1551]: Found vda3 May 15 12:13:07.691161 extend-filesystems[1551]: Found usr May 15 12:13:07.691161 extend-filesystems[1551]: Found vda4 May 15 12:13:07.691161 extend-filesystems[1551]: Found vda6 May 15 12:13:07.691161 extend-filesystems[1551]: Found vda7 May 15 12:13:07.691161 extend-filesystems[1551]: Found vda9 May 15 12:13:07.691161 extend-filesystems[1551]: Checking size of /dev/vda9 May 15 12:13:07.702592 update_engine[1562]: I20250515 12:13:07.694777 1562 main.cc:92] Flatcar Update Engine starting May 15 12:13:07.734135 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 12:13:07.741660 jq[1569]: true May 15 12:13:07.748833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 15 12:13:07.749162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 12:13:07.749858 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 12:13:07.750459 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 12:13:07.752286 systemd[1]: motdgen.service: Deactivated successfully. May 15 12:13:07.752852 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 12:13:07.757711 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 12:13:07.757981 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 12:13:07.778204 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 12:13:07.789880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 12:13:07.805179 extend-filesystems[1551]: Resized partition /dev/vda9 May 15 12:13:07.809224 jq[1574]: true May 15 12:13:07.813721 extend-filesystems[1589]: resize2fs 1.47.2 (1-Jan-2025) May 15 12:13:07.821719 kernel: kvm_amd: TSC scaling supported May 15 12:13:07.821763 kernel: kvm_amd: Nested Virtualization enabled May 15 12:13:07.821776 kernel: kvm_amd: Nested Paging enabled May 15 12:13:07.821817 kernel: kvm_amd: LBR virtualization supported May 15 12:13:07.822961 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 15 12:13:07.822993 kernel: kvm_amd: Virtual GIF supported May 15 12:13:07.827678 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 12:13:07.833764 tar[1573]: linux-amd64/LICENSE May 15 12:13:07.835335 tar[1573]: linux-amd64/helm May 15 12:13:07.844400 systemd-logind[1559]: Watching system buttons on /dev/input/event2 (Power Button) May 15 12:13:07.844429 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 12:13:07.846799 
systemd-logind[1559]: New seat seat0. May 15 12:13:07.856422 systemd[1]: Started systemd-logind.service - User Login Management. May 15 12:13:07.905204 dbus-daemon[1547]: [system] SELinux support is enabled May 15 12:13:08.023136 kernel: EDAC MC: Ver: 3.0.0 May 15 12:13:07.906849 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 12:13:07.918312 dbus-daemon[1547]: [system] Successfully activated service 'org.freedesktop.systemd1' May 15 12:13:08.023408 update_engine[1562]: I20250515 12:13:07.928819 1562 update_check_scheduler.cc:74] Next update check in 6m40s May 15 12:13:07.909759 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 12:13:07.909785 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 12:13:07.909873 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 12:13:07.909890 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 12:13:07.933087 systemd[1]: Started update-engine.service - Update Engine. May 15 12:13:07.962037 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 12:13:08.008778 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 12:13:08.027741 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:13:08.051150 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 12:13:08.055327 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:13:08.081780 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:13:08.082174 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 15 12:13:08.090481 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:13:08.127756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:13:08.137963 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:13:08.141171 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:13:08.143992 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 12:13:08.145480 systemd[1]: Reached target getty.target - Login Prompts. May 15 12:13:08.188688 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 12:13:08.245068 extend-filesystems[1589]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 12:13:08.245068 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 12:13:08.245068 extend-filesystems[1589]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 12:13:08.250364 extend-filesystems[1551]: Resized filesystem in /dev/vda9 May 15 12:13:08.246548 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 12:13:08.246894 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 12:13:08.253447 bash[1610]: Updated "/home/core/.ssh/authorized_keys" May 15 12:13:08.254773 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 12:13:08.258719 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 15 12:13:08.299472 containerd[1575]: time="2025-05-15T12:13:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 12:13:08.302660 containerd[1575]: time="2025-05-15T12:13:08.302395592Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 12:13:08.311170 containerd[1575]: time="2025-05-15T12:13:08.311111560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.58µs" May 15 12:13:08.311170 containerd[1575]: time="2025-05-15T12:13:08.311160222Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 12:13:08.311275 containerd[1575]: time="2025-05-15T12:13:08.311191110Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 12:13:08.311424 containerd[1575]: time="2025-05-15T12:13:08.311398248Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 12:13:08.311424 containerd[1575]: time="2025-05-15T12:13:08.311419348Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 12:13:08.311481 containerd[1575]: time="2025-05-15T12:13:08.311451157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:13:08.311572 containerd[1575]: time="2025-05-15T12:13:08.311525697Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:13:08.311572 containerd[1575]: time="2025-05-15T12:13:08.311543952Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 
12:13:08.311900 containerd[1575]: time="2025-05-15T12:13:08.311877858Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:13:08.311900 containerd[1575]: time="2025-05-15T12:13:08.311894870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:13:08.311971 containerd[1575]: time="2025-05-15T12:13:08.311909267Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:13:08.311971 containerd[1575]: time="2025-05-15T12:13:08.311921840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 12:13:08.312024 containerd[1575]: time="2025-05-15T12:13:08.312013452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 12:13:08.312324 containerd[1575]: time="2025-05-15T12:13:08.312268450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:13:08.312324 containerd[1575]: time="2025-05-15T12:13:08.312329916Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:13:08.312472 containerd[1575]: time="2025-05-15T12:13:08.312342239Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 12:13:08.312472 containerd[1575]: time="2025-05-15T12:13:08.312423020Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 12:13:08.312829 
containerd[1575]: time="2025-05-15T12:13:08.312785280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 12:13:08.312960 containerd[1575]: time="2025-05-15T12:13:08.312938056Z" level=info msg="metadata content store policy set" policy=shared May 15 12:13:08.319760 containerd[1575]: time="2025-05-15T12:13:08.319719246Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 12:13:08.319806 containerd[1575]: time="2025-05-15T12:13:08.319776443Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 12:13:08.319806 containerd[1575]: time="2025-05-15T12:13:08.319796801Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 12:13:08.319843 containerd[1575]: time="2025-05-15T12:13:08.319814094Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 12:13:08.319843 containerd[1575]: time="2025-05-15T12:13:08.319831336Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 12:13:08.319896 containerd[1575]: time="2025-05-15T12:13:08.319844531Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 12:13:08.319896 containerd[1575]: time="2025-05-15T12:13:08.319866172Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 12:13:08.319896 containerd[1575]: time="2025-05-15T12:13:08.319881901Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 12:13:08.319952 containerd[1575]: time="2025-05-15T12:13:08.319896799Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 12:13:08.319952 containerd[1575]: 
time="2025-05-15T12:13:08.319910966Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 12:13:08.319952 containerd[1575]: time="2025-05-15T12:13:08.319922628Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 12:13:08.319952 containerd[1575]: time="2025-05-15T12:13:08.319938497Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 12:13:08.320121 containerd[1575]: time="2025-05-15T12:13:08.320092466Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 12:13:08.320146 containerd[1575]: time="2025-05-15T12:13:08.320127532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:13:08.320165 containerd[1575]: time="2025-05-15T12:13:08.320146938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 12:13:08.320185 containerd[1575]: time="2025-05-15T12:13:08.320165122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:13:08.320205 containerd[1575]: time="2025-05-15T12:13:08.320181092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:13:08.320205 containerd[1575]: time="2025-05-15T12:13:08.320195900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:13:08.320247 containerd[1575]: time="2025-05-15T12:13:08.320209816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:13:08.320247 containerd[1575]: time="2025-05-15T12:13:08.320222881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:13:08.320247 containerd[1575]: time="2025-05-15T12:13:08.320236296Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:13:08.320314 containerd[1575]: time="2025-05-15T12:13:08.320249430Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:13:08.320314 containerd[1575]: time="2025-05-15T12:13:08.320263076Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:13:08.320422 containerd[1575]: time="2025-05-15T12:13:08.320352133Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:13:08.320446 containerd[1575]: time="2025-05-15T12:13:08.320424469Z" level=info msg="Start snapshots syncer" May 15 12:13:08.320466 containerd[1575]: time="2025-05-15T12:13:08.320453734Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:13:08.320813 containerd[1575]: time="2025-05-15T12:13:08.320763314Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:13:08.320920 containerd[1575]: time="2025-05-15T12:13:08.320828366Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:13:08.321696 containerd[1575]: time="2025-05-15T12:13:08.321666949Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:13:08.321824 containerd[1575]: time="2025-05-15T12:13:08.321791533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:13:08.321824 containerd[1575]: time="2025-05-15T12:13:08.321821850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:13:08.321909 containerd[1575]: time="2025-05-15T12:13:08.321835245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:13:08.321909 containerd[1575]: time="2025-05-15T12:13:08.321860623Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:13:08.321909 containerd[1575]: time="2025-05-15T12:13:08.321874859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:13:08.321909 containerd[1575]: time="2025-05-15T12:13:08.321887794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:13:08.321909 containerd[1575]: time="2025-05-15T12:13:08.321901209Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:13:08.322008 containerd[1575]: time="2025-05-15T12:13:08.321935193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:13:08.322008 containerd[1575]: time="2025-05-15T12:13:08.321948918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:13:08.322008 containerd[1575]: time="2025-05-15T12:13:08.321961642Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:13:08.322655 containerd[1575]: time="2025-05-15T12:13:08.322609638Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:13:08.322695 containerd[1575]: time="2025-05-15T12:13:08.322672135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:13:08.322695 containerd[1575]: time="2025-05-15T12:13:08.322686742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:13:08.322742 containerd[1575]: time="2025-05-15T12:13:08.322699025Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:13:08.322742 containerd[1575]: time="2025-05-15T12:13:08.322709695Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:13:08.322742 containerd[1575]: time="2025-05-15T12:13:08.322720956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:13:08.322742 containerd[1575]: time="2025-05-15T12:13:08.322732578Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:13:08.322842 containerd[1575]: time="2025-05-15T12:13:08.322752856Z" level=info msg="runtime interface created" May 15 12:13:08.322842 containerd[1575]: time="2025-05-15T12:13:08.322761122Z" level=info msg="created NRI interface" May 15 12:13:08.322842 containerd[1575]: time="2025-05-15T12:13:08.322775799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:13:08.322842 containerd[1575]: time="2025-05-15T12:13:08.322788954Z" level=info msg="Connect containerd service" May 15 12:13:08.322842 containerd[1575]: time="2025-05-15T12:13:08.322820263Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:13:08.323727 
containerd[1575]: time="2025-05-15T12:13:08.323702578Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:13:08.416363 tar[1573]: linux-amd64/README.md May 15 12:13:08.422908 containerd[1575]: time="2025-05-15T12:13:08.422797241Z" level=info msg="Start subscribing containerd event" May 15 12:13:08.422908 containerd[1575]: time="2025-05-15T12:13:08.422879024Z" level=info msg="Start recovering state" May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423034265Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423142729Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423040557Z" level=info msg="Start event monitor" May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423179618Z" level=info msg="Start cni network conf syncer for default" May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423188946Z" level=info msg="Start streaming server" May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423199275Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423206709Z" level=info msg="runtime interface starting up..." May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423212580Z" level=info msg="starting plugins..." 
May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423239360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:13:08.423600 containerd[1575]: time="2025-05-15T12:13:08.423383110Z" level=info msg="containerd successfully booted in 0.124610s" May 15 12:13:08.423583 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:13:08.436511 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 12:13:09.360814 systemd-networkd[1509]: eth0: Gained IPv6LL May 15 12:13:09.364047 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 12:13:09.366038 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:13:09.368770 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 12:13:09.395005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:13:09.397780 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:13:09.415627 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 12:13:09.416010 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 12:13:09.417966 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:13:09.422721 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 12:13:10.091408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:13:10.093257 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:13:10.094622 systemd[1]: Startup finished in 3.164s (kernel) + 6.558s (initrd) + 5.283s (userspace) = 15.005s. 
May 15 12:13:10.095267 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:13:10.506841 kubelet[1688]: E0515 12:13:10.506684 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:13:10.510440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:13:10.510690 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:13:10.511083 systemd[1]: kubelet.service: Consumed 944ms CPU time, 253M memory peak. May 15 12:13:11.509102 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:13:11.510443 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:41898.service - OpenSSH per-connection server daemon (10.0.0.1:41898). May 15 12:13:11.582813 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 41898 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:11.584798 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:11.591121 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 12:13:11.592249 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:13:11.599468 systemd-logind[1559]: New session 1 of user core. May 15 12:13:11.627447 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 12:13:11.630488 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 15 12:13:11.644176 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:13:11.646504 systemd-logind[1559]: New session c1 of user core. May 15 12:13:11.803442 systemd[1705]: Queued start job for default target default.target. May 15 12:13:11.820015 systemd[1705]: Created slice app.slice - User Application Slice. May 15 12:13:11.820041 systemd[1705]: Reached target paths.target - Paths. May 15 12:13:11.820093 systemd[1705]: Reached target timers.target - Timers. May 15 12:13:11.821686 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 12:13:11.832138 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:13:11.832279 systemd[1705]: Reached target sockets.target - Sockets. May 15 12:13:11.832323 systemd[1705]: Reached target basic.target - Basic System. May 15 12:13:11.832368 systemd[1705]: Reached target default.target - Main User Target. May 15 12:13:11.832405 systemd[1705]: Startup finished in 178ms. May 15 12:13:11.832807 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:13:11.843788 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 12:13:11.914386 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:41912.service - OpenSSH per-connection server daemon (10.0.0.1:41912). May 15 12:13:11.975453 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 41912 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:11.976685 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:11.980699 systemd-logind[1559]: New session 2 of user core. May 15 12:13:11.990769 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 15 12:13:12.042940 sshd[1718]: Connection closed by 10.0.0.1 port 41912 May 15 12:13:12.043220 sshd-session[1716]: pam_unix(sshd:session): session closed for user core May 15 12:13:12.052963 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:41912.service: Deactivated successfully. May 15 12:13:12.054579 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:13:12.055390 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. May 15 12:13:12.057810 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:41914.service - OpenSSH per-connection server daemon (10.0.0.1:41914). May 15 12:13:12.058543 systemd-logind[1559]: Removed session 2. May 15 12:13:12.101054 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 41914 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:12.102475 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:12.106973 systemd-logind[1559]: New session 3 of user core. May 15 12:13:12.118779 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 12:13:12.169507 sshd[1726]: Connection closed by 10.0.0.1 port 41914 May 15 12:13:12.169945 sshd-session[1724]: pam_unix(sshd:session): session closed for user core May 15 12:13:12.187432 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:41914.service: Deactivated successfully. May 15 12:13:12.189609 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:13:12.190438 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. May 15 12:13:12.193862 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:41926.service - OpenSSH per-connection server daemon (10.0.0.1:41926). May 15 12:13:12.194512 systemd-logind[1559]: Removed session 3. 
May 15 12:13:12.237138 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 41926 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:12.238839 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:12.243994 systemd-logind[1559]: New session 4 of user core. May 15 12:13:12.258907 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 12:13:12.312538 sshd[1734]: Connection closed by 10.0.0.1 port 41926 May 15 12:13:12.312865 sshd-session[1732]: pam_unix(sshd:session): session closed for user core May 15 12:13:12.326448 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:41926.service: Deactivated successfully. May 15 12:13:12.328275 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:13:12.328993 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. May 15 12:13:12.331777 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:41938.service - OpenSSH per-connection server daemon (10.0.0.1:41938). May 15 12:13:12.332530 systemd-logind[1559]: Removed session 4. May 15 12:13:12.375245 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 41938 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:12.376934 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:12.381519 systemd-logind[1559]: New session 5 of user core. May 15 12:13:12.390857 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 12:13:12.450596 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:13:12.450909 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:13:12.472707 sudo[1743]: pam_unix(sudo:session): session closed for user root May 15 12:13:12.474564 sshd[1742]: Connection closed by 10.0.0.1 port 41938 May 15 12:13:12.474947 sshd-session[1740]: pam_unix(sshd:session): session closed for user core May 15 12:13:12.489541 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:41938.service: Deactivated successfully. May 15 12:13:12.491386 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:13:12.492193 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. May 15 12:13:12.495272 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:41954.service - OpenSSH per-connection server daemon (10.0.0.1:41954). May 15 12:13:12.495851 systemd-logind[1559]: Removed session 5. May 15 12:13:12.549344 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 41954 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:12.551461 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:12.555904 systemd-logind[1559]: New session 6 of user core. May 15 12:13:12.567872 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 12:13:12.622541 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:13:12.622934 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:13:12.630762 sudo[1753]: pam_unix(sudo:session): session closed for user root May 15 12:13:12.638356 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:13:12.638724 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:13:12.649135 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:13:12.698387 augenrules[1775]: No rules May 15 12:13:12.700207 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:13:12.700517 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:13:12.701821 sudo[1752]: pam_unix(sudo:session): session closed for user root May 15 12:13:12.703396 sshd[1751]: Connection closed by 10.0.0.1 port 41954 May 15 12:13:12.703945 sshd-session[1749]: pam_unix(sshd:session): session closed for user core May 15 12:13:12.712395 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:41954.service: Deactivated successfully. May 15 12:13:12.714196 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:13:12.715015 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. May 15 12:13:12.717827 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:41964.service - OpenSSH per-connection server daemon (10.0.0.1:41964). May 15 12:13:12.718545 systemd-logind[1559]: Removed session 6. May 15 12:13:12.774263 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 41964 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:13:12.775882 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:13:12.780631 systemd-logind[1559]: New session 7 of user core. 
May 15 12:13:12.790883 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 12:13:12.844504 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:13:12.844984 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:13:13.675029 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 12:13:13.703015 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:13:14.227419 dockerd[1807]: time="2025-05-15T12:13:14.227336437Z" level=info msg="Starting up" May 15 12:13:14.228224 dockerd[1807]: time="2025-05-15T12:13:14.228197993Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:13:15.328931 dockerd[1807]: time="2025-05-15T12:13:15.328861694Z" level=info msg="Loading containers: start." May 15 12:13:15.340678 kernel: Initializing XFRM netlink socket May 15 12:13:15.583932 systemd-networkd[1509]: docker0: Link UP May 15 12:13:15.595570 dockerd[1807]: time="2025-05-15T12:13:15.595474313Z" level=info msg="Loading containers: done." May 15 12:13:15.610242 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1354985258-merged.mount: Deactivated successfully. 
May 15 12:13:15.618730 dockerd[1807]: time="2025-05-15T12:13:15.618627131Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:13:15.618870 dockerd[1807]: time="2025-05-15T12:13:15.618776140Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:13:15.618987 dockerd[1807]: time="2025-05-15T12:13:15.618951229Z" level=info msg="Initializing buildkit" May 15 12:13:15.741004 dockerd[1807]: time="2025-05-15T12:13:15.740727045Z" level=info msg="Completed buildkit initialization" May 15 12:13:15.747572 dockerd[1807]: time="2025-05-15T12:13:15.747494189Z" level=info msg="Daemon has completed initialization" May 15 12:13:15.747750 dockerd[1807]: time="2025-05-15T12:13:15.747658377Z" level=info msg="API listen on /run/docker.sock" May 15 12:13:15.747873 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:13:16.931705 containerd[1575]: time="2025-05-15T12:13:16.931657871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 12:13:17.794220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747125.mount: Deactivated successfully. 
May 15 12:13:19.057252 containerd[1575]: time="2025-05-15T12:13:19.057187257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:19.057963 containerd[1575]: time="2025-05-15T12:13:19.057927325Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 15 12:13:19.059070 containerd[1575]: time="2025-05-15T12:13:19.059033550Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:19.061340 containerd[1575]: time="2025-05-15T12:13:19.061309609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:19.062229 containerd[1575]: time="2025-05-15T12:13:19.062183799Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.130483838s" May 15 12:13:19.062229 containerd[1575]: time="2025-05-15T12:13:19.062217182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 12:13:19.062707 containerd[1575]: time="2025-05-15T12:13:19.062683727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 12:13:20.577790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 15 12:13:20.579780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:13:20.777004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:13:20.781955 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:13:20.902236 kubelet[2081]: E0515 12:13:20.902085 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:13:20.908602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:13:20.908865 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:13:20.909271 systemd[1]: kubelet.service: Consumed 220ms CPU time, 104.7M memory peak. 
May 15 12:13:21.124294 containerd[1575]: time="2025-05-15T12:13:21.124165237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:21.125561 containerd[1575]: time="2025-05-15T12:13:21.125471968Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 15 12:13:21.127090 containerd[1575]: time="2025-05-15T12:13:21.127046241Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:21.129555 containerd[1575]: time="2025-05-15T12:13:21.129517536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:21.130585 containerd[1575]: time="2025-05-15T12:13:21.130527330Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.067815591s" May 15 12:13:21.130585 containerd[1575]: time="2025-05-15T12:13:21.130578697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 12:13:21.131318 containerd[1575]: time="2025-05-15T12:13:21.131280653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 12:13:23.216419 containerd[1575]: time="2025-05-15T12:13:23.216344076Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:23.217188 containerd[1575]: time="2025-05-15T12:13:23.217149577Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 15 12:13:23.218428 containerd[1575]: time="2025-05-15T12:13:23.218385335Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:23.221089 containerd[1575]: time="2025-05-15T12:13:23.221051797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:23.221964 containerd[1575]: time="2025-05-15T12:13:23.221928090Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.090609997s" May 15 12:13:23.221964 containerd[1575]: time="2025-05-15T12:13:23.221954340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 12:13:23.222454 containerd[1575]: time="2025-05-15T12:13:23.222407139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 12:13:24.427398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186956986.mount: Deactivated successfully. 
May 15 12:13:25.323292 containerd[1575]: time="2025-05-15T12:13:25.323193053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:25.362018 containerd[1575]: time="2025-05-15T12:13:25.361936284Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 15 12:13:25.391364 containerd[1575]: time="2025-05-15T12:13:25.391294852Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:25.454039 containerd[1575]: time="2025-05-15T12:13:25.453980553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:25.454546 containerd[1575]: time="2025-05-15T12:13:25.454504225Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.232064495s" May 15 12:13:25.454593 containerd[1575]: time="2025-05-15T12:13:25.454549901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 12:13:25.455150 containerd[1575]: time="2025-05-15T12:13:25.455112496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 12:13:25.984745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388089846.mount: Deactivated successfully. 
May 15 12:13:27.241396 containerd[1575]: time="2025-05-15T12:13:27.241326302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:27.242045 containerd[1575]: time="2025-05-15T12:13:27.242017499Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 12:13:27.243361 containerd[1575]: time="2025-05-15T12:13:27.243326624Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:27.246449 containerd[1575]: time="2025-05-15T12:13:27.246399067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:27.247548 containerd[1575]: time="2025-05-15T12:13:27.247492709Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.792342392s" May 15 12:13:27.247609 containerd[1575]: time="2025-05-15T12:13:27.247551760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 12:13:27.248143 containerd[1575]: time="2025-05-15T12:13:27.248059171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 12:13:27.679879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610599509.mount: Deactivated successfully. 
May 15 12:13:27.685233 containerd[1575]: time="2025-05-15T12:13:27.685185145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 12:13:27.685976 containerd[1575]: time="2025-05-15T12:13:27.685948627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 15 12:13:27.687124 containerd[1575]: time="2025-05-15T12:13:27.687078246Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 12:13:27.689179 containerd[1575]: time="2025-05-15T12:13:27.689138029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 12:13:27.689843 containerd[1575]: time="2025-05-15T12:13:27.689800041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 441.657884ms"
May 15 12:13:27.689843 containerd[1575]: time="2025-05-15T12:13:27.689838052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 15 12:13:27.690443 containerd[1575]: time="2025-05-15T12:13:27.690295671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 15 12:13:28.651797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150396284.mount: Deactivated successfully.
May 15 12:13:31.077780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 12:13:31.079710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 12:13:31.096207 containerd[1575]: time="2025-05-15T12:13:31.096169306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:13:31.097109 containerd[1575]: time="2025-05-15T12:13:31.096835636Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 15 12:13:31.098266 containerd[1575]: time="2025-05-15T12:13:31.098240982Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:13:31.100975 containerd[1575]: time="2025-05-15T12:13:31.100937410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:13:31.101920 containerd[1575]: time="2025-05-15T12:13:31.101888564Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.411568968s"
May 15 12:13:31.101975 containerd[1575]: time="2025-05-15T12:13:31.101921696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 12:13:31.250983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
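The two "Pulled image" records above (pause:3.10 in ~442 ms, etcd:3.5.16-0 in ~3.4 s) follow a fixed message shape, so image name, size, and duration can be extracted mechanically. A minimal sketch of such a parser (illustrative only, not part of containerd):

```python
import re

# containerd's "Pulled image" message body, copied from the journal above
line = ('Pulled image "registry.k8s.io/pause:3.10" with image id '
        '"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136", '
        'repo tag "registry.k8s.io/pause:3.10", repo digest '
        '"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a", '
        'size "320368" in 441.657884ms')

# capture image reference, byte size, and duration (ms or s suffix)
m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(ms|s)', line)
image = m.group(1)
size = int(m.group(2))
value, unit = float(m.group(3)), m.group(4)
seconds = value / 1000 if unit == "ms" else value
print(image, size, round(seconds, 3))  # registry.k8s.io/pause:3.10 320368 0.442
```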
May 15 12:13:31.255253 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 12:13:31.555075 kubelet[2229]: E0515 12:13:31.554889 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 12:13:31.558802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:13:31.558998 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 12:13:31.559404 systemd[1]: kubelet.service: Consumed 208ms CPU time, 104.2M memory peak.
May 15 12:13:33.549684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 12:13:33.549855 systemd[1]: kubelet.service: Consumed 208ms CPU time, 104.2M memory peak.
May 15 12:13:33.552115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 12:13:33.577797 systemd[1]: Reload requested from client PID 2259 ('systemctl') (unit session-7.scope)...
May 15 12:13:33.577815 systemd[1]: Reloading...
May 15 12:13:33.669714 zram_generator::config[2307]: No configuration found.
May 15 12:13:33.840791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 12:13:33.966822 systemd[1]: Reloading finished in 388 ms.
May 15 12:13:34.027506 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 12:13:34.027614 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 12:13:34.027919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
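The exit above is the expected first-boot crash loop: kubelet exits with status=1 because /var/lib/kubelet/config.yaml does not exist yet (kubeadm writes it during init), and systemd keeps rescheduling the unit, which is why the restart counter is climbing. A small sketch (illustrative only) of extracting the missing path from the run.go error string:

```python
import re

# abbreviated copy of the run.go:72 error message logged above
err = ('failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
       'error: open /var/lib/kubelet/config.yaml: no such file or directory')

# the errno-style suffix names the file that has not been written yet
m = re.search(r'open (\S+): no such file or directory', err)
missing_path = m.group(1)
print(missing_path)  # /var/lib/kubelet/config.yaml
```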
May 15 12:13:34.027962 systemd[1]: kubelet.service: Consumed 157ms CPU time, 91.8M memory peak.
May 15 12:13:34.029606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 12:13:34.210546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 12:13:34.222144 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 12:13:34.263095 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 12:13:34.263095 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 12:13:34.263095 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 12:13:34.263492 kubelet[2349]: I0515 12:13:34.263158 2349 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 12:13:34.502765 kubelet[2349]: I0515 12:13:34.501676 2349 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 12:13:34.502765 kubelet[2349]: I0515 12:13:34.501719 2349 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 12:13:34.502765 kubelet[2349]: I0515 12:13:34.502233 2349 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 12:13:34.527858 kubelet[2349]: I0515 12:13:34.527812 2349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 12:13:34.527994 kubelet[2349]: E0515 12:13:34.527947 2349 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:34.536679 kubelet[2349]: I0515 12:13:34.536651 2349 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 15 12:13:34.541775 kubelet[2349]: I0515 12:13:34.541749 2349 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 12:13:34.542883 kubelet[2349]: I0515 12:13:34.542839 2349 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 12:13:34.543029 kubelet[2349]: I0515 12:13:34.542869 2349 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 12:13:34.543029 kubelet[2349]: I0515 12:13:34.543027 2349 topology_manager.go:138] "Creating topology manager with none policy"
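The nodeConfig dump logged by container_manager_linux.go above is valid JSON after the `nodeConfig=` prefix, so the hard-eviction thresholds it carries can be read back directly. A sketch using a copy trimmed to the fields shown in the log:

```python
import json

# trimmed copy of the nodeConfig JSON from the log entry above
node_config = json.loads('''{
  "CgroupsPerQOS": true, "CgroupRoot": "/", "CgroupDriver": "systemd",
  "KubeletRootDir": "/var/lib/kubelet",
  "HardEvictionThresholds": [
    {"Signal": "nodefs.inodesFree", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.15}},
    {"Signal": "memory.available", "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.1}}
  ]
}''')

# a threshold is either an absolute quantity (e.g. 100Mi) or a percentage
for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {limit}')
```

This matches the defaults visible in the log: eviction triggers when memory.available drops below 100Mi, or nodefs.available below 10%, and so on.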
May 15 12:13:34.543166 kubelet[2349]: I0515 12:13:34.543034 2349 container_manager_linux.go:304] "Creating device plugin manager"
May 15 12:13:34.543166 kubelet[2349]: I0515 12:13:34.543151 2349 state_mem.go:36] "Initialized new in-memory state store"
May 15 12:13:34.546105 kubelet[2349]: I0515 12:13:34.546082 2349 kubelet.go:446] "Attempting to sync node with API server"
May 15 12:13:34.546105 kubelet[2349]: I0515 12:13:34.546101 2349 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 12:13:34.546170 kubelet[2349]: I0515 12:13:34.546130 2349 kubelet.go:352] "Adding apiserver pod source"
May 15 12:13:34.546170 kubelet[2349]: I0515 12:13:34.546141 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 12:13:34.550840 kubelet[2349]: W0515 12:13:34.549429 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:34.550840 kubelet[2349]: E0515 12:13:34.549485 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:34.550840 kubelet[2349]: W0515 12:13:34.549735 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:34.550840 kubelet[2349]: E0515 12:13:34.549796 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:34.550968 kubelet[2349]: I0515 12:13:34.550938 2349 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 15 12:13:34.551722 kubelet[2349]: I0515 12:13:34.551691 2349 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 12:13:34.552242 kubelet[2349]: W0515 12:13:34.552206 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 12:13:34.554098 kubelet[2349]: I0515 12:13:34.554074 2349 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 12:13:34.554147 kubelet[2349]: I0515 12:13:34.554110 2349 server.go:1287] "Started kubelet"
May 15 12:13:34.556660 kubelet[2349]: I0515 12:13:34.556503 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 12:13:34.556660 kubelet[2349]: I0515 12:13:34.556568 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 12:13:34.556913 kubelet[2349]: I0515 12:13:34.556892 2349 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 12:13:34.556982 kubelet[2349]: I0515 12:13:34.556960 2349 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 12:13:34.557851 kubelet[2349]: I0515 12:13:34.557821 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 12:13:34.558182 kubelet[2349]: I0515 12:13:34.558166 2349 server.go:490] "Adding debug handlers to kubelet server"
May 15 12:13:34.560111 kubelet[2349]: E0515 12:13:34.559027 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fb24abf8d862d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 12:13:34.554089005 +0000 UTC m=+0.327406181,LastTimestamp:2025-05-15 12:13:34.554089005 +0000 UTC m=+0.327406181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 12:13:34.560111 kubelet[2349]: E0515 12:13:34.560084 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 12:13:34.560220 kubelet[2349]: I0515 12:13:34.560129 2349 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 12:13:34.560303 kubelet[2349]: I0515 12:13:34.560282 2349 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 12:13:34.560337 kubelet[2349]: I0515 12:13:34.560331 2349 reconciler.go:26] "Reconciler: start to sync state"
May 15 12:13:34.560467 kubelet[2349]: E0515 12:13:34.560361 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms"
May 15 12:13:34.560744 kubelet[2349]: W0515 12:13:34.560705 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:34.560779 kubelet[2349]: E0515 12:13:34.560753 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:34.560802 kubelet[2349]: I0515 12:13:34.560784 2349 factory.go:221] Registration of the systemd container factory successfully
May 15 12:13:34.560862 kubelet[2349]: E0515 12:13:34.560837 2349 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 12:13:34.560952 kubelet[2349]: I0515 12:13:34.560902 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 12:13:34.561841 kubelet[2349]: I0515 12:13:34.561797 2349 factory.go:221] Registration of the containerd container factory successfully
May 15 12:13:34.611854 kubelet[2349]: I0515 12:13:34.611798 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 12:13:34.613256 kubelet[2349]: I0515 12:13:34.613228 2349 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 12:13:34.613256 kubelet[2349]: I0515 12:13:34.613241 2349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 12:13:34.613256 kubelet[2349]: I0515 12:13:34.613257 2349 state_mem.go:36] "Initialized new in-memory state store"
May 15 12:13:34.613424 kubelet[2349]: I0515 12:13:34.613209 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 12:13:34.613456 kubelet[2349]: I0515 12:13:34.613438 2349 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 12:13:34.613480 kubelet[2349]: I0515 12:13:34.613457 2349 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 12:13:34.613480 kubelet[2349]: I0515 12:13:34.613464 2349 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 12:13:34.613695 kubelet[2349]: E0515 12:13:34.613652 2349 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 12:13:34.614949 kubelet[2349]: W0515 12:13:34.614711 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:34.615048 kubelet[2349]: E0515 12:13:34.615021 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:34.660431 kubelet[2349]: E0515 12:13:34.660391 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 12:13:34.714677 kubelet[2349]: E0515 12:13:34.714619 2349 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 12:13:34.761392 kubelet[2349]: E0515 12:13:34.761250 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 12:13:34.761850 kubelet[2349]: E0515 12:13:34.761700 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms"
May 15 12:13:34.862045 kubelet[2349]: E0515 12:13:34.861984 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 12:13:34.915285 kubelet[2349]: E0515 12:13:34.915202 2349 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 12:13:34.962669 kubelet[2349]: E0515 12:13:34.962607 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 12:13:35.027815 kubelet[2349]: I0515 12:13:35.027718 2349 policy_none.go:49] "None policy: Start"
May 15 12:13:35.027815 kubelet[2349]: I0515 12:13:35.027749 2349 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 12:13:35.027815 kubelet[2349]: I0515 12:13:35.027763 2349 state_mem.go:35] "Initializing new in-memory state store"
May 15 12:13:35.036146 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 12:13:35.048742 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 12:13:35.052072 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
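The repeated "Failed to ensure lease exists, will retry" messages show the retry interval doubling on each failure: 200 ms, then 400 ms, then 800 ms and 1.6 s later in the log. A sketch of that doubling schedule (the cap value below is an assumption for illustration, not taken from the log):

```python
def lease_retry_intervals(base_ms=200, factor=2, cap_ms=7000, attempts=6):
    """Doubling retry interval, as observed in the lease controller
    messages above; cap_ms is a hypothetical upper bound."""
    interval = base_ms
    out = []
    for _ in range(attempts):
        out.append(interval)
        interval = min(interval * factor, cap_ms)
    return out

print(lease_retry_intervals())  # [200, 400, 800, 1600, 3200, 6400]
```

The first four values match the `interval=` fields logged above exactly (200ms, 400ms, 800ms, 1.6s).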
May 15 12:13:35.061743 kubelet[2349]: I0515 12:13:35.061697 2349 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 12:13:35.062005 kubelet[2349]: I0515 12:13:35.061949 2349 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 12:13:35.062005 kubelet[2349]: I0515 12:13:35.061972 2349 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 12:13:35.062233 kubelet[2349]: I0515 12:13:35.062171 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 12:13:35.062873 kubelet[2349]: E0515 12:13:35.062849 2349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 12:13:35.062934 kubelet[2349]: E0515 12:13:35.062882 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 12:13:35.162847 kubelet[2349]: E0515 12:13:35.162797 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms"
May 15 12:13:35.163498 kubelet[2349]: I0515 12:13:35.163442 2349 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 12:13:35.163843 kubelet[2349]: E0515 12:13:35.163810 2349 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
May 15 12:13:35.324779 systemd[1]: Created slice kubepods-burstable-podf65f54a7c45fde3f4528ca36265bbe77.slice - libcontainer container kubepods-burstable-podf65f54a7c45fde3f4528ca36265bbe77.slice.
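Each static pod gets a libcontainer slice named from its QoS class and pod UID, as in the `kubepods-burstable-podf65f…` unit systemd just created above. A sketch reconstructing that unit name (the dash-to-underscore escaping is how kubelet's systemd cgroup driver handles dashed UIDs; the UIDs in this log happen to contain no dashes):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # systemd slice names use "-" as a hierarchy separator, so kubelet
    # replaces dashes inside the pod UID with underscores
    uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{uid}.slice"

print(pod_slice_name("burstable", "f65f54a7c45fde3f4528ca36265bbe77"))
# kubepods-burstable-podf65f54a7c45fde3f4528ca36265bbe77.slice
```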
May 15 12:13:35.356946 kubelet[2349]: E0515 12:13:35.356903 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 12:13:35.360375 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 15 12:13:35.363599 kubelet[2349]: I0515 12:13:35.363566 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 12:13:35.363599 kubelet[2349]: I0515 12:13:35.363595 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 12:13:35.363789 kubelet[2349]: I0515 12:13:35.363618 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 12:13:35.363789 kubelet[2349]: I0515 12:13:35.363632 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f65f54a7c45fde3f4528ca36265bbe77-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f65f54a7c45fde3f4528ca36265bbe77\") " pod="kube-system/kube-apiserver-localhost"
May 15 12:13:35.363789 kubelet[2349]: I0515 12:13:35.363663 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 12:13:35.363789 kubelet[2349]: I0515 12:13:35.363678 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 12:13:35.363789 kubelet[2349]: I0515 12:13:35.363695 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 12:13:35.363905 kubelet[2349]: I0515 12:13:35.363709 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f65f54a7c45fde3f4528ca36265bbe77-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f65f54a7c45fde3f4528ca36265bbe77\") " pod="kube-system/kube-apiserver-localhost"
May 15 12:13:35.363905 kubelet[2349]: I0515 12:13:35.363761 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f65f54a7c45fde3f4528ca36265bbe77-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f65f54a7c45fde3f4528ca36265bbe77\") " pod="kube-system/kube-apiserver-localhost"
May 15 12:13:35.365806 kubelet[2349]: I0515 12:13:35.365788 2349 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 12:13:35.366187 kubelet[2349]: E0515 12:13:35.366135 2349 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
May 15 12:13:35.384505 kubelet[2349]: E0515 12:13:35.384455 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 12:13:35.387281 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 15 12:13:35.389556 kubelet[2349]: E0515 12:13:35.389502 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 12:13:35.485818 kubelet[2349]: W0515 12:13:35.485749 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:35.485818 kubelet[2349]: E0515 12:13:35.485811 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:35.657805 kubelet[2349]: E0515 12:13:35.657727 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:35.658553 containerd[1575]: time="2025-05-15T12:13:35.658502429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f65f54a7c45fde3f4528ca36265bbe77,Namespace:kube-system,Attempt:0,}"
May 15 12:13:35.685441 kubelet[2349]: E0515 12:13:35.685382 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:35.686037 containerd[1575]: time="2025-05-15T12:13:35.685996209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 15 12:13:35.690370 kubelet[2349]: E0515 12:13:35.690332 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:35.690914 containerd[1575]: time="2025-05-15T12:13:35.690861105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 15 12:13:35.767819 kubelet[2349]: I0515 12:13:35.767764 2349 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 12:13:35.768161 kubelet[2349]: E0515 12:13:35.768118 2349 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
May 15 12:13:35.861110 kubelet[2349]: W0515 12:13:35.860994 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:35.861110 kubelet[2349]: E0515 12:13:35.861107 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:35.903397 kubelet[2349]: W0515 12:13:35.903263 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:35.903397 kubelet[2349]: E0515 12:13:35.903349 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:35.932353 kubelet[2349]: W0515 12:13:35.932120 2349 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
May 15 12:13:35.932353 kubelet[2349]: E0515 12:13:35.932207 2349 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError"
May 15 12:13:35.964157 kubelet[2349]: E0515 12:13:35.964092 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s"
May 15 12:13:36.011424 containerd[1575]: time="2025-05-15T12:13:36.011354278Z" level=info msg="connecting to shim 5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f" address="unix:///run/containerd/s/a645aed6e555ed31decf668d3334252b53636e8689f1d183fcebc6d4a277888e" namespace=k8s.io protocol=ttrpc version=3
May 15 12:13:36.016508 containerd[1575]: time="2025-05-15T12:13:36.016299524Z" level=info msg="connecting to shim 657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296" address="unix:///run/containerd/s/59f292c69b087a072cae695497c624b10f1ade9d6a04c7e0302428c2d0515ce1" namespace=k8s.io protocol=ttrpc version=3
May 15 12:13:36.041006 containerd[1575]: time="2025-05-15T12:13:36.040895239Z" level=info msg="connecting to shim c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25" address="unix:///run/containerd/s/2e32bdcb5ef1c867e7dca5857d0b41cc540a352ae78c1887ad4a30722fff37af" namespace=k8s.io protocol=ttrpc version=3
May 15 12:13:36.066883 systemd[1]: Started cri-containerd-657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296.scope - libcontainer container 657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296.
May 15 12:13:36.120935 systemd[1]: Started cri-containerd-5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f.scope - libcontainer container 5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f.
May 15 12:13:36.127668 systemd[1]: Started cri-containerd-c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25.scope - libcontainer container c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25.
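Each sandbox above gets a dedicated shim with a ttrpc socket under /run/containerd/s/, and systemd then runs it under a transient cri-containerd-<id>.scope unit. A sketch (illustrative only) of relating the two from the "connecting to shim" message bodies logged above:

```python
import re

# "connecting to shim" message bodies from the containerd entries above
lines = [
    'connecting to shim 657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296 '
    'address="unix:///run/containerd/s/59f292c69b087a072cae695497c624b10f1ade9d6a04c7e0302428c2d0515ce1"',
    'connecting to shim 5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f '
    'address="unix:///run/containerd/s/a645aed6e555ed31decf668d3334252b53636e8689f1d183fcebc6d4a277888e"',
]

def scope_unit(sandbox_id: str) -> str:
    # transient scope name systemd reports in "Started cri-containerd-….scope"
    return f"cri-containerd-{sandbox_id}.scope"

# map each sandbox's scope unit to its shim socket address
shims = {}
for line in lines:
    m = re.search(r'connecting to shim ([0-9a-f]+) address="(unix://[^"]+)"', line)
    shims[scope_unit(m.group(1))] = m.group(2)

print(len(shims))  # 2
```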
May 15 12:13:36.187474 containerd[1575]: time="2025-05-15T12:13:36.186948493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296\"" May 15 12:13:36.188742 kubelet[2349]: E0515 12:13:36.188699 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:36.192065 containerd[1575]: time="2025-05-15T12:13:36.192029283Z" level=info msg="CreateContainer within sandbox \"657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:13:36.192578 containerd[1575]: time="2025-05-15T12:13:36.192526295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f65f54a7c45fde3f4528ca36265bbe77,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f\"" May 15 12:13:36.193370 kubelet[2349]: E0515 12:13:36.193331 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:36.195133 containerd[1575]: time="2025-05-15T12:13:36.195092859Z" level=info msg="CreateContainer within sandbox \"5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:13:36.196968 containerd[1575]: time="2025-05-15T12:13:36.196912613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25\"" May 15 
12:13:36.197524 kubelet[2349]: E0515 12:13:36.197494 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:36.199186 containerd[1575]: time="2025-05-15T12:13:36.199029273Z" level=info msg="CreateContainer within sandbox \"c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:13:36.209451 containerd[1575]: time="2025-05-15T12:13:36.209400226Z" level=info msg="Container 5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e: CDI devices from CRI Config.CDIDevices: []" May 15 12:13:36.212177 containerd[1575]: time="2025-05-15T12:13:36.212122241Z" level=info msg="Container ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4: CDI devices from CRI Config.CDIDevices: []" May 15 12:13:36.216049 containerd[1575]: time="2025-05-15T12:13:36.216005565Z" level=info msg="Container 2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436: CDI devices from CRI Config.CDIDevices: []" May 15 12:13:36.228985 containerd[1575]: time="2025-05-15T12:13:36.228929637Z" level=info msg="CreateContainer within sandbox \"c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436\"" May 15 12:13:36.229707 containerd[1575]: time="2025-05-15T12:13:36.229663965Z" level=info msg="StartContainer for \"2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436\"" May 15 12:13:36.230723 containerd[1575]: time="2025-05-15T12:13:36.230688947Z" level=info msg="CreateContainer within sandbox \"5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e\"" May 15 12:13:36.230764 containerd[1575]: time="2025-05-15T12:13:36.230706120Z" level=info msg="connecting to shim 2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436" address="unix:///run/containerd/s/2e32bdcb5ef1c867e7dca5857d0b41cc540a352ae78c1887ad4a30722fff37af" protocol=ttrpc version=3 May 15 12:13:36.231143 containerd[1575]: time="2025-05-15T12:13:36.231088887Z" level=info msg="StartContainer for \"5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e\"" May 15 12:13:36.231998 containerd[1575]: time="2025-05-15T12:13:36.231969008Z" level=info msg="connecting to shim 5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e" address="unix:///run/containerd/s/a645aed6e555ed31decf668d3334252b53636e8689f1d183fcebc6d4a277888e" protocol=ttrpc version=3 May 15 12:13:36.234263 containerd[1575]: time="2025-05-15T12:13:36.234224790Z" level=info msg="CreateContainer within sandbox \"657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4\"" May 15 12:13:36.235708 containerd[1575]: time="2025-05-15T12:13:36.234536715Z" level=info msg="StartContainer for \"ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4\"" May 15 12:13:36.235708 containerd[1575]: time="2025-05-15T12:13:36.235496365Z" level=info msg="connecting to shim ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4" address="unix:///run/containerd/s/59f292c69b087a072cae695497c624b10f1ade9d6a04c7e0302428c2d0515ce1" protocol=ttrpc version=3 May 15 12:13:36.272891 systemd[1]: Started cri-containerd-5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e.scope - libcontainer container 5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e. 
May 15 12:13:36.284893 systemd[1]: Started cri-containerd-2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436.scope - libcontainer container 2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436. May 15 12:13:36.286818 systemd[1]: Started cri-containerd-ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4.scope - libcontainer container ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4. May 15 12:13:36.343576 containerd[1575]: time="2025-05-15T12:13:36.343479821Z" level=info msg="StartContainer for \"5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e\" returns successfully" May 15 12:13:36.353031 containerd[1575]: time="2025-05-15T12:13:36.352974189Z" level=info msg="StartContainer for \"2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436\" returns successfully" May 15 12:13:36.369690 containerd[1575]: time="2025-05-15T12:13:36.369585488Z" level=info msg="StartContainer for \"ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4\" returns successfully" May 15 12:13:36.570618 kubelet[2349]: I0515 12:13:36.570101 2349 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 12:13:36.625125 kubelet[2349]: E0515 12:13:36.625092 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 12:13:36.627665 kubelet[2349]: E0515 12:13:36.627196 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:36.628023 kubelet[2349]: E0515 12:13:36.627462 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 12:13:36.628181 kubelet[2349]: E0515 12:13:36.628163 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:36.630346 kubelet[2349]: E0515 12:13:36.630326 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 12:13:36.630509 kubelet[2349]: E0515 12:13:36.630473 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:37.631637 kubelet[2349]: E0515 12:13:37.631576 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 12:13:37.632062 kubelet[2349]: E0515 12:13:37.631755 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:37.632096 kubelet[2349]: E0515 12:13:37.632081 2349 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 12:13:37.632240 kubelet[2349]: E0515 12:13:37.632207 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:38.122934 kubelet[2349]: E0515 12:13:38.122890 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 12:13:38.336331 kubelet[2349]: I0515 12:13:38.336275 2349 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 12:13:38.336331 kubelet[2349]: E0515 12:13:38.336313 2349 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" May 15 12:13:38.341152 kubelet[2349]: E0515 12:13:38.341099 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:13:38.441881 kubelet[2349]: E0515 12:13:38.441717 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:13:38.542003 kubelet[2349]: E0515 12:13:38.541963 2349 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:13:38.550544 kubelet[2349]: I0515 12:13:38.550490 2349 apiserver.go:52] "Watching apiserver" May 15 12:13:38.560555 kubelet[2349]: I0515 12:13:38.560524 2349 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 12:13:38.560555 kubelet[2349]: I0515 12:13:38.560545 2349 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 12:13:38.569482 kubelet[2349]: E0515 12:13:38.569433 2349 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 12:13:38.569482 kubelet[2349]: I0515 12:13:38.569464 2349 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 12:13:38.571119 kubelet[2349]: E0515 12:13:38.571091 2349 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 12:13:38.571119 kubelet[2349]: I0515 12:13:38.571110 2349 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 12:13:38.573168 kubelet[2349]: E0515 12:13:38.573111 2349 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 12:13:39.244332 kubelet[2349]: I0515 12:13:39.244280 2349 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 12:13:39.249415 kubelet[2349]: E0515 12:13:39.249376 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:39.633941 kubelet[2349]: E0515 12:13:39.633904 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:40.882369 systemd[1]: Reload requested from client PID 2625 ('systemctl') (unit session-7.scope)... May 15 12:13:40.882389 systemd[1]: Reloading... May 15 12:13:40.981686 zram_generator::config[2671]: No configuration found. May 15 12:13:41.088346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:13:41.232250 systemd[1]: Reloading finished in 349 ms. May 15 12:13:41.268474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:13:41.284104 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:13:41.284489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:13:41.284556 systemd[1]: kubelet.service: Consumed 923ms CPU time, 126.1M memory peak. May 15 12:13:41.288933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:13:41.501868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 12:13:41.512063 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:13:41.602424 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:13:41.602424 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 12:13:41.602424 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:13:41.602881 kubelet[2713]: I0515 12:13:41.602508 2713 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:13:41.609217 kubelet[2713]: I0515 12:13:41.609171 2713 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 12:13:41.609217 kubelet[2713]: I0515 12:13:41.609203 2713 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:13:41.609539 kubelet[2713]: I0515 12:13:41.609513 2713 server.go:954] "Client rotation is on, will bootstrap in background" May 15 12:13:41.610694 kubelet[2713]: I0515 12:13:41.610664 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 15 12:13:41.613139 kubelet[2713]: I0515 12:13:41.613120 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:13:41.618830 kubelet[2713]: I0515 12:13:41.618782 2713 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:13:41.624740 kubelet[2713]: I0515 12:13:41.624708 2713 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 12:13:41.625151 kubelet[2713]: I0515 12:13:41.625099 2713 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:13:41.625332 kubelet[2713]: I0515 12:13:41.625137 2713 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:13:41.625332 kubelet[2713]: I0515 12:13:41.625327 2713 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:13:41.625537 kubelet[2713]: I0515 12:13:41.625338 2713 container_manager_linux.go:304] "Creating device plugin manager" May 15 12:13:41.625537 kubelet[2713]: I0515 12:13:41.625394 2713 state_mem.go:36] "Initialized new in-memory state store" May 15 12:13:41.625631 kubelet[2713]: I0515 12:13:41.625614 2713 kubelet.go:446] "Attempting to sync node with API server" May 15 12:13:41.625689 kubelet[2713]: I0515 12:13:41.625632 2713 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:13:41.625689 kubelet[2713]: I0515 12:13:41.625675 2713 kubelet.go:352] "Adding apiserver pod source" May 15 12:13:41.625754 kubelet[2713]: I0515 12:13:41.625690 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:13:41.629669 kubelet[2713]: I0515 12:13:41.627280 2713 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:13:41.629669 kubelet[2713]: I0515 12:13:41.628024 2713 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:13:41.629669 kubelet[2713]: I0515 12:13:41.629322 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 12:13:41.629669 kubelet[2713]: I0515 12:13:41.629377 2713 server.go:1287] "Started kubelet" May 15 12:13:41.632377 kubelet[2713]: I0515 12:13:41.630909 2713 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:13:41.632377 kubelet[2713]: I0515 12:13:41.630318 
2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:13:41.632377 kubelet[2713]: I0515 12:13:41.631496 2713 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:13:41.632560 kubelet[2713]: I0515 12:13:41.632460 2713 server.go:490] "Adding debug handlers to kubelet server" May 15 12:13:41.633653 kubelet[2713]: I0515 12:13:41.633604 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:13:41.634625 kubelet[2713]: I0515 12:13:41.634593 2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:13:41.638668 kubelet[2713]: I0515 12:13:41.637924 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 12:13:41.638668 kubelet[2713]: I0515 12:13:41.638057 2713 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:13:41.638668 kubelet[2713]: I0515 12:13:41.638191 2713 reconciler.go:26] "Reconciler: start to sync state" May 15 12:13:41.638668 kubelet[2713]: E0515 12:13:41.638450 2713 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 12:13:41.638668 kubelet[2713]: I0515 12:13:41.638623 2713 factory.go:221] Registration of the systemd container factory successfully May 15 12:13:41.639040 kubelet[2713]: E0515 12:13:41.639010 2713 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:13:41.639539 kubelet[2713]: I0515 12:13:41.639496 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:13:41.642621 kubelet[2713]: I0515 12:13:41.641173 2713 factory.go:221] Registration of the containerd container factory successfully May 15 12:13:41.661936 kubelet[2713]: I0515 12:13:41.661863 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:13:41.664957 kubelet[2713]: I0515 12:13:41.664938 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:13:41.665081 kubelet[2713]: I0515 12:13:41.665069 2713 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 12:13:41.665167 kubelet[2713]: I0515 12:13:41.665158 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 12:13:41.665218 kubelet[2713]: I0515 12:13:41.665208 2713 kubelet.go:2388] "Starting kubelet main sync loop" May 15 12:13:41.665359 kubelet[2713]: E0515 12:13:41.665340 2713 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695132 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695150 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695168 2713 state_mem.go:36] "Initialized new in-memory state store" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695325 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695351 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695398 2713 policy_none.go:49] "None policy: Start" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695409 2713 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 12:13:41.695719 kubelet[2713]: I0515 12:13:41.695420 2713 state_mem.go:35] "Initializing new in-memory state store" May 15 12:13:41.696695 kubelet[2713]: I0515 12:13:41.696637 2713 state_mem.go:75] "Updated machine memory state" May 15 12:13:41.704166 kubelet[2713]: I0515 12:13:41.704125 2713 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:13:41.704335 kubelet[2713]: I0515 12:13:41.704314 2713 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:13:41.704376 kubelet[2713]: I0515 12:13:41.704333 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:13:41.704567 kubelet[2713]: I0515 12:13:41.704546 2713 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:13:41.706087 kubelet[2713]: E0515 12:13:41.705992 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 12:13:41.767788 kubelet[2713]: I0515 12:13:41.766501 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 12:13:41.767788 kubelet[2713]: I0515 12:13:41.766578 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 12:13:41.768089 kubelet[2713]: I0515 12:13:41.768068 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 12:13:41.777269 kubelet[2713]: E0515 12:13:41.777227 2713 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 12:13:41.811392 kubelet[2713]: I0515 12:13:41.811341 2713 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 12:13:41.822895 kubelet[2713]: I0515 12:13:41.822847 2713 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 15 12:13:41.823067 kubelet[2713]: I0515 12:13:41.822949 2713 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 12:13:41.839569 kubelet[2713]: I0515 12:13:41.839531 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f65f54a7c45fde3f4528ca36265bbe77-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f65f54a7c45fde3f4528ca36265bbe77\") " pod="kube-system/kube-apiserver-localhost" May 15 12:13:41.839569 kubelet[2713]: I0515 12:13:41.839569 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:13:41.839757 kubelet[2713]: I0515 12:13:41.839596 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:13:41.839757 kubelet[2713]: I0515 12:13:41.839615 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:13:41.839757 kubelet[2713]: I0515 12:13:41.839634 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 12:13:41.839757 kubelet[2713]: I0515 12:13:41.839661 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f65f54a7c45fde3f4528ca36265bbe77-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f65f54a7c45fde3f4528ca36265bbe77\") " pod="kube-system/kube-apiserver-localhost" May 15 12:13:41.839757 kubelet[2713]: I0515 12:13:41.839716 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:13:41.839872 kubelet[2713]: I0515 12:13:41.839772 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f65f54a7c45fde3f4528ca36265bbe77-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f65f54a7c45fde3f4528ca36265bbe77\") " pod="kube-system/kube-apiserver-localhost" May 15 12:13:41.839872 kubelet[2713]: I0515 12:13:41.839796 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 12:13:42.077232 kubelet[2713]: E0515 12:13:42.077182 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:42.077340 kubelet[2713]: E0515 12:13:42.077256 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:42.077505 kubelet[2713]: E0515 12:13:42.077484 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:42.627145 kubelet[2713]: I0515 12:13:42.627094 2713 apiserver.go:52] "Watching apiserver" May 15 12:13:42.638823 kubelet[2713]: I0515 12:13:42.638752 2713 desired_state_of_world_populator.go:157] "Finished populating initial desired 
state of world" May 15 12:13:42.680880 kubelet[2713]: I0515 12:13:42.680083 2713 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 12:13:42.680880 kubelet[2713]: E0515 12:13:42.680300 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:42.680880 kubelet[2713]: E0515 12:13:42.680369 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:42.693678 kubelet[2713]: E0515 12:13:42.693408 2713 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 12:13:42.693866 kubelet[2713]: E0515 12:13:42.693841 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:42.772499 kubelet[2713]: I0515 12:13:42.772265 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7722414720000002 podStartE2EDuration="1.772241472s" podCreationTimestamp="2025-05-15 12:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:13:42.761112451 +0000 UTC m=+1.244604264" watchObservedRunningTime="2025-05-15 12:13:42.772241472 +0000 UTC m=+1.255733275" May 15 12:13:42.773510 kubelet[2713]: I0515 12:13:42.773354 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.773327974 podStartE2EDuration="3.773327974s" podCreationTimestamp="2025-05-15 12:13:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:13:42.772510467 +0000 UTC m=+1.256002260" watchObservedRunningTime="2025-05-15 12:13:42.773327974 +0000 UTC m=+1.256819787" May 15 12:13:42.787062 kubelet[2713]: I0515 12:13:42.786982 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.786957244 podStartE2EDuration="1.786957244s" podCreationTimestamp="2025-05-15 12:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:13:42.786400318 +0000 UTC m=+1.269892111" watchObservedRunningTime="2025-05-15 12:13:42.786957244 +0000 UTC m=+1.270449037" May 15 12:13:43.682055 kubelet[2713]: E0515 12:13:43.682018 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:43.682551 kubelet[2713]: E0515 12:13:43.682262 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:45.944735 kubelet[2713]: I0515 12:13:45.944685 2713 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:13:45.945218 containerd[1575]: time="2025-05-15T12:13:45.945113848Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 12:13:45.945521 kubelet[2713]: I0515 12:13:45.945292 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 12:13:47.037081 systemd[1]: Created slice kubepods-besteffort-poded770a30_b443_4a06_a7eb_d179306f57be.slice - libcontainer container kubepods-besteffort-poded770a30_b443_4a06_a7eb_d179306f57be.slice.
May 15 12:13:47.074871 kubelet[2713]: I0515 12:13:47.074823 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed770a30-b443-4a06-a7eb-d179306f57be-xtables-lock\") pod \"kube-proxy-bxkgn\" (UID: \"ed770a30-b443-4a06-a7eb-d179306f57be\") " pod="kube-system/kube-proxy-bxkgn"
May 15 12:13:47.074871 kubelet[2713]: I0515 12:13:47.074869 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed770a30-b443-4a06-a7eb-d179306f57be-lib-modules\") pod \"kube-proxy-bxkgn\" (UID: \"ed770a30-b443-4a06-a7eb-d179306f57be\") " pod="kube-system/kube-proxy-bxkgn"
May 15 12:13:47.075361 kubelet[2713]: I0515 12:13:47.074900 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wftxx\" (UniqueName: \"kubernetes.io/projected/ed770a30-b443-4a06-a7eb-d179306f57be-kube-api-access-wftxx\") pod \"kube-proxy-bxkgn\" (UID: \"ed770a30-b443-4a06-a7eb-d179306f57be\") " pod="kube-system/kube-proxy-bxkgn"
May 15 12:13:47.075361 kubelet[2713]: I0515 12:13:47.074924 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed770a30-b443-4a06-a7eb-d179306f57be-kube-proxy\") pod \"kube-proxy-bxkgn\" (UID: \"ed770a30-b443-4a06-a7eb-d179306f57be\") " pod="kube-system/kube-proxy-bxkgn"
May 15 12:13:47.204826 kubelet[2713]: E0515 12:13:47.204707 2713 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 12:13:47.204826 kubelet[2713]: E0515 12:13:47.204760 2713 projected.go:194] Error preparing data for projected volume kube-api-access-wftxx for pod kube-system/kube-proxy-bxkgn: configmap "kube-root-ca.crt" not found
May 15 12:13:47.205342 kubelet[2713]: E0515 12:13:47.205315 2713 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed770a30-b443-4a06-a7eb-d179306f57be-kube-api-access-wftxx podName:ed770a30-b443-4a06-a7eb-d179306f57be nodeName:}" failed. No retries permitted until 2025-05-15 12:13:47.70508017 +0000 UTC m=+6.188571963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wftxx" (UniqueName: "kubernetes.io/projected/ed770a30-b443-4a06-a7eb-d179306f57be-kube-api-access-wftxx") pod "kube-proxy-bxkgn" (UID: "ed770a30-b443-4a06-a7eb-d179306f57be") : configmap "kube-root-ca.crt" not found
May 15 12:13:47.277714 sudo[1787]: pam_unix(sudo:session): session closed for user root
May 15 12:13:47.279218 sshd[1786]: Connection closed by 10.0.0.1 port 41964
May 15 12:13:47.280055 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
May 15 12:13:47.284331 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:41964.service: Deactivated successfully.
May 15 12:13:47.286427 systemd[1]: session-7.scope: Deactivated successfully.
May 15 12:13:47.286668 systemd[1]: session-7.scope: Consumed 4.617s CPU time, 224.9M memory peak.
May 15 12:13:47.287859 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit.
May 15 12:13:47.289226 systemd-logind[1559]: Removed session 7.
May 15 12:13:47.423183 systemd[1]: Created slice kubepods-besteffort-pode2fd2879_3fef_4f45_998c_49f637c1f718.slice - libcontainer container kubepods-besteffort-pode2fd2879_3fef_4f45_998c_49f637c1f718.slice.
May 15 12:13:47.477334 kubelet[2713]: I0515 12:13:47.477263 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8kv\" (UniqueName: \"kubernetes.io/projected/e2fd2879-3fef-4f45-998c-49f637c1f718-kube-api-access-sw8kv\") pod \"tigera-operator-789496d6f5-2xg2q\" (UID: \"e2fd2879-3fef-4f45-998c-49f637c1f718\") " pod="tigera-operator/tigera-operator-789496d6f5-2xg2q"
May 15 12:13:47.477334 kubelet[2713]: I0515 12:13:47.477312 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2fd2879-3fef-4f45-998c-49f637c1f718-var-lib-calico\") pod \"tigera-operator-789496d6f5-2xg2q\" (UID: \"e2fd2879-3fef-4f45-998c-49f637c1f718\") " pod="tigera-operator/tigera-operator-789496d6f5-2xg2q"
May 15 12:13:47.726632 containerd[1575]: time="2025-05-15T12:13:47.726572255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-2xg2q,Uid:e2fd2879-3fef-4f45-998c-49f637c1f718,Namespace:tigera-operator,Attempt:0,}"
May 15 12:13:47.929625 kubelet[2713]: E0515 12:13:47.929584 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:47.949819 kubelet[2713]: E0515 12:13:47.949780 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:47.950383 containerd[1575]: time="2025-05-15T12:13:47.950333135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxkgn,Uid:ed770a30-b443-4a06-a7eb-d179306f57be,Namespace:kube-system,Attempt:0,}"
May 15 12:13:48.012746 containerd[1575]: time="2025-05-15T12:13:48.012547436Z" level=info msg="connecting to shim dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6" address="unix:///run/containerd/s/2c3c70f39f5343cf020c6d30a6ea6c95cf4569a9ddc0dfa9cc5754469c90ba74" namespace=k8s.io protocol=ttrpc version=3
May 15 12:13:48.013339 containerd[1575]: time="2025-05-15T12:13:48.013299418Z" level=info msg="connecting to shim 0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d" address="unix:///run/containerd/s/1199074412b401c2465694308bdda46f00c1cfb6d4dc4b404ecef9162f160a02" namespace=k8s.io protocol=ttrpc version=3
May 15 12:13:48.070986 systemd[1]: Started cri-containerd-0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d.scope - libcontainer container 0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d.
May 15 12:13:48.073018 systemd[1]: Started cri-containerd-dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6.scope - libcontainer container dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6.
May 15 12:13:48.144040 containerd[1575]: time="2025-05-15T12:13:48.143982721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxkgn,Uid:ed770a30-b443-4a06-a7eb-d179306f57be,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6\""
May 15 12:13:48.152054 kubelet[2713]: E0515 12:13:48.144876 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:48.153359 containerd[1575]: time="2025-05-15T12:13:48.153321181Z" level=info msg="CreateContainer within sandbox \"dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 12:13:48.161849 containerd[1575]: time="2025-05-15T12:13:48.161783284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-2xg2q,Uid:e2fd2879-3fef-4f45-998c-49f637c1f718,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d\""
May 15 12:13:48.164175 containerd[1575]: time="2025-05-15T12:13:48.164144397Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 15 12:13:48.184433 containerd[1575]: time="2025-05-15T12:13:48.184362611Z" level=info msg="Container ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2: CDI devices from CRI Config.CDIDevices: []"
May 15 12:13:48.193532 containerd[1575]: time="2025-05-15T12:13:48.193461937Z" level=info msg="CreateContainer within sandbox \"dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2\""
May 15 12:13:48.194214 containerd[1575]: time="2025-05-15T12:13:48.194157571Z" level=info msg="StartContainer for \"ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2\""
May 15 12:13:48.195826 containerd[1575]: time="2025-05-15T12:13:48.195794685Z" level=info msg="connecting to shim ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2" address="unix:///run/containerd/s/2c3c70f39f5343cf020c6d30a6ea6c95cf4569a9ddc0dfa9cc5754469c90ba74" protocol=ttrpc version=3
May 15 12:13:48.222792 systemd[1]: Started cri-containerd-ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2.scope - libcontainer container ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2.
May 15 12:13:48.270919 containerd[1575]: time="2025-05-15T12:13:48.270807110Z" level=info msg="StartContainer for \"ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2\" returns successfully"
May 15 12:13:48.693436 kubelet[2713]: E0515 12:13:48.693400 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:48.695161 kubelet[2713]: E0515 12:13:48.695069 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:48.705057 kubelet[2713]: I0515 12:13:48.704991 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bxkgn" podStartSLOduration=2.704972802 podStartE2EDuration="2.704972802s" podCreationTimestamp="2025-05-15 12:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:13:48.704708519 +0000 UTC m=+7.188200332" watchObservedRunningTime="2025-05-15 12:13:48.704972802 +0000 UTC m=+7.188464585"
May 15 12:13:49.696781 kubelet[2713]: E0515 12:13:49.696743 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:49.908134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174608479.mount: Deactivated successfully.
May 15 12:13:50.328946 containerd[1575]: time="2025-05-15T12:13:50.328866351Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:13:50.342831 containerd[1575]: time="2025-05-15T12:13:50.342758591Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 15 12:13:50.362402 containerd[1575]: time="2025-05-15T12:13:50.362338477Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:13:50.378850 containerd[1575]: time="2025-05-15T12:13:50.378794697Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:13:50.379272 containerd[1575]: time="2025-05-15T12:13:50.379212392Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.215031063s"
May 15 12:13:50.379272 containerd[1575]: time="2025-05-15T12:13:50.379257397Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 15 12:13:50.381737 containerd[1575]: time="2025-05-15T12:13:50.381684107Z" level=info msg="CreateContainer within sandbox \"0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 15 12:13:50.565195 containerd[1575]: time="2025-05-15T12:13:50.565138646Z" level=info msg="Container 033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37: CDI devices from CRI Config.CDIDevices: []"
May 15 12:13:50.736677 containerd[1575]: time="2025-05-15T12:13:50.736482829Z" level=info msg="CreateContainer within sandbox \"0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37\""
May 15 12:13:50.737556 containerd[1575]: time="2025-05-15T12:13:50.737422103Z" level=info msg="StartContainer for \"033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37\""
May 15 12:13:50.738587 containerd[1575]: time="2025-05-15T12:13:50.738560825Z" level=info msg="connecting to shim 033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37" address="unix:///run/containerd/s/1199074412b401c2465694308bdda46f00c1cfb6d4dc4b404ecef9162f160a02" protocol=ttrpc version=3
May 15 12:13:50.762892 systemd[1]: Started cri-containerd-033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37.scope - libcontainer container 033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37.
May 15 12:13:50.846460 containerd[1575]: time="2025-05-15T12:13:50.846408258Z" level=info msg="StartContainer for \"033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37\" returns successfully"
May 15 12:13:51.714288 kubelet[2713]: I0515 12:13:51.714225 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-2xg2q" podStartSLOduration=2.497497695 podStartE2EDuration="4.714137797s" podCreationTimestamp="2025-05-15 12:13:47 +0000 UTC" firstStartedPulling="2025-05-15 12:13:48.163628225 +0000 UTC m=+6.647120018" lastFinishedPulling="2025-05-15 12:13:50.380268327 +0000 UTC m=+8.863760120" observedRunningTime="2025-05-15 12:13:51.714093303 +0000 UTC m=+10.197585116" watchObservedRunningTime="2025-05-15 12:13:51.714137797 +0000 UTC m=+10.197629590"
May 15 12:13:51.852899 kubelet[2713]: E0515 12:13:51.852785 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:51.993342 kubelet[2713]: E0515 12:13:51.993149 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:52.704845 kubelet[2713]: E0515 12:13:52.704771 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:52.705088 kubelet[2713]: E0515 12:13:52.704961 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:53.090828 update_engine[1562]: I20250515 12:13:53.090726 1562 update_attempter.cc:509] Updating boot flags...
May 15 12:13:53.803534 systemd[1]: Created slice kubepods-besteffort-pod20e1172a_ba7d_4f78_b033_af82e705bcfa.slice - libcontainer container kubepods-besteffort-pod20e1172a_ba7d_4f78_b033_af82e705bcfa.slice.
May 15 12:13:53.816202 kubelet[2713]: I0515 12:13:53.816059 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/20e1172a-ba7d-4f78-b033-af82e705bcfa-typha-certs\") pod \"calico-typha-8545565456-hxhbr\" (UID: \"20e1172a-ba7d-4f78-b033-af82e705bcfa\") " pod="calico-system/calico-typha-8545565456-hxhbr"
May 15 12:13:53.816202 kubelet[2713]: I0515 12:13:53.816142 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2k9c\" (UniqueName: \"kubernetes.io/projected/20e1172a-ba7d-4f78-b033-af82e705bcfa-kube-api-access-m2k9c\") pod \"calico-typha-8545565456-hxhbr\" (UID: \"20e1172a-ba7d-4f78-b033-af82e705bcfa\") " pod="calico-system/calico-typha-8545565456-hxhbr"
May 15 12:13:53.816202 kubelet[2713]: I0515 12:13:53.816171 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e1172a-ba7d-4f78-b033-af82e705bcfa-tigera-ca-bundle\") pod \"calico-typha-8545565456-hxhbr\" (UID: \"20e1172a-ba7d-4f78-b033-af82e705bcfa\") " pod="calico-system/calico-typha-8545565456-hxhbr"
May 15 12:13:53.890822 systemd[1]: Created slice kubepods-besteffort-pod18d3ab40_ac8a_49b7_91e9_d50d8f3e07a1.slice - libcontainer container kubepods-besteffort-pod18d3ab40_ac8a_49b7_91e9_d50d8f3e07a1.slice.
May 15 12:13:53.916851 kubelet[2713]: I0515 12:13:53.916809 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-xtables-lock\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917123 kubelet[2713]: I0515 12:13:53.917037 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-var-lib-calico\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917123 kubelet[2713]: I0515 12:13:53.917067 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-node-certs\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917271 kubelet[2713]: I0515 12:13:53.917139 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-lib-modules\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917271 kubelet[2713]: I0515 12:13:53.917187 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-cni-bin-dir\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917271 kubelet[2713]: I0515 12:13:53.917201 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-cni-net-dir\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917271 kubelet[2713]: I0515 12:13:53.917220 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-cni-log-dir\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917271 kubelet[2713]: I0515 12:13:53.917252 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-flexvol-driver-host\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917605 kubelet[2713]: I0515 12:13:53.917335 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-var-run-calico\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917605 kubelet[2713]: I0515 12:13:53.917361 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9nxq\" (UniqueName: \"kubernetes.io/projected/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-kube-api-access-k9nxq\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917605 kubelet[2713]: I0515 12:13:53.917382 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-tigera-ca-bundle\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:53.917605 kubelet[2713]: I0515 12:13:53.917414 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1-policysync\") pod \"calico-node-6x577\" (UID: \"18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1\") " pod="calico-system/calico-node-6x577"
May 15 12:13:54.011753 kubelet[2713]: E0515 12:13:54.011182 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904"
May 15 12:13:54.020137 kubelet[2713]: E0515 12:13:54.019622 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.020137 kubelet[2713]: W0515 12:13:54.019775 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.020137 kubelet[2713]: E0515 12:13:54.019810 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.020721 kubelet[2713]: E0515 12:13:54.020701 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.020721 kubelet[2713]: W0515 12:13:54.020716 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.020826 kubelet[2713]: E0515 12:13:54.020736 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.021581 kubelet[2713]: E0515 12:13:54.021554 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.021581 kubelet[2713]: W0515 12:13:54.021570 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.021713 kubelet[2713]: E0515 12:13:54.021587 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.022508 kubelet[2713]: E0515 12:13:54.021974 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.022508 kubelet[2713]: W0515 12:13:54.022003 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.022508 kubelet[2713]: E0515 12:13:54.022074 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.022508 kubelet[2713]: E0515 12:13:54.022478 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.022508 kubelet[2713]: W0515 12:13:54.022487 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.022697 kubelet[2713]: E0515 12:13:54.022540 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.023116 kubelet[2713]: E0515 12:13:54.023066 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.023116 kubelet[2713]: W0515 12:13:54.023091 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.023371 kubelet[2713]: E0515 12:13:54.023340 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.023742 kubelet[2713]: E0515 12:13:54.023710 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.023742 kubelet[2713]: W0515 12:13:54.023728 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.024196 kubelet[2713]: E0515 12:13:54.024085 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.024805 kubelet[2713]: E0515 12:13:54.024786 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.024890 kubelet[2713]: W0515 12:13:54.024874 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.025109 kubelet[2713]: E0515 12:13:54.025046 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.025481 kubelet[2713]: E0515 12:13:54.025440 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.025481 kubelet[2713]: W0515 12:13:54.025464 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.026134 kubelet[2713]: E0515 12:13:54.026071 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.026345 kubelet[2713]: E0515 12:13:54.026330 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.026482 kubelet[2713]: W0515 12:13:54.026400 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.027717 kubelet[2713]: E0515 12:13:54.027173 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.028411 kubelet[2713]: E0515 12:13:54.028365 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.028411 kubelet[2713]: W0515 12:13:54.028383 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.028658 kubelet[2713]: E0515 12:13:54.028616 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.028993 kubelet[2713]: E0515 12:13:54.028964 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.028993 kubelet[2713]: W0515 12:13:54.028977 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.029341 kubelet[2713]: E0515 12:13:54.029277 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.029457 kubelet[2713]: E0515 12:13:54.029439 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.029606 kubelet[2713]: W0515 12:13:54.029590 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.029733 kubelet[2713]: E0515 12:13:54.029674 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.030441 kubelet[2713]: E0515 12:13:54.030428 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.030514 kubelet[2713]: W0515 12:13:54.030504 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.030577 kubelet[2713]: E0515 12:13:54.030567 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.031118 kubelet[2713]: E0515 12:13:54.030940 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.031196 kubelet[2713]: W0515 12:13:54.031183 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.031261 kubelet[2713]: E0515 12:13:54.031251 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.032165 kubelet[2713]: E0515 12:13:54.032140 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.032165 kubelet[2713]: W0515 12:13:54.032152 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.032308 kubelet[2713]: E0515 12:13:54.032244 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.032932 kubelet[2713]: E0515 12:13:54.032912 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.033000 kubelet[2713]: W0515 12:13:54.032989 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.033048 kubelet[2713]: E0515 12:13:54.033037 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.040957 kubelet[2713]: E0515 12:13:54.040860 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.040957 kubelet[2713]: W0515 12:13:54.040884 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.040957 kubelet[2713]: E0515 12:13:54.040910 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:13:54.109026 kubelet[2713]: E0515 12:13:54.108666 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:13:54.109828 containerd[1575]: time="2025-05-15T12:13:54.109535842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8545565456-hxhbr,Uid:20e1172a-ba7d-4f78-b033-af82e705bcfa,Namespace:calico-system,Attempt:0,}"
May 15 12:13:54.110767 kubelet[2713]: E0515 12:13:54.110733 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:13:54.110767 kubelet[2713]: W0515 12:13:54.110753 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:13:54.110767 kubelet[2713]: E0515 12:13:54.110775 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 15 12:13:54.111169 kubelet[2713]: E0515 12:13:54.111136 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.111568 kubelet[2713]: W0515 12:13:54.111543 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.111568 kubelet[2713]: E0515 12:13:54.111566 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.112058 kubelet[2713]: E0515 12:13:54.111997 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.112058 kubelet[2713]: W0515 12:13:54.112015 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.112147 kubelet[2713]: E0515 12:13:54.112120 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.112522 kubelet[2713]: E0515 12:13:54.112410 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.112522 kubelet[2713]: W0515 12:13:54.112421 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.112522 kubelet[2713]: E0515 12:13:54.112435 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.112695 kubelet[2713]: E0515 12:13:54.112660 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.112695 kubelet[2713]: W0515 12:13:54.112677 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.112695 kubelet[2713]: E0515 12:13:54.112687 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.112876 kubelet[2713]: E0515 12:13:54.112856 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.112876 kubelet[2713]: W0515 12:13:54.112871 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.112947 kubelet[2713]: E0515 12:13:54.112881 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.113347 kubelet[2713]: E0515 12:13:54.113123 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.113347 kubelet[2713]: W0515 12:13:54.113153 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.113347 kubelet[2713]: E0515 12:13:54.113184 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.113627 kubelet[2713]: E0515 12:13:54.113572 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.113851 kubelet[2713]: W0515 12:13:54.113683 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.113851 kubelet[2713]: E0515 12:13:54.113698 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.114464 kubelet[2713]: E0515 12:13:54.113954 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.114564 kubelet[2713]: W0515 12:13:54.114524 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.114564 kubelet[2713]: E0515 12:13:54.114547 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.114924 kubelet[2713]: E0515 12:13:54.114893 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.114924 kubelet[2713]: W0515 12:13:54.114908 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.114924 kubelet[2713]: E0515 12:13:54.114921 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.115240 kubelet[2713]: E0515 12:13:54.115209 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.115240 kubelet[2713]: W0515 12:13:54.115223 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.115240 kubelet[2713]: E0515 12:13:54.115234 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.115527 kubelet[2713]: E0515 12:13:54.115498 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.115527 kubelet[2713]: W0515 12:13:54.115517 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.115527 kubelet[2713]: E0515 12:13:54.115529 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.116297 kubelet[2713]: E0515 12:13:54.116265 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.116297 kubelet[2713]: W0515 12:13:54.116293 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.116416 kubelet[2713]: E0515 12:13:54.116320 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.116605 kubelet[2713]: E0515 12:13:54.116565 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.116605 kubelet[2713]: W0515 12:13:54.116578 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.116605 kubelet[2713]: E0515 12:13:54.116586 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.117146 kubelet[2713]: E0515 12:13:54.116816 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.117146 kubelet[2713]: W0515 12:13:54.116824 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.117146 kubelet[2713]: E0515 12:13:54.116836 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.117146 kubelet[2713]: E0515 12:13:54.117011 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.117146 kubelet[2713]: W0515 12:13:54.117018 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.117146 kubelet[2713]: E0515 12:13:54.117025 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.117335 kubelet[2713]: E0515 12:13:54.117196 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.117335 kubelet[2713]: W0515 12:13:54.117203 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.117335 kubelet[2713]: E0515 12:13:54.117210 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.117426 kubelet[2713]: E0515 12:13:54.117356 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.117426 kubelet[2713]: W0515 12:13:54.117363 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.117426 kubelet[2713]: E0515 12:13:54.117371 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.117571 kubelet[2713]: E0515 12:13:54.117550 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.117571 kubelet[2713]: W0515 12:13:54.117565 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.117571 kubelet[2713]: E0515 12:13:54.117578 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.118072 kubelet[2713]: E0515 12:13:54.117959 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.118072 kubelet[2713]: W0515 12:13:54.117971 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.118072 kubelet[2713]: E0515 12:13:54.117982 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.118335 kubelet[2713]: E0515 12:13:54.118315 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.118384 kubelet[2713]: W0515 12:13:54.118359 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.118384 kubelet[2713]: E0515 12:13:54.118372 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.118464 kubelet[2713]: I0515 12:13:54.118399 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/90d37edc-6839-41f5-b163-f0eaf370a904-varrun\") pod \"csi-node-driver-c7xqh\" (UID: \"90d37edc-6839-41f5-b163-f0eaf370a904\") " pod="calico-system/csi-node-driver-c7xqh" May 15 12:13:54.118599 kubelet[2713]: E0515 12:13:54.118579 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.118599 kubelet[2713]: W0515 12:13:54.118591 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.118726 kubelet[2713]: E0515 12:13:54.118604 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.118726 kubelet[2713]: I0515 12:13:54.118618 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90d37edc-6839-41f5-b163-f0eaf370a904-registration-dir\") pod \"csi-node-driver-c7xqh\" (UID: \"90d37edc-6839-41f5-b163-f0eaf370a904\") " pod="calico-system/csi-node-driver-c7xqh" May 15 12:13:54.118879 kubelet[2713]: E0515 12:13:54.118861 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.118879 kubelet[2713]: W0515 12:13:54.118872 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.118957 kubelet[2713]: E0515 12:13:54.118887 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.119058 kubelet[2713]: E0515 12:13:54.119043 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.119058 kubelet[2713]: W0515 12:13:54.119053 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.119123 kubelet[2713]: E0515 12:13:54.119064 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.119311 kubelet[2713]: E0515 12:13:54.119292 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.119311 kubelet[2713]: W0515 12:13:54.119305 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.119476 kubelet[2713]: E0515 12:13:54.119317 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.119476 kubelet[2713]: I0515 12:13:54.119333 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt528\" (UniqueName: \"kubernetes.io/projected/90d37edc-6839-41f5-b163-f0eaf370a904-kube-api-access-tt528\") pod \"csi-node-driver-c7xqh\" (UID: \"90d37edc-6839-41f5-b163-f0eaf370a904\") " pod="calico-system/csi-node-driver-c7xqh" May 15 12:13:54.119607 kubelet[2713]: E0515 12:13:54.119588 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.119607 kubelet[2713]: W0515 12:13:54.119600 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.119706 kubelet[2713]: E0515 12:13:54.119612 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.119706 kubelet[2713]: I0515 12:13:54.119626 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90d37edc-6839-41f5-b163-f0eaf370a904-socket-dir\") pod \"csi-node-driver-c7xqh\" (UID: \"90d37edc-6839-41f5-b163-f0eaf370a904\") " pod="calico-system/csi-node-driver-c7xqh" May 15 12:13:54.119883 kubelet[2713]: E0515 12:13:54.119859 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.119883 kubelet[2713]: W0515 12:13:54.119876 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.119996 kubelet[2713]: E0515 12:13:54.119961 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.119996 kubelet[2713]: I0515 12:13:54.119990 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90d37edc-6839-41f5-b163-f0eaf370a904-kubelet-dir\") pod \"csi-node-driver-c7xqh\" (UID: \"90d37edc-6839-41f5-b163-f0eaf370a904\") " pod="calico-system/csi-node-driver-c7xqh" May 15 12:13:54.120123 kubelet[2713]: E0515 12:13:54.120064 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.120123 kubelet[2713]: W0515 12:13:54.120075 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.120241 kubelet[2713]: E0515 12:13:54.120125 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.120331 kubelet[2713]: E0515 12:13:54.120313 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.120331 kubelet[2713]: W0515 12:13:54.120326 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.120396 kubelet[2713]: E0515 12:13:54.120344 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.120589 kubelet[2713]: E0515 12:13:54.120568 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.120589 kubelet[2713]: W0515 12:13:54.120583 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.120731 kubelet[2713]: E0515 12:13:54.120607 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.120858 kubelet[2713]: E0515 12:13:54.120840 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.120858 kubelet[2713]: W0515 12:13:54.120852 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.120937 kubelet[2713]: E0515 12:13:54.120875 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.121066 kubelet[2713]: E0515 12:13:54.121046 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.121066 kubelet[2713]: W0515 12:13:54.121057 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.121066 kubelet[2713]: E0515 12:13:54.121067 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.121349 kubelet[2713]: E0515 12:13:54.121333 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.121349 kubelet[2713]: W0515 12:13:54.121344 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.121398 kubelet[2713]: E0515 12:13:54.121354 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.121977 kubelet[2713]: E0515 12:13:54.121615 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.121977 kubelet[2713]: W0515 12:13:54.121712 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.121977 kubelet[2713]: E0515 12:13:54.121727 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.121977 kubelet[2713]: E0515 12:13:54.121961 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.122137 kubelet[2713]: W0515 12:13:54.121986 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.122137 kubelet[2713]: E0515 12:13:54.122010 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.140616 containerd[1575]: time="2025-05-15T12:13:54.140558636Z" level=info msg="connecting to shim 62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577" address="unix:///run/containerd/s/3b3bbc899672287a2a366ecca84d5d1f8e56507bfd3d03a1f12c2b34cbcab444" namespace=k8s.io protocol=ttrpc version=3 May 15 12:13:54.172897 systemd[1]: Started cri-containerd-62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577.scope - libcontainer container 62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577. 
May 15 12:13:54.196147 kubelet[2713]: E0515 12:13:54.196059 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:54.197205 containerd[1575]: time="2025-05-15T12:13:54.197158166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6x577,Uid:18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1,Namespace:calico-system,Attempt:0,}" May 15 12:13:54.221414 kubelet[2713]: E0515 12:13:54.221296 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.221414 kubelet[2713]: W0515 12:13:54.221325 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.221414 kubelet[2713]: E0515 12:13:54.221376 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.221901 kubelet[2713]: E0515 12:13:54.221824 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.221901 kubelet[2713]: W0515 12:13:54.221834 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.221901 kubelet[2713]: E0515 12:13:54.221860 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.222417 kubelet[2713]: E0515 12:13:54.222372 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.222417 kubelet[2713]: W0515 12:13:54.222389 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.222417 kubelet[2713]: E0515 12:13:54.222410 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:13:54.222742 kubelet[2713]: E0515 12:13:54.222720 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:13:54.222742 kubelet[2713]: W0515 12:13:54.222735 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:13:54.222742 kubelet[2713]: E0515 12:13:54.222754 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:13:54.277174 containerd[1575]: time="2025-05-15T12:13:54.277113538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8545565456-hxhbr,Uid:20e1172a-ba7d-4f78-b033-af82e705bcfa,Namespace:calico-system,Attempt:0,} returns sandbox id \"62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577\"" May 15 12:13:54.278824 kubelet[2713]: E0515 12:13:54.278800 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:54.279603 containerd[1575]: time="2025-05-15T12:13:54.279569539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 12:13:54.288788 containerd[1575]: time="2025-05-15T12:13:54.288726996Z" level=info msg="connecting to shim 00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e" address="unix:///run/containerd/s/df0fd79993785c93a2d99eec73b7b976c5af9124a0c16b9e5ab681c427e5a018" namespace=k8s.io protocol=ttrpc version=3 May 15 12:13:54.322974 systemd[1]: Started cri-containerd-00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e.scope - libcontainer container 00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e. 
May 15 12:13:54.356975 containerd[1575]: time="2025-05-15T12:13:54.356907173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6x577,Uid:18d3ab40-ac8a-49b7-91e9-d50d8f3e07a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\"" May 15 12:13:54.357753 kubelet[2713]: E0515 12:13:54.357704 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:13:55.668219 kubelet[2713]: E0515 12:13:55.668154 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:13:57.666208 kubelet[2713]: E0515 12:13:57.666137 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:13:58.880852 containerd[1575]: time="2025-05-15T12:13:58.880787899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:58.899961 containerd[1575]: time="2025-05-15T12:13:58.899892606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 12:13:59.024604 containerd[1575]: time="2025-05-15T12:13:59.024520461Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 
12:13:59.166063 containerd[1575]: time="2025-05-15T12:13:59.165869824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:13:59.166576 containerd[1575]: time="2025-05-15T12:13:59.166377023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 4.886774721s" May 15 12:13:59.166576 containerd[1575]: time="2025-05-15T12:13:59.166429182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 12:13:59.168211 containerd[1575]: time="2025-05-15T12:13:59.168172985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 12:13:59.269535 containerd[1575]: time="2025-05-15T12:13:59.269482840Z" level=info msg="CreateContainer within sandbox \"62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 12:13:59.666510 kubelet[2713]: E0515 12:13:59.666441 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:13:59.966055 containerd[1575]: time="2025-05-15T12:13:59.965944868Z" level=info msg="Container 86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938: CDI devices from CRI Config.CDIDevices: []" May 15 
12:14:00.503396 containerd[1575]: time="2025-05-15T12:14:00.503338854Z" level=info msg="CreateContainer within sandbox \"62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938\"" May 15 12:14:00.503833 containerd[1575]: time="2025-05-15T12:14:00.503809403Z" level=info msg="StartContainer for \"86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938\"" May 15 12:14:00.505218 containerd[1575]: time="2025-05-15T12:14:00.505181984Z" level=info msg="connecting to shim 86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938" address="unix:///run/containerd/s/3b3bbc899672287a2a366ecca84d5d1f8e56507bfd3d03a1f12c2b34cbcab444" protocol=ttrpc version=3 May 15 12:14:00.530925 systemd[1]: Started cri-containerd-86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938.scope - libcontainer container 86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938. 
May 15 12:14:00.791723 containerd[1575]: time="2025-05-15T12:14:00.791585870Z" level=info msg="StartContainer for \"86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938\" returns successfully" May 15 12:14:01.669484 kubelet[2713]: E0515 12:14:01.669422 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:01.794570 kubelet[2713]: E0515 12:14:01.794521 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:01.867774 kubelet[2713]: E0515 12:14:01.867723 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.867774 kubelet[2713]: W0515 12:14:01.867759 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.867774 kubelet[2713]: E0515 12:14:01.867785 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.876949 kubelet[2713]: E0515 12:14:01.876928 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.876949 kubelet[2713]: W0515 12:14:01.876940 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.877018 kubelet[2713]: E0515 12:14:01.876954 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.877187 kubelet[2713]: E0515 12:14:01.877174 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.877187 kubelet[2713]: W0515 12:14:01.877185 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.877264 kubelet[2713]: E0515 12:14:01.877200 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.877486 kubelet[2713]: E0515 12:14:01.877471 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.877486 kubelet[2713]: W0515 12:14:01.877487 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.877572 kubelet[2713]: E0515 12:14:01.877505 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.877722 kubelet[2713]: E0515 12:14:01.877707 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.877722 kubelet[2713]: W0515 12:14:01.877719 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.877807 kubelet[2713]: E0515 12:14:01.877737 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.878238 kubelet[2713]: E0515 12:14:01.878189 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.878296 kubelet[2713]: W0515 12:14:01.878232 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.878296 kubelet[2713]: E0515 12:14:01.878271 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.878533 kubelet[2713]: E0515 12:14:01.878511 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.878533 kubelet[2713]: W0515 12:14:01.878525 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.878612 kubelet[2713]: E0515 12:14:01.878545 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.878804 kubelet[2713]: E0515 12:14:01.878787 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.878804 kubelet[2713]: W0515 12:14:01.878803 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.878881 kubelet[2713]: E0515 12:14:01.878821 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.879047 kubelet[2713]: E0515 12:14:01.879028 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.879047 kubelet[2713]: W0515 12:14:01.879041 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.879145 kubelet[2713]: E0515 12:14:01.879081 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.879299 kubelet[2713]: E0515 12:14:01.879279 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.879299 kubelet[2713]: W0515 12:14:01.879293 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.879382 kubelet[2713]: E0515 12:14:01.879310 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.879558 kubelet[2713]: E0515 12:14:01.879539 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.879558 kubelet[2713]: W0515 12:14:01.879553 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.879629 kubelet[2713]: E0515 12:14:01.879573 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.879856 kubelet[2713]: E0515 12:14:01.879836 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.879856 kubelet[2713]: W0515 12:14:01.879850 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.879934 kubelet[2713]: E0515 12:14:01.879867 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.880113 kubelet[2713]: E0515 12:14:01.880096 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.880113 kubelet[2713]: W0515 12:14:01.880109 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.880188 kubelet[2713]: E0515 12:14:01.880131 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:01.880343 kubelet[2713]: E0515 12:14:01.880323 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.880343 kubelet[2713]: W0515 12:14:01.880336 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.880343 kubelet[2713]: E0515 12:14:01.880346 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:01.880708 kubelet[2713]: E0515 12:14:01.880692 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:01.880708 kubelet[2713]: W0515 12:14:01.880705 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:01.880768 kubelet[2713]: E0515 12:14:01.880716 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:02.796018 kubelet[2713]: I0515 12:14:02.795967 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:14:02.796493 kubelet[2713]: E0515 12:14:02.796362 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:02.877290 kubelet[2713]: E0515 12:14:02.877251 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:02.877290 kubelet[2713]: W0515 12:14:02.877274 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:02.877290 kubelet[2713]: E0515 12:14:02.877296 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:02.877547 kubelet[2713]: E0515 12:14:02.877529 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:02.877547 kubelet[2713]: W0515 12:14:02.877537 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:02.877547 kubelet[2713]: E0515 12:14:02.877544 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:14:02.887369 kubelet[2713]: E0515 12:14:02.887351 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:14:02.887369 kubelet[2713]: W0515 12:14:02.887362 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:14:02.887369 kubelet[2713]: E0515 12:14:02.887369 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:14:03.666603 kubelet[2713]: E0515 12:14:03.666539 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:05.666111 kubelet[2713]: E0515 12:14:05.666048 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:07.439844 containerd[1575]: time="2025-05-15T12:14:07.439765550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:07.477595 containerd[1575]: time="2025-05-15T12:14:07.477536769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 12:14:07.506948 containerd[1575]: time="2025-05-15T12:14:07.506876176Z" 
level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:07.544024 containerd[1575]: time="2025-05-15T12:14:07.543950714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:07.544792 containerd[1575]: time="2025-05-15T12:14:07.544750701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 8.376540335s" May 15 12:14:07.544792 containerd[1575]: time="2025-05-15T12:14:07.544785006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 12:14:07.546859 containerd[1575]: time="2025-05-15T12:14:07.546800843Z" level=info msg="CreateContainer within sandbox \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 12:14:07.666747 kubelet[2713]: E0515 12:14:07.666682 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:07.763284 containerd[1575]: time="2025-05-15T12:14:07.763095187Z" level=info msg="Container 
ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:07.923417 containerd[1575]: time="2025-05-15T12:14:07.923349840Z" level=info msg="CreateContainer within sandbox \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\"" May 15 12:14:07.923916 containerd[1575]: time="2025-05-15T12:14:07.923887923Z" level=info msg="StartContainer for \"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\"" May 15 12:14:07.925533 containerd[1575]: time="2025-05-15T12:14:07.925498587Z" level=info msg="connecting to shim ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb" address="unix:///run/containerd/s/df0fd79993785c93a2d99eec73b7b976c5af9124a0c16b9e5ab681c427e5a018" protocol=ttrpc version=3 May 15 12:14:07.953864 systemd[1]: Started cri-containerd-ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb.scope - libcontainer container ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb. May 15 12:14:08.009109 systemd[1]: cri-containerd-ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb.scope: Deactivated successfully. 
May 15 12:14:08.011711 containerd[1575]: time="2025-05-15T12:14:08.011661256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\" id:\"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\" pid:3436 exited_at:{seconds:1747311248 nanos:11171183}" May 15 12:14:08.063281 containerd[1575]: time="2025-05-15T12:14:08.063125935Z" level=info msg="received exit event container_id:\"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\" id:\"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\" pid:3436 exited_at:{seconds:1747311248 nanos:11171183}" May 15 12:14:08.065603 containerd[1575]: time="2025-05-15T12:14:08.065355143Z" level=info msg="StartContainer for \"ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb\" returns successfully" May 15 12:14:08.087899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb-rootfs.mount: Deactivated successfully. 
May 15 12:14:08.808343 kubelet[2713]: E0515 12:14:08.808297 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:09.027654 kubelet[2713]: I0515 12:14:09.027554 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8545565456-hxhbr" podStartSLOduration=11.138927639 podStartE2EDuration="16.02753345s" podCreationTimestamp="2025-05-15 12:13:53 +0000 UTC" firstStartedPulling="2025-05-15 12:13:54.279272337 +0000 UTC m=+12.762764130" lastFinishedPulling="2025-05-15 12:13:59.167878138 +0000 UTC m=+17.651369941" observedRunningTime="2025-05-15 12:14:01.807616113 +0000 UTC m=+20.291107916" watchObservedRunningTime="2025-05-15 12:14:09.02753345 +0000 UTC m=+27.511025243" May 15 12:14:09.665968 kubelet[2713]: E0515 12:14:09.665888 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:09.809627 kubelet[2713]: E0515 12:14:09.809584 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:10.813573 kubelet[2713]: E0515 12:14:10.813525 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:10.814470 containerd[1575]: time="2025-05-15T12:14:10.814428753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 12:14:11.666715 kubelet[2713]: E0515 12:14:11.666629 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:13.666515 kubelet[2713]: E0515 12:14:13.666469 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:15.669006 kubelet[2713]: E0515 12:14:15.668956 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:15.868532 containerd[1575]: time="2025-05-15T12:14:15.868444038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:15.873187 containerd[1575]: time="2025-05-15T12:14:15.873054497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 12:14:15.875341 containerd[1575]: time="2025-05-15T12:14:15.875276074Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:15.878419 containerd[1575]: time="2025-05-15T12:14:15.878352069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:15.879132 containerd[1575]: 
time="2025-05-15T12:14:15.879043790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.064576354s" May 15 12:14:15.879132 containerd[1575]: time="2025-05-15T12:14:15.879088725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 12:14:15.883586 containerd[1575]: time="2025-05-15T12:14:15.883275909Z" level=info msg="CreateContainer within sandbox \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:14:15.900112 containerd[1575]: time="2025-05-15T12:14:15.900047898Z" level=info msg="Container 7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:15.917505 containerd[1575]: time="2025-05-15T12:14:15.917440494Z" level=info msg="CreateContainer within sandbox \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\"" May 15 12:14:15.918077 containerd[1575]: time="2025-05-15T12:14:15.918023932Z" level=info msg="StartContainer for \"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\"" May 15 12:14:15.919966 containerd[1575]: time="2025-05-15T12:14:15.919873098Z" level=info msg="connecting to shim 7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5" address="unix:///run/containerd/s/df0fd79993785c93a2d99eec73b7b976c5af9124a0c16b9e5ab681c427e5a018" protocol=ttrpc version=3 May 15 12:14:15.949905 systemd[1]: Started 
cri-containerd-7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5.scope - libcontainer container 7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5. May 15 12:14:16.001254 containerd[1575]: time="2025-05-15T12:14:16.001195234Z" level=info msg="StartContainer for \"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\" returns successfully" May 15 12:14:16.824955 kubelet[2713]: E0515 12:14:16.824916 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:16.829791 kubelet[2713]: I0515 12:14:16.829734 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:14:16.830105 kubelet[2713]: E0515 12:14:16.830013 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:17.499551 systemd[1]: cri-containerd-7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5.scope: Deactivated successfully. May 15 12:14:17.499914 systemd[1]: cri-containerd-7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5.scope: Consumed 575ms CPU time, 161M memory peak, 8K read from disk, 154M written to disk. 
May 15 12:14:17.500618 containerd[1575]: time="2025-05-15T12:14:17.500545358Z" level=info msg="received exit event container_id:\"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\" id:\"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\" pid:3494 exited_at:{seconds:1747311257 nanos:500260662}" May 15 12:14:17.501099 containerd[1575]: time="2025-05-15T12:14:17.500691211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\" id:\"7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5\" pid:3494 exited_at:{seconds:1747311257 nanos:500260662}" May 15 12:14:17.523952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5-rootfs.mount: Deactivated successfully. May 15 12:14:17.526600 kubelet[2713]: I0515 12:14:17.526572 2713 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 12:14:17.564538 systemd[1]: Created slice kubepods-burstable-podb38549eb_99b5_40a2_ae9b_05f5e9ea660b.slice - libcontainer container kubepods-burstable-podb38549eb_99b5_40a2_ae9b_05f5e9ea660b.slice. May 15 12:14:17.573975 systemd[1]: Created slice kubepods-burstable-podba385db9_9064_4a1c_b79f_e976220c4dbe.slice - libcontainer container kubepods-burstable-podba385db9_9064_4a1c_b79f_e976220c4dbe.slice. 
May 15 12:14:17.615752 kubelet[2713]: I0515 12:14:17.615703 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxnj\" (UniqueName: \"kubernetes.io/projected/1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b-kube-api-access-nsxnj\") pod \"calico-apiserver-69b46b7b54-86cls\" (UID: \"1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b\") " pod="calico-apiserver/calico-apiserver-69b46b7b54-86cls" May 15 12:14:17.615752 kubelet[2713]: I0515 12:14:17.615754 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b9407b4-3942-47d3-901c-4a0dfdbc7501-calico-apiserver-certs\") pod \"calico-apiserver-69b46b7b54-b8nhc\" (UID: \"9b9407b4-3942-47d3-901c-4a0dfdbc7501\") " pod="calico-apiserver/calico-apiserver-69b46b7b54-b8nhc" May 15 12:14:17.615922 kubelet[2713]: I0515 12:14:17.615781 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtnc\" (UniqueName: \"kubernetes.io/projected/b38549eb-99b5-40a2-ae9b-05f5e9ea660b-kube-api-access-lgtnc\") pod \"coredns-668d6bf9bc-q5qsh\" (UID: \"b38549eb-99b5-40a2-ae9b-05f5e9ea660b\") " pod="kube-system/coredns-668d6bf9bc-q5qsh" May 15 12:14:17.615922 kubelet[2713]: I0515 12:14:17.615822 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mps4s\" (UniqueName: \"kubernetes.io/projected/ba385db9-9064-4a1c-b79f-e976220c4dbe-kube-api-access-mps4s\") pod \"coredns-668d6bf9bc-qrkcw\" (UID: \"ba385db9-9064-4a1c-b79f-e976220c4dbe\") " pod="kube-system/coredns-668d6bf9bc-qrkcw" May 15 12:14:17.615922 kubelet[2713]: I0515 12:14:17.615849 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7-tigera-ca-bundle\") pod 
\"calico-kube-controllers-58bb49fdbb-gjpfj\" (UID: \"5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7\") " pod="calico-system/calico-kube-controllers-58bb49fdbb-gjpfj" May 15 12:14:17.615922 kubelet[2713]: I0515 12:14:17.615869 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhktl\" (UniqueName: \"kubernetes.io/projected/5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7-kube-api-access-zhktl\") pod \"calico-kube-controllers-58bb49fdbb-gjpfj\" (UID: \"5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7\") " pod="calico-system/calico-kube-controllers-58bb49fdbb-gjpfj" May 15 12:14:17.615922 kubelet[2713]: I0515 12:14:17.615897 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b38549eb-99b5-40a2-ae9b-05f5e9ea660b-config-volume\") pod \"coredns-668d6bf9bc-q5qsh\" (UID: \"b38549eb-99b5-40a2-ae9b-05f5e9ea660b\") " pod="kube-system/coredns-668d6bf9bc-q5qsh" May 15 12:14:17.616055 kubelet[2713]: I0515 12:14:17.615916 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmghm\" (UniqueName: \"kubernetes.io/projected/9b9407b4-3942-47d3-901c-4a0dfdbc7501-kube-api-access-xmghm\") pod \"calico-apiserver-69b46b7b54-b8nhc\" (UID: \"9b9407b4-3942-47d3-901c-4a0dfdbc7501\") " pod="calico-apiserver/calico-apiserver-69b46b7b54-b8nhc" May 15 12:14:17.616055 kubelet[2713]: I0515 12:14:17.615945 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba385db9-9064-4a1c-b79f-e976220c4dbe-config-volume\") pod \"coredns-668d6bf9bc-qrkcw\" (UID: \"ba385db9-9064-4a1c-b79f-e976220c4dbe\") " pod="kube-system/coredns-668d6bf9bc-qrkcw" May 15 12:14:17.616055 kubelet[2713]: I0515 12:14:17.615974 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b-calico-apiserver-certs\") pod \"calico-apiserver-69b46b7b54-86cls\" (UID: \"1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b\") " pod="calico-apiserver/calico-apiserver-69b46b7b54-86cls" May 15 12:14:17.777333 systemd[1]: Created slice kubepods-besteffort-pod5dfbf59c_651b_4f79_b78e_20aa6ddd7ab7.slice - libcontainer container kubepods-besteffort-pod5dfbf59c_651b_4f79_b78e_20aa6ddd7ab7.slice. May 15 12:14:17.783373 systemd[1]: Created slice kubepods-besteffort-pod1a7c59bf_e3ab_47c1_9889_f9a1f7738d4b.slice - libcontainer container kubepods-besteffort-pod1a7c59bf_e3ab_47c1_9889_f9a1f7738d4b.slice. May 15 12:14:17.789313 systemd[1]: Created slice kubepods-besteffort-pod9b9407b4_3942_47d3_901c_4a0dfdbc7501.slice - libcontainer container kubepods-besteffort-pod9b9407b4_3942_47d3_901c_4a0dfdbc7501.slice. May 15 12:14:17.795045 systemd[1]: Created slice kubepods-besteffort-pod90d37edc_6839_41f5_b163_f0eaf370a904.slice - libcontainer container kubepods-besteffort-pod90d37edc_6839_41f5_b163_f0eaf370a904.slice. 
May 15 12:14:17.801419 containerd[1575]: time="2025-05-15T12:14:17.800178168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bb49fdbb-gjpfj,Uid:5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7,Namespace:calico-system,Attempt:0,}" May 15 12:14:17.801419 containerd[1575]: time="2025-05-15T12:14:17.800235196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-86cls,Uid:1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b,Namespace:calico-apiserver,Attempt:0,}" May 15 12:14:17.801419 containerd[1575]: time="2025-05-15T12:14:17.800370700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-b8nhc,Uid:9b9407b4-3942-47d3-901c-4a0dfdbc7501,Namespace:calico-apiserver,Attempt:0,}" May 15 12:14:17.801419 containerd[1575]: time="2025-05-15T12:14:17.800393473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7xqh,Uid:90d37edc-6839-41f5-b163-f0eaf370a904,Namespace:calico-system,Attempt:0,}" May 15 12:14:17.826404 kubelet[2713]: E0515 12:14:17.826362 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:17.876841 kubelet[2713]: E0515 12:14:17.876770 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:17.877369 containerd[1575]: time="2025-05-15T12:14:17.877309998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5qsh,Uid:b38549eb-99b5-40a2-ae9b-05f5e9ea660b,Namespace:kube-system,Attempt:0,}" May 15 12:14:17.879190 kubelet[2713]: E0515 12:14:17.879120 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:17.879593 containerd[1575]: 
time="2025-05-15T12:14:17.879559417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrkcw,Uid:ba385db9-9064-4a1c-b79f-e976220c4dbe,Namespace:kube-system,Attempt:0,}" May 15 12:14:18.857363 kubelet[2713]: E0515 12:14:18.857326 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:18.858615 containerd[1575]: time="2025-05-15T12:14:18.858430438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:14:19.004218 containerd[1575]: time="2025-05-15T12:14:19.004152813Z" level=error msg="Failed to destroy network for sandbox \"36bdfcdc2e6f4c825e7a6fa4315a6e71ab8e650b7331c7666d54e20b1173a7b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.006381 systemd[1]: run-netns-cni\x2dc4465593\x2d7e4f\x2d3673\x2dbd8f\x2dc9e3447e0c93.mount: Deactivated successfully. 
May 15 12:14:19.080524 containerd[1575]: time="2025-05-15T12:14:19.080463961Z" level=error msg="Failed to destroy network for sandbox \"df02e8a81465afef192b12179125c09ffb36f0b5925a1d2b0d68eb658c30b44b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.084893 containerd[1575]: time="2025-05-15T12:14:19.084857899Z" level=error msg="Failed to destroy network for sandbox \"1d0536fbad6cb9b60135517299366cc43ec89483adf5c20521ca307af5481233\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.104870 containerd[1575]: time="2025-05-15T12:14:19.104813925Z" level=error msg="Failed to destroy network for sandbox \"126561a1ded3f9cadf7379cf14a8ef615088154cdb3e90ecfb80eab88e5e3ac4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.123500 containerd[1575]: time="2025-05-15T12:14:19.123141480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bb49fdbb-gjpfj,Uid:5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36bdfcdc2e6f4c825e7a6fa4315a6e71ab8e650b7331c7666d54e20b1173a7b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.123710 kubelet[2713]: E0515 12:14:19.123567 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"36bdfcdc2e6f4c825e7a6fa4315a6e71ab8e650b7331c7666d54e20b1173a7b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.124250 kubelet[2713]: E0515 12:14:19.124188 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36bdfcdc2e6f4c825e7a6fa4315a6e71ab8e650b7331c7666d54e20b1173a7b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bb49fdbb-gjpfj" May 15 12:14:19.124250 kubelet[2713]: E0515 12:14:19.124231 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36bdfcdc2e6f4c825e7a6fa4315a6e71ab8e650b7331c7666d54e20b1173a7b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bb49fdbb-gjpfj" May 15 12:14:19.124473 kubelet[2713]: E0515 12:14:19.124294 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58bb49fdbb-gjpfj_calico-system(5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58bb49fdbb-gjpfj_calico-system(5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36bdfcdc2e6f4c825e7a6fa4315a6e71ab8e650b7331c7666d54e20b1173a7b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bb49fdbb-gjpfj" podUID="5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7" May 15 12:14:19.165824 containerd[1575]: time="2025-05-15T12:14:19.165706987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-86cls,Uid:1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df02e8a81465afef192b12179125c09ffb36f0b5925a1d2b0d68eb658c30b44b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.167846 kubelet[2713]: E0515 12:14:19.167805 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df02e8a81465afef192b12179125c09ffb36f0b5925a1d2b0d68eb658c30b44b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.167999 kubelet[2713]: E0515 12:14:19.167963 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df02e8a81465afef192b12179125c09ffb36f0b5925a1d2b0d68eb658c30b44b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b46b7b54-86cls" May 15 12:14:19.168089 kubelet[2713]: E0515 12:14:19.168074 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df02e8a81465afef192b12179125c09ffb36f0b5925a1d2b0d68eb658c30b44b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b46b7b54-86cls" May 15 12:14:19.168297 kubelet[2713]: E0515 12:14:19.168184 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b46b7b54-86cls_calico-apiserver(1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b46b7b54-86cls_calico-apiserver(1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df02e8a81465afef192b12179125c09ffb36f0b5925a1d2b0d68eb658c30b44b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b46b7b54-86cls" podUID="1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b" May 15 12:14:19.176761 containerd[1575]: time="2025-05-15T12:14:19.176676633Z" level=error msg="Failed to destroy network for sandbox \"6e4dd14d3e345648b72802faf26de65e33f399cb7c65dac5283c776ea4efe3fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.178986 containerd[1575]: time="2025-05-15T12:14:19.178940978Z" level=error msg="Failed to destroy network for sandbox \"1c268af36e3654cbd479cc616fb8f06b8d35fcdd11e22ecfd144408666639142\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.183796 containerd[1575]: time="2025-05-15T12:14:19.183744757Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-b8nhc,Uid:9b9407b4-3942-47d3-901c-4a0dfdbc7501,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0536fbad6cb9b60135517299366cc43ec89483adf5c20521ca307af5481233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.184012 kubelet[2713]: E0515 12:14:19.183967 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0536fbad6cb9b60135517299366cc43ec89483adf5c20521ca307af5481233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.184078 kubelet[2713]: E0515 12:14:19.184038 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0536fbad6cb9b60135517299366cc43ec89483adf5c20521ca307af5481233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b46b7b54-b8nhc" May 15 12:14:19.184078 kubelet[2713]: E0515 12:14:19.184061 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0536fbad6cb9b60135517299366cc43ec89483adf5c20521ca307af5481233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b46b7b54-b8nhc" May 15 12:14:19.184123 kubelet[2713]: E0515 12:14:19.184103 2713 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b46b7b54-b8nhc_calico-apiserver(9b9407b4-3942-47d3-901c-4a0dfdbc7501)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b46b7b54-b8nhc_calico-apiserver(9b9407b4-3942-47d3-901c-4a0dfdbc7501)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0536fbad6cb9b60135517299366cc43ec89483adf5c20521ca307af5481233\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b46b7b54-b8nhc" podUID="9b9407b4-3942-47d3-901c-4a0dfdbc7501" May 15 12:14:19.212854 containerd[1575]: time="2025-05-15T12:14:19.212808269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7xqh,Uid:90d37edc-6839-41f5-b163-f0eaf370a904,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"126561a1ded3f9cadf7379cf14a8ef615088154cdb3e90ecfb80eab88e5e3ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.213078 kubelet[2713]: E0515 12:14:19.213027 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"126561a1ded3f9cadf7379cf14a8ef615088154cdb3e90ecfb80eab88e5e3ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.213078 kubelet[2713]: E0515 12:14:19.213078 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"126561a1ded3f9cadf7379cf14a8ef615088154cdb3e90ecfb80eab88e5e3ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7xqh" May 15 12:14:19.213297 kubelet[2713]: E0515 12:14:19.213095 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"126561a1ded3f9cadf7379cf14a8ef615088154cdb3e90ecfb80eab88e5e3ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7xqh" May 15 12:14:19.213297 kubelet[2713]: E0515 12:14:19.213130 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7xqh_calico-system(90d37edc-6839-41f5-b163-f0eaf370a904)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7xqh_calico-system(90d37edc-6839-41f5-b163-f0eaf370a904)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"126561a1ded3f9cadf7379cf14a8ef615088154cdb3e90ecfb80eab88e5e3ac4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:19.256598 containerd[1575]: time="2025-05-15T12:14:19.256532404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5qsh,Uid:b38549eb-99b5-40a2-ae9b-05f5e9ea660b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4dd14d3e345648b72802faf26de65e33f399cb7c65dac5283c776ea4efe3fc\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.256856 kubelet[2713]: E0515 12:14:19.256820 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4dd14d3e345648b72802faf26de65e33f399cb7c65dac5283c776ea4efe3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.256924 kubelet[2713]: E0515 12:14:19.256885 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4dd14d3e345648b72802faf26de65e33f399cb7c65dac5283c776ea4efe3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q5qsh" May 15 12:14:19.256924 kubelet[2713]: E0515 12:14:19.256907 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4dd14d3e345648b72802faf26de65e33f399cb7c65dac5283c776ea4efe3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q5qsh" May 15 12:14:19.256985 kubelet[2713]: E0515 12:14:19.256949 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q5qsh_kube-system(b38549eb-99b5-40a2-ae9b-05f5e9ea660b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q5qsh_kube-system(b38549eb-99b5-40a2-ae9b-05f5e9ea660b)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"6e4dd14d3e345648b72802faf26de65e33f399cb7c65dac5283c776ea4efe3fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q5qsh" podUID="b38549eb-99b5-40a2-ae9b-05f5e9ea660b" May 15 12:14:19.321040 containerd[1575]: time="2025-05-15T12:14:19.320866493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrkcw,Uid:ba385db9-9064-4a1c-b79f-e976220c4dbe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c268af36e3654cbd479cc616fb8f06b8d35fcdd11e22ecfd144408666639142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.321446 kubelet[2713]: E0515 12:14:19.321384 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c268af36e3654cbd479cc616fb8f06b8d35fcdd11e22ecfd144408666639142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:19.321616 kubelet[2713]: E0515 12:14:19.321459 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c268af36e3654cbd479cc616fb8f06b8d35fcdd11e22ecfd144408666639142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qrkcw" May 15 12:14:19.321616 kubelet[2713]: E0515 12:14:19.321486 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c268af36e3654cbd479cc616fb8f06b8d35fcdd11e22ecfd144408666639142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qrkcw" May 15 12:14:19.321616 kubelet[2713]: E0515 12:14:19.321551 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qrkcw_kube-system(ba385db9-9064-4a1c-b79f-e976220c4dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qrkcw_kube-system(ba385db9-9064-4a1c-b79f-e976220c4dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c268af36e3654cbd479cc616fb8f06b8d35fcdd11e22ecfd144408666639142\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qrkcw" podUID="ba385db9-9064-4a1c-b79f-e976220c4dbe" May 15 12:14:19.524893 systemd[1]: run-netns-cni\x2d3449fd91\x2db5bf\x2d8ef8\x2ddedf\x2de5cb2ccdf9d2.mount: Deactivated successfully. May 15 12:14:19.525017 systemd[1]: run-netns-cni\x2d9ff31757\x2d80ab\x2d0ed1\x2dc316\x2d9868565249ed.mount: Deactivated successfully. May 15 12:14:19.525092 systemd[1]: run-netns-cni\x2db36f1977\x2de7c9\x2d10f8\x2d9f2e\x2d72622d045bdc.mount: Deactivated successfully. May 15 12:14:19.525177 systemd[1]: run-netns-cni\x2d176a363e\x2d86cc\x2d731c\x2d485d\x2dd2244e5b7ed3.mount: Deactivated successfully. May 15 12:14:27.845237 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:33056.service - OpenSSH per-connection server daemon (10.0.0.1:33056). 
May 15 12:14:27.905254 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 33056 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:14:27.907159 sshd-session[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:14:27.915243 systemd-logind[1559]: New session 8 of user core. May 15 12:14:27.932925 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:14:28.085316 sshd[3772]: Connection closed by 10.0.0.1 port 33056 May 15 12:14:28.085704 sshd-session[3766]: pam_unix(sshd:session): session closed for user core May 15 12:14:28.090665 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:33056.service: Deactivated successfully. May 15 12:14:28.094136 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:14:28.095369 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. May 15 12:14:28.097068 systemd-logind[1559]: Removed session 8. May 15 12:14:30.005716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3976511349.mount: Deactivated successfully. 
May 15 12:14:30.666921 kubelet[2713]: E0515 12:14:30.666748 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:30.666921 kubelet[2713]: E0515 12:14:30.666748 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:30.667385 containerd[1575]: time="2025-05-15T12:14:30.667196171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7xqh,Uid:90d37edc-6839-41f5-b163-f0eaf370a904,Namespace:calico-system,Attempt:0,}" May 15 12:14:30.667604 containerd[1575]: time="2025-05-15T12:14:30.667400766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5qsh,Uid:b38549eb-99b5-40a2-ae9b-05f5e9ea660b,Namespace:kube-system,Attempt:0,}" May 15 12:14:30.667604 containerd[1575]: time="2025-05-15T12:14:30.667537132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrkcw,Uid:ba385db9-9064-4a1c-b79f-e976220c4dbe,Namespace:kube-system,Attempt:0,}" May 15 12:14:31.290664 containerd[1575]: time="2025-05-15T12:14:31.290591919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:31.291766 containerd[1575]: time="2025-05-15T12:14:31.291737049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:14:31.293216 containerd[1575]: time="2025-05-15T12:14:31.293184376Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:31.309891 containerd[1575]: time="2025-05-15T12:14:31.309824006Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:31.310775 containerd[1575]: time="2025-05-15T12:14:31.310741328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 12.452275464s" May 15 12:14:31.310878 containerd[1575]: time="2025-05-15T12:14:31.310855662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 12:14:31.360398 containerd[1575]: time="2025-05-15T12:14:31.360360202Z" level=info msg="CreateContainer within sandbox \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:14:31.369391 containerd[1575]: time="2025-05-15T12:14:31.369131581Z" level=error msg="Failed to destroy network for sandbox \"df16b4447f500f1611946e6f1d6a5dd8e83a587105c71b258ee23aab7a56b3b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.371937 systemd[1]: run-netns-cni\x2d7ed0eb7d\x2dceea\x2d1d51\x2d78bf\x2d22bd1e4406c0.mount: Deactivated successfully. 
May 15 12:14:31.372354 containerd[1575]: time="2025-05-15T12:14:31.372145149Z" level=error msg="Failed to destroy network for sandbox \"915cfb639b412d2692956861513bc539c45e7a44ef128355366e1b2680fb2e95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.377502 containerd[1575]: time="2025-05-15T12:14:31.377462101Z" level=error msg="Failed to destroy network for sandbox \"d3cea6a988fb57eb7ba6765916f56e94264b9c9b473f9d7197aac4268af13ec7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.557576 containerd[1575]: time="2025-05-15T12:14:31.557135421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5qsh,Uid:b38549eb-99b5-40a2-ae9b-05f5e9ea660b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df16b4447f500f1611946e6f1d6a5dd8e83a587105c71b258ee23aab7a56b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.558367 kubelet[2713]: E0515 12:14:31.557427 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df16b4447f500f1611946e6f1d6a5dd8e83a587105c71b258ee23aab7a56b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.558367 kubelet[2713]: E0515 12:14:31.557497 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"df16b4447f500f1611946e6f1d6a5dd8e83a587105c71b258ee23aab7a56b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q5qsh" May 15 12:14:31.558367 kubelet[2713]: E0515 12:14:31.557521 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df16b4447f500f1611946e6f1d6a5dd8e83a587105c71b258ee23aab7a56b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q5qsh" May 15 12:14:31.558529 kubelet[2713]: E0515 12:14:31.557564 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q5qsh_kube-system(b38549eb-99b5-40a2-ae9b-05f5e9ea660b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q5qsh_kube-system(b38549eb-99b5-40a2-ae9b-05f5e9ea660b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df16b4447f500f1611946e6f1d6a5dd8e83a587105c71b258ee23aab7a56b3b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q5qsh" podUID="b38549eb-99b5-40a2-ae9b-05f5e9ea660b" May 15 12:14:31.717628 containerd[1575]: time="2025-05-15T12:14:31.717580462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7xqh,Uid:90d37edc-6839-41f5-b163-f0eaf370a904,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"915cfb639b412d2692956861513bc539c45e7a44ef128355366e1b2680fb2e95\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.718135 kubelet[2713]: E0515 12:14:31.717729 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"915cfb639b412d2692956861513bc539c45e7a44ef128355366e1b2680fb2e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.718135 kubelet[2713]: E0515 12:14:31.717781 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"915cfb639b412d2692956861513bc539c45e7a44ef128355366e1b2680fb2e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7xqh" May 15 12:14:31.718135 kubelet[2713]: E0515 12:14:31.717801 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"915cfb639b412d2692956861513bc539c45e7a44ef128355366e1b2680fb2e95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7xqh" May 15 12:14:31.718431 kubelet[2713]: E0515 12:14:31.717839 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7xqh_calico-system(90d37edc-6839-41f5-b163-f0eaf370a904)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7xqh_calico-system(90d37edc-6839-41f5-b163-f0eaf370a904)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"915cfb639b412d2692956861513bc539c45e7a44ef128355366e1b2680fb2e95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7xqh" podUID="90d37edc-6839-41f5-b163-f0eaf370a904" May 15 12:14:31.969213 containerd[1575]: time="2025-05-15T12:14:31.968894923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrkcw,Uid:ba385db9-9064-4a1c-b79f-e976220c4dbe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3cea6a988fb57eb7ba6765916f56e94264b9c9b473f9d7197aac4268af13ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.969423 kubelet[2713]: E0515 12:14:31.969156 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3cea6a988fb57eb7ba6765916f56e94264b9c9b473f9d7197aac4268af13ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:14:31.969423 kubelet[2713]: E0515 12:14:31.969221 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3cea6a988fb57eb7ba6765916f56e94264b9c9b473f9d7197aac4268af13ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qrkcw" May 15 12:14:31.969423 kubelet[2713]: E0515 12:14:31.969241 2713 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3cea6a988fb57eb7ba6765916f56e94264b9c9b473f9d7197aac4268af13ec7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qrkcw" May 15 12:14:31.969634 kubelet[2713]: E0515 12:14:31.969294 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qrkcw_kube-system(ba385db9-9064-4a1c-b79f-e976220c4dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qrkcw_kube-system(ba385db9-9064-4a1c-b79f-e976220c4dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3cea6a988fb57eb7ba6765916f56e94264b9c9b473f9d7197aac4268af13ec7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qrkcw" podUID="ba385db9-9064-4a1c-b79f-e976220c4dbe" May 15 12:14:31.990966 containerd[1575]: time="2025-05-15T12:14:31.990894445Z" level=info msg="Container 612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:32.094398 containerd[1575]: time="2025-05-15T12:14:32.094333613Z" level=info msg="CreateContainer within sandbox \"00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\"" May 15 12:14:32.094922 containerd[1575]: time="2025-05-15T12:14:32.094891641Z" level=info msg="StartContainer for \"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\"" May 15 12:14:32.096403 containerd[1575]: 
time="2025-05-15T12:14:32.096376979Z" level=info msg="connecting to shim 612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa" address="unix:///run/containerd/s/df0fd79993785c93a2d99eec73b7b976c5af9124a0c16b9e5ab681c427e5a018" protocol=ttrpc version=3 May 15 12:14:32.161951 systemd[1]: Started cri-containerd-612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa.scope - libcontainer container 612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa. May 15 12:14:32.214251 containerd[1575]: time="2025-05-15T12:14:32.214213354Z" level=info msg="StartContainer for \"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" returns successfully" May 15 12:14:32.267925 kubelet[2713]: E0515 12:14:32.266522 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:32.270512 systemd[1]: run-netns-cni\x2d1b54d1aa\x2de953\x2d1532\x2dc630\x2d5cfe3c6566b4.mount: Deactivated successfully. May 15 12:14:32.270716 systemd[1]: run-netns-cni\x2d49e3572d\x2dff71\x2db8fa\x2dfd15\x2dd41f4153fa09.mount: Deactivated successfully. May 15 12:14:32.276516 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 12:14:32.278375 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 15 12:14:32.290224 kubelet[2713]: I0515 12:14:32.289908 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6x577" podStartSLOduration=2.304855974 podStartE2EDuration="39.289562277s" podCreationTimestamp="2025-05-15 12:13:53 +0000 UTC" firstStartedPulling="2025-05-15 12:13:54.358205412 +0000 UTC m=+12.841697195" lastFinishedPulling="2025-05-15 12:14:31.342911705 +0000 UTC m=+49.826403498" observedRunningTime="2025-05-15 12:14:32.289114266 +0000 UTC m=+50.772606059" watchObservedRunningTime="2025-05-15 12:14:32.289562277 +0000 UTC m=+50.773054060" May 15 12:14:32.416054 containerd[1575]: time="2025-05-15T12:14:32.415995802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"cefa839e748996a2fad6e4aee268b26ff712983409efd9d724ad1ac5aa1469fb\" pid:3965 exit_status:1 exited_at:{seconds:1747311272 nanos:415372221}" May 15 12:14:33.100069 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:33070.service - OpenSSH per-connection server daemon (10.0.0.1:33070). May 15 12:14:33.172580 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 33070 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:14:33.174570 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:14:33.181595 systemd-logind[1559]: New session 9 of user core. May 15 12:14:33.189936 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 12:14:33.267125 kubelet[2713]: E0515 12:14:33.267086 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:33.338082 sshd[4000]: Connection closed by 10.0.0.1 port 33070 May 15 12:14:33.338735 sshd-session[3998]: pam_unix(sshd:session): session closed for user core May 15 12:14:33.342059 containerd[1575]: time="2025-05-15T12:14:33.341964113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"7727ecf55972f03492113dae1eaa3d39ef1cf552fa3a8cc03d591fabd78853c6\" pid:4022 exit_status:1 exited_at:{seconds:1747311273 nanos:341579361}" May 15 12:14:33.344476 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:33070.service: Deactivated successfully. May 15 12:14:33.346592 systemd[1]: session-9.scope: Deactivated successfully. May 15 12:14:33.347886 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. May 15 12:14:33.349334 systemd-logind[1559]: Removed session 9. 
May 15 12:14:33.667372 containerd[1575]: time="2025-05-15T12:14:33.667318881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-86cls,Uid:1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b,Namespace:calico-apiserver,Attempt:0,}" May 15 12:14:34.149625 systemd-networkd[1509]: calid3910948723: Link UP May 15 12:14:34.149873 systemd-networkd[1509]: calid3910948723: Gained carrier May 15 12:14:34.304597 systemd-networkd[1509]: vxlan.calico: Link UP May 15 12:14:34.304618 systemd-networkd[1509]: vxlan.calico: Gained carrier May 15 12:14:34.325394 containerd[1575]: 2025-05-15 12:14:33.922 [INFO][4071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 12:14:34.325394 containerd[1575]: 2025-05-15 12:14:33.960 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0 calico-apiserver-69b46b7b54- calico-apiserver 1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b 733 0 2025-05-15 12:13:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b46b7b54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69b46b7b54-86cls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid3910948723 [] []}} ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-" May 15 12:14:34.325394 containerd[1575]: 2025-05-15 12:14:33.961 [INFO][4071] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.325394 containerd[1575]: 2025-05-15 12:14:34.067 [INFO][4148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" HandleID="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Workload="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.079 [INFO][4148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" HandleID="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Workload="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005434a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69b46b7b54-86cls", "timestamp":"2025-05-15 12:14:34.066759308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.079 [INFO][4148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.080 [INFO][4148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.080 [INFO][4148] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.082 [INFO][4148] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" host="localhost" May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.088 [INFO][4148] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.092 [INFO][4148] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.094 [INFO][4148] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.095 [INFO][4148] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:14:34.325705 containerd[1575]: 2025-05-15 12:14:34.095 [INFO][4148] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" host="localhost" May 15 12:14:34.325996 containerd[1575]: 2025-05-15 12:14:34.097 [INFO][4148] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56 May 15 12:14:34.325996 containerd[1575]: 2025-05-15 12:14:34.107 [INFO][4148] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" host="localhost" May 15 12:14:34.325996 containerd[1575]: 2025-05-15 12:14:34.138 [INFO][4148] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" host="localhost" May 15 12:14:34.325996 containerd[1575]: 2025-05-15 12:14:34.138 [INFO][4148] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" host="localhost" May 15 12:14:34.325996 containerd[1575]: 2025-05-15 12:14:34.138 [INFO][4148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:14:34.325996 containerd[1575]: 2025-05-15 12:14:34.138 [INFO][4148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" HandleID="k8s-pod-network.4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Workload="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.326165 containerd[1575]: 2025-05-15 12:14:34.142 [INFO][4071] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0", GenerateName:"calico-apiserver-69b46b7b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b46b7b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69b46b7b54-86cls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3910948723", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:34.326219 containerd[1575]: 2025-05-15 12:14:34.142 [INFO][4071] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.326219 containerd[1575]: 2025-05-15 12:14:34.142 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3910948723 ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.326219 containerd[1575]: 2025-05-15 12:14:34.149 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.326351 containerd[1575]: 2025-05-15 12:14:34.152 [INFO][4071] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0", GenerateName:"calico-apiserver-69b46b7b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b46b7b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56", Pod:"calico-apiserver-69b46b7b54-86cls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3910948723", MAC:"b2:eb:c7:87:cc:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:34.326407 containerd[1575]: 2025-05-15 12:14:34.317 [INFO][4071] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" 
Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-86cls" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--86cls-eth0" May 15 12:14:34.647777 containerd[1575]: time="2025-05-15T12:14:34.647715368Z" level=info msg="connecting to shim 4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56" address="unix:///run/containerd/s/546118e7b0c1a767efa08db5e69a975a212f03c934f5093cd22cb749646057ea" namespace=k8s.io protocol=ttrpc version=3 May 15 12:14:34.668260 containerd[1575]: time="2025-05-15T12:14:34.668199649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-b8nhc,Uid:9b9407b4-3942-47d3-901c-4a0dfdbc7501,Namespace:calico-apiserver,Attempt:0,}" May 15 12:14:34.668442 containerd[1575]: time="2025-05-15T12:14:34.668273017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bb49fdbb-gjpfj,Uid:5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7,Namespace:calico-system,Attempt:0,}" May 15 12:14:34.683817 systemd[1]: Started cri-containerd-4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56.scope - libcontainer container 4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56. 
May 15 12:14:34.704965 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:14:34.745786 containerd[1575]: time="2025-05-15T12:14:34.745479802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-86cls,Uid:1a7c59bf-e3ab-47c1-9889-f9a1f7738d4b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56\"" May 15 12:14:34.747984 containerd[1575]: time="2025-05-15T12:14:34.747960779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:14:34.804922 systemd-networkd[1509]: calib9461cf8c45: Link UP May 15 12:14:34.805181 systemd-networkd[1509]: calib9461cf8c45: Gained carrier May 15 12:14:34.817882 containerd[1575]: 2025-05-15 12:14:34.715 [INFO][4301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0 calico-apiserver-69b46b7b54- calico-apiserver 9b9407b4-3942-47d3-901c-4a0dfdbc7501 736 0 2025-05-15 12:13:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b46b7b54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69b46b7b54-b8nhc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9461cf8c45 [] []}} ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-" May 15 12:14:34.817882 containerd[1575]: 2025-05-15 12:14:34.715 [INFO][4301] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.817882 containerd[1575]: 2025-05-15 12:14:34.753 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" HandleID="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Workload="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.763 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" HandleID="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Workload="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69b46b7b54-b8nhc", "timestamp":"2025-05-15 12:14:34.753330289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.763 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.764 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.764 [INFO][4337] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.769 [INFO][4337] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" host="localhost" May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.774 [INFO][4337] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.780 [INFO][4337] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.782 [INFO][4337] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.784 [INFO][4337] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:14:34.818108 containerd[1575]: 2025-05-15 12:14:34.784 [INFO][4337] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" host="localhost" May 15 12:14:34.818374 containerd[1575]: 2025-05-15 12:14:34.786 [INFO][4337] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8 May 15 12:14:34.818374 containerd[1575]: 2025-05-15 12:14:34.793 [INFO][4337] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" host="localhost" May 15 12:14:34.818374 containerd[1575]: 2025-05-15 12:14:34.798 [INFO][4337] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" host="localhost" May 15 12:14:34.818374 containerd[1575]: 2025-05-15 12:14:34.798 [INFO][4337] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" host="localhost" May 15 12:14:34.818374 containerd[1575]: 2025-05-15 12:14:34.798 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:14:34.818374 containerd[1575]: 2025-05-15 12:14:34.798 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" HandleID="k8s-pod-network.b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Workload="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.818524 containerd[1575]: 2025-05-15 12:14:34.801 [INFO][4301] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0", GenerateName:"calico-apiserver-69b46b7b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b9407b4-3942-47d3-901c-4a0dfdbc7501", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b46b7b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69b46b7b54-b8nhc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9461cf8c45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:34.818603 containerd[1575]: 2025-05-15 12:14:34.801 [INFO][4301] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.818603 containerd[1575]: 2025-05-15 12:14:34.801 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9461cf8c45 ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.818603 containerd[1575]: 2025-05-15 12:14:34.804 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.818732 containerd[1575]: 2025-05-15 12:14:34.804 [INFO][4301] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0", GenerateName:"calico-apiserver-69b46b7b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b9407b4-3942-47d3-901c-4a0dfdbc7501", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b46b7b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8", Pod:"calico-apiserver-69b46b7b54-b8nhc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9461cf8c45", MAC:"a2:8b:3f:e1:0e:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:34.818801 containerd[1575]: 2025-05-15 12:14:34.814 [INFO][4301] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" 
Namespace="calico-apiserver" Pod="calico-apiserver-69b46b7b54-b8nhc" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b46b7b54--b8nhc-eth0" May 15 12:14:34.865148 containerd[1575]: time="2025-05-15T12:14:34.865061931Z" level=info msg="connecting to shim b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8" address="unix:///run/containerd/s/f08d03e3f712b40d28808d31e75bb2c8865c95b56e7a6f8de05839b7517cee29" namespace=k8s.io protocol=ttrpc version=3 May 15 12:14:34.892900 systemd[1]: Started cri-containerd-b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8.scope - libcontainer container b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8. May 15 12:14:34.909822 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:14:34.913311 systemd-networkd[1509]: calibf0876f2b9e: Link UP May 15 12:14:34.914357 systemd-networkd[1509]: calibf0876f2b9e: Gained carrier May 15 12:14:34.931859 containerd[1575]: 2025-05-15 12:14:34.718 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0 calico-kube-controllers-58bb49fdbb- calico-system 5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7 732 0 2025-05-15 12:13:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58bb49fdbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58bb49fdbb-gjpfj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibf0876f2b9e [] []}} ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-" May 15 12:14:34.931859 containerd[1575]: 2025-05-15 12:14:34.718 [INFO][4312] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.931859 containerd[1575]: 2025-05-15 12:14:34.768 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" HandleID="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Workload="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.776 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" HandleID="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Workload="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58bb49fdbb-gjpfj", "timestamp":"2025-05-15 12:14:34.768470208 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.776 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.798 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.798 [INFO][4344] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.869 [INFO][4344] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" host="localhost" May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.877 [INFO][4344] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.882 [INFO][4344] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.884 [INFO][4344] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.887 [INFO][4344] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:14:34.932108 containerd[1575]: 2025-05-15 12:14:34.887 [INFO][4344] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" host="localhost" May 15 12:14:34.932386 containerd[1575]: 2025-05-15 12:14:34.889 [INFO][4344] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb May 15 12:14:34.932386 containerd[1575]: 2025-05-15 12:14:34.894 [INFO][4344] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" host="localhost" May 15 12:14:34.932386 containerd[1575]: 2025-05-15 12:14:34.904 [INFO][4344] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" host="localhost" May 15 12:14:34.932386 containerd[1575]: 2025-05-15 12:14:34.904 [INFO][4344] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" host="localhost" May 15 12:14:34.932386 containerd[1575]: 2025-05-15 12:14:34.904 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:14:34.932386 containerd[1575]: 2025-05-15 12:14:34.904 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" HandleID="k8s-pod-network.713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Workload="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.932632 containerd[1575]: 2025-05-15 12:14:34.910 [INFO][4312] cni-plugin/k8s.go 386: Populated endpoint ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0", GenerateName:"calico-kube-controllers-58bb49fdbb-", Namespace:"calico-system", SelfLink:"", UID:"5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bb49fdbb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58bb49fdbb-gjpfj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf0876f2b9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:34.932709 containerd[1575]: 2025-05-15 12:14:34.910 [INFO][4312] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.932709 containerd[1575]: 2025-05-15 12:14:34.910 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf0876f2b9e ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.932709 containerd[1575]: 2025-05-15 12:14:34.914 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.932783 containerd[1575]: 2025-05-15 12:14:34.915 [INFO][4312] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0", GenerateName:"calico-kube-controllers-58bb49fdbb-", Namespace:"calico-system", SelfLink:"", UID:"5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bb49fdbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb", Pod:"calico-kube-controllers-58bb49fdbb-gjpfj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf0876f2b9e", MAC:"e6:a9:c3:b4:c9:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:34.932834 containerd[1575]: 2025-05-15 12:14:34.928 [INFO][4312] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" Namespace="calico-system" Pod="calico-kube-controllers-58bb49fdbb-gjpfj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bb49fdbb--gjpfj-eth0" May 15 12:14:34.953520 containerd[1575]: time="2025-05-15T12:14:34.953465144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b46b7b54-b8nhc,Uid:9b9407b4-3942-47d3-901c-4a0dfdbc7501,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8\"" May 15 12:14:34.969275 containerd[1575]: time="2025-05-15T12:14:34.969204358Z" level=info msg="connecting to shim 713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb" address="unix:///run/containerd/s/2e46afd3885c687e9cd6d6626a95c1a417689a05da9c23b8527478be76841109" namespace=k8s.io protocol=ttrpc version=3 May 15 12:14:34.997861 systemd[1]: Started cri-containerd-713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb.scope - libcontainer container 713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb. 
May 15 12:14:35.013969 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 12:14:35.048035 containerd[1575]: time="2025-05-15T12:14:35.047983502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bb49fdbb-gjpfj,Uid:5dfbf59c-651b-4f79-b78e-20aa6ddd7ab7,Namespace:calico-system,Attempt:0,} returns sandbox id \"713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb\""
May 15 12:14:35.376884 systemd-networkd[1509]: calid3910948723: Gained IPv6LL
May 15 12:14:35.824921 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL
May 15 12:14:35.888877 systemd-networkd[1509]: calib9461cf8c45: Gained IPv6LL
May 15 12:14:36.080860 systemd-networkd[1509]: calibf0876f2b9e: Gained IPv6LL
May 15 12:14:38.358979 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:47812.service - OpenSSH per-connection server daemon (10.0.0.1:47812).
May 15 12:14:38.421397 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 47812 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:14:38.423241 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:14:38.427965 systemd-logind[1559]: New session 10 of user core.
May 15 12:14:38.438929 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 12:14:38.557261 sshd[4491]: Connection closed by 10.0.0.1 port 47812
May 15 12:14:38.557570 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
May 15 12:14:38.562431 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:47812.service: Deactivated successfully.
May 15 12:14:38.565087 systemd[1]: session-10.scope: Deactivated successfully.
May 15 12:14:38.566018 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit.
May 15 12:14:38.567975 systemd-logind[1559]: Removed session 10.
May 15 12:14:41.392682 containerd[1575]: time="2025-05-15T12:14:41.392599491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:41.393345 containerd[1575]: time="2025-05-15T12:14:41.393321556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 15 12:14:41.394601 containerd[1575]: time="2025-05-15T12:14:41.394557546Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:41.396703 containerd[1575]: time="2025-05-15T12:14:41.396676382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:41.397230 containerd[1575]: time="2025-05-15T12:14:41.397208039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 6.649139308s" May 15 12:14:41.397302 containerd[1575]: time="2025-05-15T12:14:41.397231493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 12:14:41.398039 containerd[1575]: time="2025-05-15T12:14:41.398016006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:14:41.399139 containerd[1575]: time="2025-05-15T12:14:41.399111662Z" level=info msg="CreateContainer within sandbox 
\"4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:14:41.408931 containerd[1575]: time="2025-05-15T12:14:41.408873731Z" level=info msg="Container dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:41.417721 containerd[1575]: time="2025-05-15T12:14:41.417660550Z" level=info msg="CreateContainer within sandbox \"4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2\"" May 15 12:14:41.418222 containerd[1575]: time="2025-05-15T12:14:41.418181579Z" level=info msg="StartContainer for \"dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2\"" May 15 12:14:41.419243 containerd[1575]: time="2025-05-15T12:14:41.419194089Z" level=info msg="connecting to shim dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2" address="unix:///run/containerd/s/546118e7b0c1a767efa08db5e69a975a212f03c934f5093cd22cb749646057ea" protocol=ttrpc version=3 May 15 12:14:41.445915 systemd[1]: Started cri-containerd-dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2.scope - libcontainer container dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2. 
May 15 12:14:41.499334 containerd[1575]: time="2025-05-15T12:14:41.499281791Z" level=info msg="StartContainer for \"dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2\" returns successfully" May 15 12:14:42.072236 containerd[1575]: time="2025-05-15T12:14:42.072167848Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:42.073349 containerd[1575]: time="2025-05-15T12:14:42.073288281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 12:14:42.075094 containerd[1575]: time="2025-05-15T12:14:42.075063733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 677.020285ms" May 15 12:14:42.075094 containerd[1575]: time="2025-05-15T12:14:42.075093949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 12:14:42.076001 containerd[1575]: time="2025-05-15T12:14:42.075962118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 12:14:42.077301 containerd[1575]: time="2025-05-15T12:14:42.077265574Z" level=info msg="CreateContainer within sandbox \"b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:14:42.086376 containerd[1575]: time="2025-05-15T12:14:42.086313373Z" level=info msg="Container de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:42.095366 
containerd[1575]: time="2025-05-15T12:14:42.095316228Z" level=info msg="CreateContainer within sandbox \"b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc\"" May 15 12:14:42.096248 containerd[1575]: time="2025-05-15T12:14:42.096202089Z" level=info msg="StartContainer for \"de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc\"" May 15 12:14:42.097688 containerd[1575]: time="2025-05-15T12:14:42.097616824Z" level=info msg="connecting to shim de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc" address="unix:///run/containerd/s/f08d03e3f712b40d28808d31e75bb2c8865c95b56e7a6f8de05839b7517cee29" protocol=ttrpc version=3 May 15 12:14:42.121883 systemd[1]: Started cri-containerd-de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc.scope - libcontainer container de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc. 
May 15 12:14:42.190198 containerd[1575]: time="2025-05-15T12:14:42.190133777Z" level=info msg="StartContainer for \"de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc\" returns successfully" May 15 12:14:42.316512 kubelet[2713]: I0515 12:14:42.316444 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b46b7b54-b8nhc" podStartSLOduration=42.195524941 podStartE2EDuration="49.316429897s" podCreationTimestamp="2025-05-15 12:13:53 +0000 UTC" firstStartedPulling="2025-05-15 12:14:34.954837911 +0000 UTC m=+53.438329704" lastFinishedPulling="2025-05-15 12:14:42.075742867 +0000 UTC m=+60.559234660" observedRunningTime="2025-05-15 12:14:42.314113921 +0000 UTC m=+60.797605714" watchObservedRunningTime="2025-05-15 12:14:42.316429897 +0000 UTC m=+60.799921690" May 15 12:14:42.327466 kubelet[2713]: I0515 12:14:42.327247 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b46b7b54-86cls" podStartSLOduration=42.676876478 podStartE2EDuration="49.327235014s" podCreationTimestamp="2025-05-15 12:13:53 +0000 UTC" firstStartedPulling="2025-05-15 12:14:34.747532195 +0000 UTC m=+53.231023988" lastFinishedPulling="2025-05-15 12:14:41.397890731 +0000 UTC m=+59.881382524" observedRunningTime="2025-05-15 12:14:42.326478865 +0000 UTC m=+60.809970658" watchObservedRunningTime="2025-05-15 12:14:42.327235014 +0000 UTC m=+60.810726807" May 15 12:14:42.667026 containerd[1575]: time="2025-05-15T12:14:42.666906383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7xqh,Uid:90d37edc-6839-41f5-b163-f0eaf370a904,Namespace:calico-system,Attempt:0,}" May 15 12:14:42.948138 systemd-networkd[1509]: cali6eb43cd7195: Link UP May 15 12:14:42.948525 systemd-networkd[1509]: cali6eb43cd7195: Gained carrier May 15 12:14:42.963837 containerd[1575]: 2025-05-15 12:14:42.872 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c7xqh-eth0 csi-node-driver- calico-system 90d37edc-6839-41f5-b163-f0eaf370a904 602 0 2025-05-15 12:13:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c7xqh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6eb43cd7195 [] []}} ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-" May 15 12:14:42.963837 containerd[1575]: 2025-05-15 12:14:42.872 [INFO][4594] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.963837 containerd[1575]: 2025-05-15 12:14:42.904 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" HandleID="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Workload="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.912 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" HandleID="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Workload="localhost-k8s-csi--node--driver--c7xqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036b710), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c7xqh", "timestamp":"2025-05-15 12:14:42.904799291 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.913 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.913 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.913 [INFO][4610] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.915 [INFO][4610] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" host="localhost" May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.920 [INFO][4610] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.925 [INFO][4610] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.927 [INFO][4610] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.930 [INFO][4610] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:14:42.964147 containerd[1575]: 2025-05-15 12:14:42.930 [INFO][4610] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" host="localhost" May 15 12:14:42.964480 
containerd[1575]: 2025-05-15 12:14:42.932 [INFO][4610] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19 May 15 12:14:42.964480 containerd[1575]: 2025-05-15 12:14:42.936 [INFO][4610] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" host="localhost" May 15 12:14:42.964480 containerd[1575]: 2025-05-15 12:14:42.941 [INFO][4610] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" host="localhost" May 15 12:14:42.964480 containerd[1575]: 2025-05-15 12:14:42.941 [INFO][4610] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" host="localhost" May 15 12:14:42.964480 containerd[1575]: 2025-05-15 12:14:42.941 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:14:42.964480 containerd[1575]: 2025-05-15 12:14:42.941 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" HandleID="k8s-pod-network.15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Workload="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.964600 containerd[1575]: 2025-05-15 12:14:42.945 [INFO][4594] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7xqh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"90d37edc-6839-41f5-b163-f0eaf370a904", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c7xqh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6eb43cd7195", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:42.964600 containerd[1575]: 2025-05-15 12:14:42.945 [INFO][4594] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.965757 containerd[1575]: 2025-05-15 12:14:42.945 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6eb43cd7195 ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.965757 containerd[1575]: 2025-05-15 12:14:42.948 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.966095 containerd[1575]: 2025-05-15 12:14:42.949 [INFO][4594] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7xqh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"90d37edc-6839-41f5-b163-f0eaf370a904", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 53, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19", Pod:"csi-node-driver-c7xqh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6eb43cd7195", MAC:"f6:04:f4:8b:a4:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:42.966155 containerd[1575]: 2025-05-15 12:14:42.959 [INFO][4594] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" Namespace="calico-system" Pod="csi-node-driver-c7xqh" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7xqh-eth0" May 15 12:14:42.996737 containerd[1575]: time="2025-05-15T12:14:42.996681904Z" level=info msg="connecting to shim 15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19" address="unix:///run/containerd/s/9ded0d2eb5ca9f21842b79e89ddd354c3bcf92a39b4a8235fba914ff8e140ab4" namespace=k8s.io protocol=ttrpc version=3 May 15 12:14:43.037935 systemd[1]: Started cri-containerd-15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19.scope - libcontainer container 15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19. 
May 15 12:14:43.056168 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 12:14:43.079429 containerd[1575]: time="2025-05-15T12:14:43.079357775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7xqh,Uid:90d37edc-6839-41f5-b163-f0eaf370a904,Namespace:calico-system,Attempt:0,} returns sandbox id \"15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19\""
May 15 12:14:43.307342 kubelet[2713]: I0515 12:14:43.306348 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 12:14:43.570373 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:58326.service - OpenSSH per-connection server daemon (10.0.0.1:58326).
May 15 12:14:43.632591 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 58326 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:14:43.634392 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:14:43.640138 systemd-logind[1559]: New session 11 of user core.
May 15 12:14:43.647900 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 12:14:43.808957 sshd[4683]: Connection closed by 10.0.0.1 port 58326
May 15 12:14:43.809333 sshd-session[4681]: pam_unix(sshd:session): session closed for user core
May 15 12:14:43.814979 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:58326.service: Deactivated successfully.
May 15 12:14:43.817585 systemd[1]: session-11.scope: Deactivated successfully.
May 15 12:14:43.818671 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit.
May 15 12:14:43.820689 systemd-logind[1559]: Removed session 11.
May 15 12:14:44.464907 systemd-networkd[1509]: cali6eb43cd7195: Gained IPv6LL May 15 12:14:44.666637 kubelet[2713]: E0515 12:14:44.666582 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:44.667163 containerd[1575]: time="2025-05-15T12:14:44.667105142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrkcw,Uid:ba385db9-9064-4a1c-b79f-e976220c4dbe,Namespace:kube-system,Attempt:0,}" May 15 12:14:44.782346 systemd-networkd[1509]: cali5bbea2adcf0: Link UP May 15 12:14:44.783078 systemd-networkd[1509]: cali5bbea2adcf0: Gained carrier May 15 12:14:44.795019 containerd[1575]: 2025-05-15 12:14:44.710 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0 coredns-668d6bf9bc- kube-system ba385db9-9064-4a1c-b79f-e976220c4dbe 735 0 2025-05-15 12:13:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qrkcw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5bbea2adcf0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-" May 15 12:14:44.795019 containerd[1575]: 2025-05-15 12:14:44.711 [INFO][4708] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 12:14:44.795019 containerd[1575]: 2025-05-15 12:14:44.743 
[INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" HandleID="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Workload="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.753 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" HandleID="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Workload="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qrkcw", "timestamp":"2025-05-15 12:14:44.743921253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.753 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.753 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.753 [INFO][4718] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.755 [INFO][4718] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" host="localhost" May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.760 [INFO][4718] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.763 [INFO][4718] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.765 [INFO][4718] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.767 [INFO][4718] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:14:44.795321 containerd[1575]: 2025-05-15 12:14:44.767 [INFO][4718] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" host="localhost" May 15 12:14:44.795888 containerd[1575]: 2025-05-15 12:14:44.769 [INFO][4718] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584 May 15 12:14:44.795888 containerd[1575]: 2025-05-15 12:14:44.772 [INFO][4718] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" host="localhost" May 15 12:14:44.795888 containerd[1575]: 2025-05-15 12:14:44.777 [INFO][4718] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" host="localhost" May 15 12:14:44.795888 containerd[1575]: 2025-05-15 12:14:44.777 [INFO][4718] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" host="localhost" May 15 12:14:44.795888 containerd[1575]: 2025-05-15 12:14:44.777 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:14:44.795888 containerd[1575]: 2025-05-15 12:14:44.777 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" HandleID="k8s-pod-network.fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Workload="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 12:14:44.796094 containerd[1575]: 2025-05-15 12:14:44.780 [INFO][4708] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba385db9-9064-4a1c-b79f-e976220c4dbe", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qrkcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5bbea2adcf0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:44.796231 containerd[1575]: 2025-05-15 12:14:44.780 [INFO][4708] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 12:14:44.796231 containerd[1575]: 2025-05-15 12:14:44.780 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bbea2adcf0 ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 12:14:44.796231 containerd[1575]: 2025-05-15 12:14:44.783 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 
12:14:44.796345 containerd[1575]: 2025-05-15 12:14:44.783 [INFO][4708] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba385db9-9064-4a1c-b79f-e976220c4dbe", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584", Pod:"coredns-668d6bf9bc-qrkcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5bbea2adcf0", MAC:"66:fb:9f:67:df:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:44.796345 containerd[1575]: 2025-05-15 12:14:44.791 [INFO][4708] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" Namespace="kube-system" Pod="coredns-668d6bf9bc-qrkcw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qrkcw-eth0" May 15 12:14:44.984757 containerd[1575]: time="2025-05-15T12:14:44.984692267Z" level=info msg="connecting to shim fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584" address="unix:///run/containerd/s/9f70978bbd8370e09a5af9498889861e2736596f3d6d405d4769fcbc6916f937" namespace=k8s.io protocol=ttrpc version=3 May 15 12:14:45.012884 systemd[1]: Started cri-containerd-fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584.scope - libcontainer container fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584. 
May 15 12:14:45.026723 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:14:45.060340 containerd[1575]: time="2025-05-15T12:14:45.060184583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrkcw,Uid:ba385db9-9064-4a1c-b79f-e976220c4dbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584\"" May 15 12:14:45.061171 kubelet[2713]: E0515 12:14:45.061135 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:45.063966 containerd[1575]: time="2025-05-15T12:14:45.063914796Z" level=info msg="CreateContainer within sandbox \"fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:14:45.081885 containerd[1575]: time="2025-05-15T12:14:45.081824267Z" level=info msg="Container 8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:45.084174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545216577.mount: Deactivated successfully. 
May 15 12:14:45.090845 containerd[1575]: time="2025-05-15T12:14:45.090787844Z" level=info msg="CreateContainer within sandbox \"fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f\"" May 15 12:14:45.091378 containerd[1575]: time="2025-05-15T12:14:45.091339734Z" level=info msg="StartContainer for \"8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f\"" May 15 12:14:45.092272 containerd[1575]: time="2025-05-15T12:14:45.092234990Z" level=info msg="connecting to shim 8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f" address="unix:///run/containerd/s/9f70978bbd8370e09a5af9498889861e2736596f3d6d405d4769fcbc6916f937" protocol=ttrpc version=3 May 15 12:14:45.112967 systemd[1]: Started cri-containerd-8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f.scope - libcontainer container 8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f. 
May 15 12:14:45.151525 containerd[1575]: time="2025-05-15T12:14:45.151474326Z" level=info msg="StartContainer for \"8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f\" returns successfully" May 15 12:14:45.312881 kubelet[2713]: E0515 12:14:45.312764 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:45.327275 kubelet[2713]: I0515 12:14:45.327195 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qrkcw" podStartSLOduration=58.327175877 podStartE2EDuration="58.327175877s" podCreationTimestamp="2025-05-15 12:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:14:45.326945251 +0000 UTC m=+63.810437044" watchObservedRunningTime="2025-05-15 12:14:45.327175877 +0000 UTC m=+63.810667670" May 15 12:14:46.314855 kubelet[2713]: E0515 12:14:46.314788 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:46.512982 systemd-networkd[1509]: cali5bbea2adcf0: Gained IPv6LL May 15 12:14:46.666146 kubelet[2713]: E0515 12:14:46.666093 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:46.666668 containerd[1575]: time="2025-05-15T12:14:46.666596189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5qsh,Uid:b38549eb-99b5-40a2-ae9b-05f5e9ea660b,Namespace:kube-system,Attempt:0,}" May 15 12:14:46.748203 kubelet[2713]: I0515 12:14:46.748154 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:14:47.024399 systemd-networkd[1509]: cali389c8ac084e: 
Link UP May 15 12:14:47.024710 systemd-networkd[1509]: cali389c8ac084e: Gained carrier May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.720 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0 coredns-668d6bf9bc- kube-system b38549eb-99b5-40a2-ae9b-05f5e9ea660b 729 0 2025-05-15 12:13:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-q5qsh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali389c8ac084e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.720 [INFO][4826] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.751 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" HandleID="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Workload="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.761 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" HandleID="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" 
Workload="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308a30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-q5qsh", "timestamp":"2025-05-15 12:14:46.75098714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.761 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.761 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.761 [INFO][4842] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.763 [INFO][4842] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.861 [INFO][4842] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.866 [INFO][4842] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.868 [INFO][4842] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.871 [INFO][4842] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.871 [INFO][4842] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.873 [INFO][4842] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:46.880 [INFO][4842] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:47.016 [INFO][4842] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:47.016 [INFO][4842] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" host="localhost" May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:47.016 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:14:47.088116 containerd[1575]: 2025-05-15 12:14:47.016 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" HandleID="k8s-pod-network.169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Workload="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.089398 containerd[1575]: 2025-05-15 12:14:47.020 [INFO][4826] cni-plugin/k8s.go 386: Populated endpoint ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b38549eb-99b5-40a2-ae9b-05f5e9ea660b", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-q5qsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali389c8ac084e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:47.089398 containerd[1575]: 2025-05-15 12:14:47.020 [INFO][4826] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.089398 containerd[1575]: 2025-05-15 12:14:47.020 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali389c8ac084e ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.089398 containerd[1575]: 2025-05-15 12:14:47.023 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.089398 containerd[1575]: 2025-05-15 12:14:47.023 [INFO][4826] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b38549eb-99b5-40a2-ae9b-05f5e9ea660b", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 13, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b", Pod:"coredns-668d6bf9bc-q5qsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali389c8ac084e", MAC:"da:75:61:1e:13:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:14:47.089398 containerd[1575]: 2025-05-15 12:14:47.077 [INFO][4826] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-q5qsh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5qsh-eth0" May 15 12:14:47.316866 kubelet[2713]: E0515 12:14:47.316756 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:47.445672 containerd[1575]: time="2025-05-15T12:14:47.445589350Z" level=info msg="connecting to shim 169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b" address="unix:///run/containerd/s/415efaa3f12004359ac8ddc527b08d82a6f60a82169eb80823f8ce74f55b2409" namespace=k8s.io protocol=ttrpc version=3 May 15 12:14:47.500871 systemd[1]: Started cri-containerd-169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b.scope - libcontainer container 169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b. May 15 12:14:47.521259 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 12:14:47.594888 containerd[1575]: time="2025-05-15T12:14:47.594844657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5qsh,Uid:b38549eb-99b5-40a2-ae9b-05f5e9ea660b,Namespace:kube-system,Attempt:0,} returns sandbox id \"169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b\"" May 15 12:14:47.596414 kubelet[2713]: E0515 12:14:47.595928 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:47.598237 containerd[1575]: time="2025-05-15T12:14:47.598195502Z" level=info msg="CreateContainer within sandbox \"169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:14:47.616982 containerd[1575]: time="2025-05-15T12:14:47.614887785Z" level=info msg="Container 
c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:47.641704 containerd[1575]: time="2025-05-15T12:14:47.641608878Z" level=info msg="CreateContainer within sandbox \"169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f\"" May 15 12:14:47.644701 containerd[1575]: time="2025-05-15T12:14:47.643080826Z" level=info msg="StartContainer for \"c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f\"" May 15 12:14:47.645586 containerd[1575]: time="2025-05-15T12:14:47.645542600Z" level=info msg="connecting to shim c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f" address="unix:///run/containerd/s/415efaa3f12004359ac8ddc527b08d82a6f60a82169eb80823f8ce74f55b2409" protocol=ttrpc version=3 May 15 12:14:47.673857 systemd[1]: Started cri-containerd-c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f.scope - libcontainer container c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f. 
May 15 12:14:48.111760 containerd[1575]: time="2025-05-15T12:14:48.111702762Z" level=info msg="StartContainer for \"c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f\" returns successfully" May 15 12:14:48.320239 kubelet[2713]: E0515 12:14:48.320204 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:48.405238 containerd[1575]: time="2025-05-15T12:14:48.405092326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:48.463052 containerd[1575]: time="2025-05-15T12:14:48.462964709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 12:14:48.500136 containerd[1575]: time="2025-05-15T12:14:48.500061521Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:48.543545 containerd[1575]: time="2025-05-15T12:14:48.543466790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:48.544171 containerd[1575]: time="2025-05-15T12:14:48.544123269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 6.468124582s" May 15 12:14:48.544171 containerd[1575]: time="2025-05-15T12:14:48.544167375Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 15 12:14:48.559416 containerd[1575]: time="2025-05-15T12:14:48.559154303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 12:14:48.566024 containerd[1575]: time="2025-05-15T12:14:48.565975483Z" level=info msg="CreateContainer within sandbox \"713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 12:14:48.612299 kubelet[2713]: I0515 12:14:48.612203 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q5qsh" podStartSLOduration=61.612185155 podStartE2EDuration="1m1.612185155s" podCreationTimestamp="2025-05-15 12:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:14:48.46668916 +0000 UTC m=+66.950180973" watchObservedRunningTime="2025-05-15 12:14:48.612185155 +0000 UTC m=+67.095676948" May 15 12:14:48.632048 containerd[1575]: time="2025-05-15T12:14:48.630494789Z" level=info msg="Container 39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:48.642320 containerd[1575]: time="2025-05-15T12:14:48.642262594Z" level=info msg="CreateContainer within sandbox \"713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\"" May 15 12:14:48.643368 containerd[1575]: time="2025-05-15T12:14:48.643340337Z" level=info msg="StartContainer for \"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\"" May 15 12:14:48.645038 containerd[1575]: time="2025-05-15T12:14:48.645017841Z" level=info 
msg="connecting to shim 39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5" address="unix:///run/containerd/s/2e46afd3885c687e9cd6d6626a95c1a417689a05da9c23b8527478be76841109" protocol=ttrpc version=3 May 15 12:14:48.670830 systemd[1]: Started cri-containerd-39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5.scope - libcontainer container 39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5. May 15 12:14:48.737681 containerd[1575]: time="2025-05-15T12:14:48.737602054Z" level=info msg="StartContainer for \"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" returns successfully" May 15 12:14:48.752958 systemd-networkd[1509]: cali389c8ac084e: Gained IPv6LL May 15 12:14:48.826983 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:58336.service - OpenSSH per-connection server daemon (10.0.0.1:58336). May 15 12:14:48.887564 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 58336 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:14:48.889703 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:14:48.898395 systemd-logind[1559]: New session 12 of user core. May 15 12:14:48.901886 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 12:14:49.049307 sshd[4997]: Connection closed by 10.0.0.1 port 58336 May 15 12:14:49.049806 sshd-session[4995]: pam_unix(sshd:session): session closed for user core May 15 12:14:49.060693 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:58336.service: Deactivated successfully. May 15 12:14:49.063513 systemd[1]: session-12.scope: Deactivated successfully. May 15 12:14:49.064783 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. May 15 12:14:49.071161 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:58350.service - OpenSSH per-connection server daemon (10.0.0.1:58350). May 15 12:14:49.072234 systemd-logind[1559]: Removed session 12. 
May 15 12:14:49.131234 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 58350 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:14:49.133267 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:14:49.139040 systemd-logind[1559]: New session 13 of user core. May 15 12:14:49.150998 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 12:14:49.324313 kubelet[2713]: E0515 12:14:49.324159 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:49.516445 kubelet[2713]: I0515 12:14:49.516349 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58bb49fdbb-gjpfj" podStartSLOduration=42.007150384 podStartE2EDuration="55.516327328s" podCreationTimestamp="2025-05-15 12:13:54 +0000 UTC" firstStartedPulling="2025-05-15 12:14:35.049344135 +0000 UTC m=+53.532835928" lastFinishedPulling="2025-05-15 12:14:48.558521079 +0000 UTC m=+67.042012872" observedRunningTime="2025-05-15 12:14:49.515082283 +0000 UTC m=+67.998574076" watchObservedRunningTime="2025-05-15 12:14:49.516327328 +0000 UTC m=+67.999819121" May 15 12:14:49.521751 sshd[5013]: Connection closed by 10.0.0.1 port 58350 May 15 12:14:49.523567 sshd-session[5011]: pam_unix(sshd:session): session closed for user core May 15 12:14:49.535100 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:58350.service: Deactivated successfully. May 15 12:14:49.539338 systemd[1]: session-13.scope: Deactivated successfully. May 15 12:14:49.542522 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. May 15 12:14:49.550366 systemd-logind[1559]: Removed session 13. May 15 12:14:49.554067 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:58352.service - OpenSSH per-connection server daemon (10.0.0.1:58352). 
May 15 12:14:49.604429 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 58352 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:14:49.606001 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:14:49.611097 systemd-logind[1559]: New session 14 of user core. May 15 12:14:49.622936 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 12:14:49.753082 sshd[5026]: Connection closed by 10.0.0.1 port 58352 May 15 12:14:49.753414 sshd-session[5024]: pam_unix(sshd:session): session closed for user core May 15 12:14:49.757945 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:58352.service: Deactivated successfully. May 15 12:14:49.759870 systemd[1]: session-14.scope: Deactivated successfully. May 15 12:14:49.760791 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. May 15 12:14:49.762721 systemd-logind[1559]: Removed session 14. May 15 12:14:50.326548 kubelet[2713]: E0515 12:14:50.326512 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:50.400299 containerd[1575]: time="2025-05-15T12:14:50.400247087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"2e99dea048a0f4074fb0bb09fd41992ec611ebcd63a856ec9eb9718e831e4bec\" pid:5051 exited_at:{seconds:1747311290 nanos:399875981}" May 15 12:14:51.690396 containerd[1575]: time="2025-05-15T12:14:51.690325121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:51.691150 containerd[1575]: time="2025-05-15T12:14:51.691114674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 12:14:51.692285 containerd[1575]: 
time="2025-05-15T12:14:51.692232060Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:51.694263 containerd[1575]: time="2025-05-15T12:14:51.694226377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:51.695030 containerd[1575]: time="2025-05-15T12:14:51.695002885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.135800409s" May 15 12:14:51.695030 containerd[1575]: time="2025-05-15T12:14:51.695035818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 12:14:51.697192 containerd[1575]: time="2025-05-15T12:14:51.697131942Z" level=info msg="CreateContainer within sandbox \"15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 12:14:51.883731 containerd[1575]: time="2025-05-15T12:14:51.883629618Z" level=info msg="Container 6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:51.928970 containerd[1575]: time="2025-05-15T12:14:51.928905537Z" level=info msg="CreateContainer within sandbox \"15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8\"" May 15 
12:14:51.929612 containerd[1575]: time="2025-05-15T12:14:51.929535151Z" level=info msg="StartContainer for \"6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8\"" May 15 12:14:51.931454 containerd[1575]: time="2025-05-15T12:14:51.931418324Z" level=info msg="connecting to shim 6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8" address="unix:///run/containerd/s/9ded0d2eb5ca9f21842b79e89ddd354c3bcf92a39b4a8235fba914ff8e140ab4" protocol=ttrpc version=3 May 15 12:14:51.955115 systemd[1]: Started cri-containerd-6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8.scope - libcontainer container 6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8. May 15 12:14:52.202572 containerd[1575]: time="2025-05-15T12:14:52.202520203Z" level=info msg="StartContainer for \"6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8\" returns successfully" May 15 12:14:52.204926 containerd[1575]: time="2025-05-15T12:14:52.204887396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 12:14:54.666149 kubelet[2713]: E0515 12:14:54.666108 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:54.773320 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:57326.service - OpenSSH per-connection server daemon (10.0.0.1:57326). May 15 12:14:54.830331 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 57326 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:14:54.831956 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:14:54.836632 systemd-logind[1559]: New session 15 of user core. May 15 12:14:54.847909 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 12:14:54.974949 sshd[5110]: Connection closed by 10.0.0.1 port 57326 May 15 12:14:54.975195 sshd-session[5108]: pam_unix(sshd:session): session closed for user core May 15 12:14:54.979512 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:57326.service: Deactivated successfully. May 15 12:14:54.981480 systemd[1]: session-15.scope: Deactivated successfully. May 15 12:14:54.982384 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. May 15 12:14:54.983781 systemd-logind[1559]: Removed session 15. May 15 12:14:55.866877 containerd[1575]: time="2025-05-15T12:14:55.866820610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:55.867903 containerd[1575]: time="2025-05-15T12:14:55.867803701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 15 12:14:55.869421 containerd[1575]: time="2025-05-15T12:14:55.869392617Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:55.871718 containerd[1575]: time="2025-05-15T12:14:55.871664327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:14:55.872217 containerd[1575]: time="2025-05-15T12:14:55.872191561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 
3.6672649s" May 15 12:14:55.872217 containerd[1575]: time="2025-05-15T12:14:55.872223232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 12:14:55.888456 containerd[1575]: time="2025-05-15T12:14:55.888411771Z" level=info msg="CreateContainer within sandbox \"15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 12:14:55.896147 containerd[1575]: time="2025-05-15T12:14:55.896108186Z" level=info msg="Container 37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c: CDI devices from CRI Config.CDIDevices: []" May 15 12:14:55.906882 containerd[1575]: time="2025-05-15T12:14:55.906835008Z" level=info msg="CreateContainer within sandbox \"15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c\"" May 15 12:14:55.907298 containerd[1575]: time="2025-05-15T12:14:55.907272229Z" level=info msg="StartContainer for \"37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c\"" May 15 12:14:55.908635 containerd[1575]: time="2025-05-15T12:14:55.908608389Z" level=info msg="connecting to shim 37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c" address="unix:///run/containerd/s/9ded0d2eb5ca9f21842b79e89ddd354c3bcf92a39b4a8235fba914ff8e140ab4" protocol=ttrpc version=3 May 15 12:14:55.935830 systemd[1]: Started cri-containerd-37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c.scope - libcontainer container 37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c. 
May 15 12:14:55.978941 containerd[1575]: time="2025-05-15T12:14:55.978905147Z" level=info msg="StartContainer for \"37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c\" returns successfully" May 15 12:14:56.270339 kubelet[2713]: I0515 12:14:56.270230 2713 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 12:14:56.270339 kubelet[2713]: I0515 12:14:56.270263 2713 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 12:14:56.371384 kubelet[2713]: I0515 12:14:56.371322 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c7xqh" podStartSLOduration=50.574576658 podStartE2EDuration="1m3.371306008s" podCreationTimestamp="2025-05-15 12:13:53 +0000 UTC" firstStartedPulling="2025-05-15 12:14:43.082982256 +0000 UTC m=+61.566474049" lastFinishedPulling="2025-05-15 12:14:55.879711606 +0000 UTC m=+74.363203399" observedRunningTime="2025-05-15 12:14:56.370336705 +0000 UTC m=+74.853828498" watchObservedRunningTime="2025-05-15 12:14:56.371306008 +0000 UTC m=+74.854797791" May 15 12:14:58.666868 kubelet[2713]: E0515 12:14:58.666822 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:14:59.997311 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:57334.service - OpenSSH per-connection server daemon (10.0.0.1:57334). May 15 12:15:00.058389 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 57334 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:00.060345 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:00.067019 systemd-logind[1559]: New session 16 of user core. 
May 15 12:15:00.078904 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 12:15:00.244482 sshd[5169]: Connection closed by 10.0.0.1 port 57334 May 15 12:15:00.244818 sshd-session[5167]: pam_unix(sshd:session): session closed for user core May 15 12:15:00.248607 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:57334.service: Deactivated successfully. May 15 12:15:00.250672 systemd[1]: session-16.scope: Deactivated successfully. May 15 12:15:00.251525 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. May 15 12:15:00.252756 systemd-logind[1559]: Removed session 16. May 15 12:15:03.410247 containerd[1575]: time="2025-05-15T12:15:03.410182410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"f44ee82c710b0b12d7a87e6f9a3423ec4874c897ffbc9f1374361e4e15c59a60\" pid:5193 exit_status:1 exited_at:{seconds:1747311303 nanos:409797252}" May 15 12:15:04.666120 kubelet[2713]: E0515 12:15:04.666054 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:15:05.259772 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:37870.service - OpenSSH per-connection server daemon (10.0.0.1:37870). May 15 12:15:05.307535 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 37870 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:05.309234 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:05.313614 systemd-logind[1559]: New session 17 of user core. May 15 12:15:05.330807 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 15 12:15:05.477911 sshd[5208]: Connection closed by 10.0.0.1 port 37870 May 15 12:15:05.478240 sshd-session[5206]: pam_unix(sshd:session): session closed for user core May 15 12:15:05.483589 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:37870.service: Deactivated successfully. May 15 12:15:05.485818 systemd[1]: session-17.scope: Deactivated successfully. May 15 12:15:05.486695 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. May 15 12:15:05.488019 systemd-logind[1559]: Removed session 17. May 15 12:15:06.277132 containerd[1575]: time="2025-05-15T12:15:06.277091795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"1a3441a57ad3f47a89e84bc10fd4fe00a0979cbff2bf8acd2ff79bf495fbda7b\" pid:5232 exited_at:{seconds:1747311306 nanos:276932882}" May 15 12:15:08.666223 kubelet[2713]: E0515 12:15:08.666169 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:15:10.496006 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:37886.service - OpenSSH per-connection server daemon (10.0.0.1:37886). May 15 12:15:10.547804 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 37886 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:10.549394 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:10.553985 systemd-logind[1559]: New session 18 of user core. May 15 12:15:10.563851 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 12:15:10.720072 sshd[5247]: Connection closed by 10.0.0.1 port 37886 May 15 12:15:10.720407 sshd-session[5245]: pam_unix(sshd:session): session closed for user core May 15 12:15:10.724999 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:37886.service: Deactivated successfully. 
May 15 12:15:10.727187 systemd[1]: session-18.scope: Deactivated successfully. May 15 12:15:10.728179 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit. May 15 12:15:10.729488 systemd-logind[1559]: Removed session 18. May 15 12:15:15.733109 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:43462.service - OpenSSH per-connection server daemon (10.0.0.1:43462). May 15 12:15:15.785718 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 43462 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:15.787445 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:15.792907 systemd-logind[1559]: New session 19 of user core. May 15 12:15:15.806955 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 12:15:15.937294 sshd[5268]: Connection closed by 10.0.0.1 port 43462 May 15 12:15:15.937773 sshd-session[5266]: pam_unix(sshd:session): session closed for user core May 15 12:15:15.942214 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:43462.service: Deactivated successfully. May 15 12:15:15.944255 systemd[1]: session-19.scope: Deactivated successfully. May 15 12:15:15.945074 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit. May 15 12:15:15.946384 systemd-logind[1559]: Removed session 19. May 15 12:15:20.386399 containerd[1575]: time="2025-05-15T12:15:20.386338687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"0e529b3c9c667726090b4c57824349872bf1df32c0a21091fd416d8357469f0a\" pid:5296 exited_at:{seconds:1747311320 nanos:385975477}" May 15 12:15:20.954929 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:43476.service - OpenSSH per-connection server daemon (10.0.0.1:43476). 
May 15 12:15:21.017370 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 43476 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:21.019028 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:21.023540 systemd-logind[1559]: New session 20 of user core. May 15 12:15:21.030803 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 12:15:21.212351 sshd[5310]: Connection closed by 10.0.0.1 port 43476 May 15 12:15:21.212625 sshd-session[5308]: pam_unix(sshd:session): session closed for user core May 15 12:15:21.217492 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:43476.service: Deactivated successfully. May 15 12:15:21.220147 systemd[1]: session-20.scope: Deactivated successfully. May 15 12:15:21.222979 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit. May 15 12:15:21.225693 systemd-logind[1559]: Removed session 20. May 15 12:15:26.231227 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:48354.service - OpenSSH per-connection server daemon (10.0.0.1:48354). May 15 12:15:26.276635 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 48354 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:26.278379 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:26.283096 systemd-logind[1559]: New session 21 of user core. May 15 12:15:26.293834 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 12:15:26.415951 sshd[5325]: Connection closed by 10.0.0.1 port 48354 May 15 12:15:26.416345 sshd-session[5323]: pam_unix(sshd:session): session closed for user core May 15 12:15:26.420679 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:48354.service: Deactivated successfully. May 15 12:15:26.423098 systemd[1]: session-21.scope: Deactivated successfully. May 15 12:15:26.424090 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit. 
May 15 12:15:26.425814 systemd-logind[1559]: Removed session 21. May 15 12:15:31.433266 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:48370.service - OpenSSH per-connection server daemon (10.0.0.1:48370). May 15 12:15:31.485436 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 48370 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:31.487030 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:31.491074 systemd-logind[1559]: New session 22 of user core. May 15 12:15:31.499775 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 12:15:31.617762 sshd[5342]: Connection closed by 10.0.0.1 port 48370 May 15 12:15:31.618101 sshd-session[5340]: pam_unix(sshd:session): session closed for user core May 15 12:15:31.622377 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:48370.service: Deactivated successfully. May 15 12:15:31.624765 systemd[1]: session-22.scope: Deactivated successfully. May 15 12:15:31.625739 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit. May 15 12:15:31.627550 systemd-logind[1559]: Removed session 22. May 15 12:15:33.388596 containerd[1575]: time="2025-05-15T12:15:33.388506090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"7524e9eed7df3c543ee1e03f949e96ab1340fc2b4860a53b78a99667ebcdace3\" pid:5366 exited_at:{seconds:1747311333 nanos:388188768}" May 15 12:15:33.390774 kubelet[2713]: E0515 12:15:33.390750 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:15:36.635612 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:40436.service - OpenSSH per-connection server daemon (10.0.0.1:40436). 
May 15 12:15:36.715080 sshd[5379]: Accepted publickey for core from 10.0.0.1 port 40436 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:36.716739 sshd-session[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:36.721857 systemd-logind[1559]: New session 23 of user core. May 15 12:15:36.731813 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 12:15:36.874311 sshd[5381]: Connection closed by 10.0.0.1 port 40436 May 15 12:15:36.874712 sshd-session[5379]: pam_unix(sshd:session): session closed for user core May 15 12:15:36.879611 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:40436.service: Deactivated successfully. May 15 12:15:36.881741 systemd[1]: session-23.scope: Deactivated successfully. May 15 12:15:36.882501 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit. May 15 12:15:36.883734 systemd-logind[1559]: Removed session 23. May 15 12:15:41.895011 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:40452.service - OpenSSH per-connection server daemon (10.0.0.1:40452). May 15 12:15:41.950050 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 40452 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:41.951870 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:41.957900 systemd-logind[1559]: New session 24 of user core. May 15 12:15:41.963875 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 12:15:42.092269 sshd[5398]: Connection closed by 10.0.0.1 port 40452 May 15 12:15:42.092733 sshd-session[5396]: pam_unix(sshd:session): session closed for user core May 15 12:15:42.096591 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:40452.service: Deactivated successfully. May 15 12:15:42.098472 systemd[1]: session-24.scope: Deactivated successfully. May 15 12:15:42.099270 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit. 
May 15 12:15:42.100501 systemd-logind[1559]: Removed session 24. May 15 12:15:47.110267 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:34028.service - OpenSSH per-connection server daemon (10.0.0.1:34028). May 15 12:15:47.159509 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 34028 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:47.161214 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:47.166167 systemd-logind[1559]: New session 25 of user core. May 15 12:15:47.172833 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 12:15:47.293343 sshd[5413]: Connection closed by 10.0.0.1 port 34028 May 15 12:15:47.293780 sshd-session[5411]: pam_unix(sshd:session): session closed for user core May 15 12:15:47.298006 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:34028.service: Deactivated successfully. May 15 12:15:47.300239 systemd[1]: session-25.scope: Deactivated successfully. May 15 12:15:47.301170 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit. May 15 12:15:47.302586 systemd-logind[1559]: Removed session 25. May 15 12:15:47.669216 kubelet[2713]: E0515 12:15:47.669155 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:15:50.394323 containerd[1575]: time="2025-05-15T12:15:50.394274257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"f3e57acf82bcd43c859cd6492c7857f7b9777a058db53e1b23166b9f52ff5162\" pid:5439 exited_at:{seconds:1747311350 nanos:393955514}" May 15 12:15:52.309820 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:34040.service - OpenSSH per-connection server daemon (10.0.0.1:34040). 
May 15 12:15:52.347052 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 34040 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:52.391390 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:52.396153 systemd-logind[1559]: New session 26 of user core. May 15 12:15:52.411907 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 12:15:52.533055 sshd[5452]: Connection closed by 10.0.0.1 port 34040 May 15 12:15:52.533370 sshd-session[5450]: pam_unix(sshd:session): session closed for user core May 15 12:15:52.537870 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit. May 15 12:15:52.538215 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:34040.service: Deactivated successfully. May 15 12:15:52.540439 systemd[1]: session-26.scope: Deactivated successfully. May 15 12:15:52.541900 systemd-logind[1559]: Removed session 26. May 15 12:15:57.550340 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:55332.service - OpenSSH per-connection server daemon (10.0.0.1:55332). May 15 12:15:57.602109 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 55332 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:15:57.603807 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:15:57.608250 systemd-logind[1559]: New session 27 of user core. May 15 12:15:57.617877 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 12:15:57.735709 sshd[5473]: Connection closed by 10.0.0.1 port 55332 May 15 12:15:57.736045 sshd-session[5471]: pam_unix(sshd:session): session closed for user core May 15 12:15:57.740174 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:55332.service: Deactivated successfully. May 15 12:15:57.742449 systemd[1]: session-27.scope: Deactivated successfully. May 15 12:15:57.743354 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit. 
May 15 12:15:57.744824 systemd-logind[1559]: Removed session 27. May 15 12:16:00.666171 kubelet[2713]: E0515 12:16:00.666107 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:16:02.753734 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:55346.service - OpenSSH per-connection server daemon (10.0.0.1:55346). May 15 12:16:02.810554 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 55346 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:02.812300 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:02.817223 systemd-logind[1559]: New session 28 of user core. May 15 12:16:02.830862 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 12:16:02.946227 sshd[5489]: Connection closed by 10.0.0.1 port 55346 May 15 12:16:02.946629 sshd-session[5487]: pam_unix(sshd:session): session closed for user core May 15 12:16:02.951994 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:55346.service: Deactivated successfully. May 15 12:16:02.954120 systemd[1]: session-28.scope: Deactivated successfully. May 15 12:16:02.955233 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit. May 15 12:16:02.957069 systemd-logind[1559]: Removed session 28. 
May 15 12:16:03.332243 containerd[1575]: time="2025-05-15T12:16:03.332197318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"44c9bdad80fb45fc40a480ccce5762d746b42a1a08e25f1ae5bf2b032cac5fa3\" pid:5515 exited_at:{seconds:1747311363 nanos:331888925}" May 15 12:16:03.666014 kubelet[2713]: E0515 12:16:03.665867 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:16:06.263993 containerd[1575]: time="2025-05-15T12:16:06.263886557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"b26098d6c5784dcaeec54d6a25e93281c3bce538a07134e4db54dd6c85e2a199\" pid:5539 exited_at:{seconds:1747311366 nanos:263662114}" May 15 12:16:07.963369 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:58252.service - OpenSSH per-connection server daemon (10.0.0.1:58252). May 15 12:16:08.009770 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 58252 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:08.011298 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:08.016224 systemd-logind[1559]: New session 29 of user core. May 15 12:16:08.025830 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 12:16:08.148047 sshd[5560]: Connection closed by 10.0.0.1 port 58252 May 15 12:16:08.148500 sshd-session[5557]: pam_unix(sshd:session): session closed for user core May 15 12:16:08.153371 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:58252.service: Deactivated successfully. May 15 12:16:08.156268 systemd[1]: session-29.scope: Deactivated successfully. May 15 12:16:08.157444 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit. 
May 15 12:16:08.159259 systemd-logind[1559]: Removed session 29. May 15 12:16:11.668983 kubelet[2713]: E0515 12:16:11.668918 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:16:13.166214 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:58264.service - OpenSSH per-connection server daemon (10.0.0.1:58264). May 15 12:16:13.313721 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 58264 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:13.315120 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:13.319548 systemd-logind[1559]: New session 30 of user core. May 15 12:16:13.328758 systemd[1]: Started session-30.scope - Session 30 of User core. May 15 12:16:13.440174 sshd[5586]: Connection closed by 10.0.0.1 port 58264 May 15 12:16:13.440392 sshd-session[5584]: pam_unix(sshd:session): session closed for user core May 15 12:16:13.444178 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:58264.service: Deactivated successfully. May 15 12:16:13.446199 systemd[1]: session-30.scope: Deactivated successfully. May 15 12:16:13.447263 systemd-logind[1559]: Session 30 logged out. Waiting for processes to exit. May 15 12:16:13.448900 systemd-logind[1559]: Removed session 30. 
May 15 12:16:14.666536 kubelet[2713]: E0515 12:16:14.666476 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:16:16.667002 kubelet[2713]: E0515 12:16:16.666938 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:16:16.667421 kubelet[2713]: E0515 12:16:16.667025 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 12:16:18.458098 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:60468.service - OpenSSH per-connection server daemon (10.0.0.1:60468). May 15 12:16:18.503796 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 60468 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:18.505323 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:18.509496 systemd-logind[1559]: New session 31 of user core. May 15 12:16:18.518820 systemd[1]: Started session-31.scope - Session 31 of User core. May 15 12:16:18.638898 sshd[5603]: Connection closed by 10.0.0.1 port 60468 May 15 12:16:18.639211 sshd-session[5601]: pam_unix(sshd:session): session closed for user core May 15 12:16:18.642945 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:60468.service: Deactivated successfully. May 15 12:16:18.645040 systemd[1]: session-31.scope: Deactivated successfully. May 15 12:16:18.645925 systemd-logind[1559]: Session 31 logged out. Waiting for processes to exit. May 15 12:16:18.647290 systemd-logind[1559]: Removed session 31. 
May 15 12:16:20.371015 containerd[1575]: time="2025-05-15T12:16:20.370969156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"67e00a2221f236396a82f541a63624d64ebb8c1e7d1583b066aef0fe4e709f21\" pid:5628 exited_at:{seconds:1747311380 nanos:370772606}" May 15 12:16:23.661022 systemd[1]: Started sshd@31-10.0.0.15:22-10.0.0.1:35906.service - OpenSSH per-connection server daemon (10.0.0.1:35906). May 15 12:16:23.699882 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 35906 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:23.701274 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:23.705567 systemd-logind[1559]: New session 32 of user core. May 15 12:16:23.712839 systemd[1]: Started session-32.scope - Session 32 of User core. May 15 12:16:23.835118 sshd[5641]: Connection closed by 10.0.0.1 port 35906 May 15 12:16:23.835580 sshd-session[5639]: pam_unix(sshd:session): session closed for user core May 15 12:16:23.840231 systemd[1]: sshd@31-10.0.0.15:22-10.0.0.1:35906.service: Deactivated successfully. May 15 12:16:23.842615 systemd[1]: session-32.scope: Deactivated successfully. May 15 12:16:23.844345 systemd-logind[1559]: Session 32 logged out. Waiting for processes to exit. May 15 12:16:23.846420 systemd-logind[1559]: Removed session 32. May 15 12:16:28.849168 systemd[1]: Started sshd@32-10.0.0.15:22-10.0.0.1:35914.service - OpenSSH per-connection server daemon (10.0.0.1:35914). May 15 12:16:28.908460 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 35914 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:28.910001 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:28.914278 systemd-logind[1559]: New session 33 of user core. 
May 15 12:16:28.924865 systemd[1]: Started session-33.scope - Session 33 of User core. May 15 12:16:29.048700 sshd[5656]: Connection closed by 10.0.0.1 port 35914 May 15 12:16:29.049097 sshd-session[5654]: pam_unix(sshd:session): session closed for user core May 15 12:16:29.054954 systemd[1]: sshd@32-10.0.0.15:22-10.0.0.1:35914.service: Deactivated successfully. May 15 12:16:29.058766 systemd[1]: session-33.scope: Deactivated successfully. May 15 12:16:29.059938 systemd-logind[1559]: Session 33 logged out. Waiting for processes to exit. May 15 12:16:29.061782 systemd-logind[1559]: Removed session 33. May 15 12:16:33.339060 containerd[1575]: time="2025-05-15T12:16:33.339011761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"d667b034853b0dbe32dfcaa5a8caf945da4464f8b2351553bcc1aded34fcc2c5\" pid:5680 exited_at:{seconds:1747311393 nanos:338511518}" May 15 12:16:34.064601 systemd[1]: Started sshd@33-10.0.0.15:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744). May 15 12:16:34.119721 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI May 15 12:16:34.121861 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:16:34.127735 systemd-logind[1559]: New session 34 of user core. May 15 12:16:34.142968 systemd[1]: Started session-34.scope - Session 34 of User core. May 15 12:16:34.328537 sshd[5698]: Connection closed by 10.0.0.1 port 50744 May 15 12:16:34.328965 sshd-session[5696]: pam_unix(sshd:session): session closed for user core May 15 12:16:34.335901 systemd[1]: sshd@33-10.0.0.15:22-10.0.0.1:50744.service: Deactivated successfully. May 15 12:16:34.338752 systemd[1]: session-34.scope: Deactivated successfully. May 15 12:16:34.340001 systemd-logind[1559]: Session 34 logged out. Waiting for processes to exit. 
May 15 12:16:34.341989 systemd-logind[1559]: Removed session 34.
May 15 12:16:39.343407 systemd[1]: Started sshd@34-10.0.0.15:22-10.0.0.1:50746.service - OpenSSH per-connection server daemon (10.0.0.1:50746).
May 15 12:16:39.434319 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 50746 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:16:39.436296 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:16:39.443611 systemd-logind[1559]: New session 35 of user core.
May 15 12:16:39.456948 systemd[1]: Started session-35.scope - Session 35 of User core.
May 15 12:16:39.592300 sshd[5713]: Connection closed by 10.0.0.1 port 50746
May 15 12:16:39.592736 sshd-session[5711]: pam_unix(sshd:session): session closed for user core
May 15 12:16:39.597197 systemd[1]: sshd@34-10.0.0.15:22-10.0.0.1:50746.service: Deactivated successfully.
May 15 12:16:39.599386 systemd[1]: session-35.scope: Deactivated successfully.
May 15 12:16:39.600296 systemd-logind[1559]: Session 35 logged out. Waiting for processes to exit.
May 15 12:16:39.602203 systemd-logind[1559]: Removed session 35.
May 15 12:16:44.606004 systemd[1]: Started sshd@35-10.0.0.15:22-10.0.0.1:53790.service - OpenSSH per-connection server daemon (10.0.0.1:53790).
May 15 12:16:44.644096 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 53790 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:16:44.652555 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:16:44.658775 systemd-logind[1559]: New session 36 of user core.
May 15 12:16:44.669065 systemd[1]: Started session-36.scope - Session 36 of User core.
May 15 12:16:44.787134 sshd[5730]: Connection closed by 10.0.0.1 port 53790
May 15 12:16:44.787483 sshd-session[5728]: pam_unix(sshd:session): session closed for user core
May 15 12:16:44.791709 systemd[1]: sshd@35-10.0.0.15:22-10.0.0.1:53790.service: Deactivated successfully.
May 15 12:16:44.793705 systemd[1]: session-36.scope: Deactivated successfully.
May 15 12:16:44.794607 systemd-logind[1559]: Session 36 logged out. Waiting for processes to exit.
May 15 12:16:44.795963 systemd-logind[1559]: Removed session 36.
May 15 12:16:49.801738 systemd[1]: Started sshd@36-10.0.0.15:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800).
May 15 12:16:49.852918 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:16:49.854889 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:16:49.860065 systemd-logind[1559]: New session 37 of user core.
May 15 12:16:49.871794 systemd[1]: Started session-37.scope - Session 37 of User core.
May 15 12:16:50.015800 sshd[5748]: Connection closed by 10.0.0.1 port 53800
May 15 12:16:50.016166 sshd-session[5746]: pam_unix(sshd:session): session closed for user core
May 15 12:16:50.021417 systemd[1]: sshd@36-10.0.0.15:22-10.0.0.1:53800.service: Deactivated successfully.
May 15 12:16:50.023916 systemd[1]: session-37.scope: Deactivated successfully.
May 15 12:16:50.025029 systemd-logind[1559]: Session 37 logged out. Waiting for processes to exit.
May 15 12:16:50.026792 systemd-logind[1559]: Removed session 37.
May 15 12:16:50.366044 containerd[1575]: time="2025-05-15T12:16:50.365983592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"f557a41be781d22d174bcd5af769b297e809cd3ad320ca60a6d87e35deadf542\" pid:5773 exited_at:{seconds:1747311410 nanos:365764760}"
May 15 12:16:52.666154 kubelet[2713]: E0515 12:16:52.666104 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:16:55.034396 systemd[1]: Started sshd@37-10.0.0.15:22-10.0.0.1:57162.service - OpenSSH per-connection server daemon (10.0.0.1:57162).
May 15 12:16:55.080891 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:16:55.083264 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:16:55.089318 systemd-logind[1559]: New session 38 of user core.
May 15 12:16:55.098964 systemd[1]: Started session-38.scope - Session 38 of User core.
May 15 12:16:55.228708 sshd[5786]: Connection closed by 10.0.0.1 port 57162
May 15 12:16:55.229042 sshd-session[5784]: pam_unix(sshd:session): session closed for user core
May 15 12:16:55.233757 systemd[1]: sshd@37-10.0.0.15:22-10.0.0.1:57162.service: Deactivated successfully.
May 15 12:16:55.236136 systemd[1]: session-38.scope: Deactivated successfully.
May 15 12:16:55.237934 systemd-logind[1559]: Session 38 logged out. Waiting for processes to exit.
May 15 12:16:55.239863 systemd-logind[1559]: Removed session 38.
May 15 12:16:57.666498 kubelet[2713]: E0515 12:16:57.666461 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:00.244266 systemd[1]: Started sshd@38-10.0.0.15:22-10.0.0.1:57178.service - OpenSSH per-connection server daemon (10.0.0.1:57178).
May 15 12:17:00.297085 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 57178 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:00.299549 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:00.305220 systemd-logind[1559]: New session 39 of user core.
May 15 12:17:00.320985 systemd[1]: Started session-39.scope - Session 39 of User core.
May 15 12:17:00.447457 sshd[5801]: Connection closed by 10.0.0.1 port 57178
May 15 12:17:00.447873 sshd-session[5799]: pam_unix(sshd:session): session closed for user core
May 15 12:17:00.453435 systemd[1]: sshd@38-10.0.0.15:22-10.0.0.1:57178.service: Deactivated successfully.
May 15 12:17:00.455967 systemd[1]: session-39.scope: Deactivated successfully.
May 15 12:17:00.456967 systemd-logind[1559]: Session 39 logged out. Waiting for processes to exit.
May 15 12:17:00.458769 systemd-logind[1559]: Removed session 39.
May 15 12:17:03.335934 containerd[1575]: time="2025-05-15T12:17:03.335880119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"b0a0c1d4480e78e7f8dbf6f4f4f0f467079fde92e53e0b8e7baf3b98d06db501\" pid:5825 exited_at:{seconds:1747311423 nanos:335508844}"
May 15 12:17:04.666466 kubelet[2713]: E0515 12:17:04.666415 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:05.463111 systemd[1]: Started sshd@39-10.0.0.15:22-10.0.0.1:35884.service - OpenSSH per-connection server daemon (10.0.0.1:35884).
May 15 12:17:05.515148 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 35884 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:05.562354 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:05.567252 systemd-logind[1559]: New session 40 of user core.
May 15 12:17:05.577853 systemd[1]: Started session-40.scope - Session 40 of User core.
May 15 12:17:05.691811 sshd[5840]: Connection closed by 10.0.0.1 port 35884
May 15 12:17:05.692167 sshd-session[5838]: pam_unix(sshd:session): session closed for user core
May 15 12:17:05.696552 systemd[1]: sshd@39-10.0.0.15:22-10.0.0.1:35884.service: Deactivated successfully.
May 15 12:17:05.698783 systemd[1]: session-40.scope: Deactivated successfully.
May 15 12:17:05.699672 systemd-logind[1559]: Session 40 logged out. Waiting for processes to exit.
May 15 12:17:05.700950 systemd-logind[1559]: Removed session 40.
May 15 12:17:06.261538 containerd[1575]: time="2025-05-15T12:17:06.261462940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"df77e169c86723f10726cf430b8dd51f2a1792eb82ea537553b1ef02dab3f1ff\" pid:5864 exited_at:{seconds:1747311426 nanos:261219633}"
May 15 12:17:10.713632 systemd[1]: Started sshd@40-10.0.0.15:22-10.0.0.1:35890.service - OpenSSH per-connection server daemon (10.0.0.1:35890).
May 15 12:17:10.761723 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 35890 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:10.838446 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:10.842829 systemd-logind[1559]: New session 41 of user core.
May 15 12:17:10.851783 systemd[1]: Started session-41.scope - Session 41 of User core.
May 15 12:17:10.958267 sshd[5877]: Connection closed by 10.0.0.1 port 35890
May 15 12:17:10.958588 sshd-session[5875]: pam_unix(sshd:session): session closed for user core
May 15 12:17:10.962201 systemd[1]: sshd@40-10.0.0.15:22-10.0.0.1:35890.service: Deactivated successfully.
May 15 12:17:10.964110 systemd[1]: session-41.scope: Deactivated successfully.
May 15 12:17:10.964988 systemd-logind[1559]: Session 41 logged out. Waiting for processes to exit.
May 15 12:17:10.966113 systemd-logind[1559]: Removed session 41.
May 15 12:17:15.977250 systemd[1]: Started sshd@41-10.0.0.15:22-10.0.0.1:35844.service - OpenSSH per-connection server daemon (10.0.0.1:35844).
May 15 12:17:16.024106 sshd[5896]: Accepted publickey for core from 10.0.0.1 port 35844 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:16.026181 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:16.031504 systemd-logind[1559]: New session 42 of user core.
May 15 12:17:16.036859 systemd[1]: Started session-42.scope - Session 42 of User core.
May 15 12:17:16.161383 sshd[5898]: Connection closed by 10.0.0.1 port 35844
May 15 12:17:16.161721 sshd-session[5896]: pam_unix(sshd:session): session closed for user core
May 15 12:17:16.167085 systemd[1]: sshd@41-10.0.0.15:22-10.0.0.1:35844.service: Deactivated successfully.
May 15 12:17:16.169342 systemd[1]: session-42.scope: Deactivated successfully.
May 15 12:17:16.170310 systemd-logind[1559]: Session 42 logged out. Waiting for processes to exit.
May 15 12:17:16.171781 systemd-logind[1559]: Removed session 42.
May 15 12:17:19.666784 kubelet[2713]: E0515 12:17:19.666716 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:20.368025 containerd[1575]: time="2025-05-15T12:17:20.367967765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"49630e98309983dd0fb309b4fb35662424d9a2e0d4a26d5cecdf37e67f7e3b11\" pid:5925 exited_at:{seconds:1747311440 nanos:367687769}"
May 15 12:17:20.666453 kubelet[2713]: E0515 12:17:20.666326 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:21.180031 systemd[1]: Started sshd@42-10.0.0.15:22-10.0.0.1:35846.service - OpenSSH per-connection server daemon (10.0.0.1:35846).
May 15 12:17:21.224877 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 35846 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:21.226610 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:21.231159 systemd-logind[1559]: New session 43 of user core.
May 15 12:17:21.240823 systemd[1]: Started session-43.scope - Session 43 of User core.
May 15 12:17:21.364427 sshd[5938]: Connection closed by 10.0.0.1 port 35846
May 15 12:17:21.364826 sshd-session[5936]: pam_unix(sshd:session): session closed for user core
May 15 12:17:21.370302 systemd[1]: sshd@42-10.0.0.15:22-10.0.0.1:35846.service: Deactivated successfully.
May 15 12:17:21.373072 systemd[1]: session-43.scope: Deactivated successfully.
May 15 12:17:21.374009 systemd-logind[1559]: Session 43 logged out. Waiting for processes to exit.
May 15 12:17:21.376118 systemd-logind[1559]: Removed session 43.
May 15 12:17:24.666106 kubelet[2713]: E0515 12:17:24.666042 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:26.378926 systemd[1]: Started sshd@43-10.0.0.15:22-10.0.0.1:43986.service - OpenSSH per-connection server daemon (10.0.0.1:43986).
May 15 12:17:26.432765 sshd[5952]: Accepted publickey for core from 10.0.0.1 port 43986 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:26.434563 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:26.439702 systemd-logind[1559]: New session 44 of user core.
May 15 12:17:26.453948 systemd[1]: Started session-44.scope - Session 44 of User core.
May 15 12:17:26.574756 sshd[5954]: Connection closed by 10.0.0.1 port 43986
May 15 12:17:26.575193 sshd-session[5952]: pam_unix(sshd:session): session closed for user core
May 15 12:17:26.580575 systemd[1]: sshd@43-10.0.0.15:22-10.0.0.1:43986.service: Deactivated successfully.
May 15 12:17:26.583380 systemd[1]: session-44.scope: Deactivated successfully.
May 15 12:17:26.584553 systemd-logind[1559]: Session 44 logged out. Waiting for processes to exit.
May 15 12:17:26.586321 systemd-logind[1559]: Removed session 44.
May 15 12:17:29.667150 kubelet[2713]: E0515 12:17:29.667030 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:31.589492 systemd[1]: Started sshd@44-10.0.0.15:22-10.0.0.1:43994.service - OpenSSH per-connection server daemon (10.0.0.1:43994).
May 15 12:17:31.644192 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 43994 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:31.645757 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:31.650579 systemd-logind[1559]: New session 45 of user core.
May 15 12:17:31.663923 systemd[1]: Started session-45.scope - Session 45 of User core.
May 15 12:17:31.798954 sshd[5971]: Connection closed by 10.0.0.1 port 43994
May 15 12:17:31.799350 sshd-session[5969]: pam_unix(sshd:session): session closed for user core
May 15 12:17:31.803827 systemd[1]: sshd@44-10.0.0.15:22-10.0.0.1:43994.service: Deactivated successfully.
May 15 12:17:31.805804 systemd[1]: session-45.scope: Deactivated successfully.
May 15 12:17:31.806803 systemd-logind[1559]: Session 45 logged out. Waiting for processes to exit.
May 15 12:17:31.808182 systemd-logind[1559]: Removed session 45.
May 15 12:17:33.338758 containerd[1575]: time="2025-05-15T12:17:33.338689077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"b21d1386e1a81151ca88fab20b920d3749982699d82c23f912eb33485a049ca5\" pid:5995 exited_at:{seconds:1747311453 nanos:338300023}"
May 15 12:17:33.666849 kubelet[2713]: E0515 12:17:33.666718 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:17:36.815602 systemd[1]: Started sshd@45-10.0.0.15:22-10.0.0.1:41492.service - OpenSSH per-connection server daemon (10.0.0.1:41492).
May 15 12:17:36.870850 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 41492 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:36.873025 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:36.878398 systemd-logind[1559]: New session 46 of user core.
May 15 12:17:36.889031 systemd[1]: Started session-46.scope - Session 46 of User core.
May 15 12:17:37.008319 sshd[6010]: Connection closed by 10.0.0.1 port 41492
May 15 12:17:37.008704 sshd-session[6008]: pam_unix(sshd:session): session closed for user core
May 15 12:17:37.013003 systemd[1]: sshd@45-10.0.0.15:22-10.0.0.1:41492.service: Deactivated successfully.
May 15 12:17:37.015305 systemd[1]: session-46.scope: Deactivated successfully.
May 15 12:17:37.016270 systemd-logind[1559]: Session 46 logged out. Waiting for processes to exit.
May 15 12:17:37.017740 systemd-logind[1559]: Removed session 46.
May 15 12:17:42.031563 systemd[1]: Started sshd@46-10.0.0.15:22-10.0.0.1:41502.service - OpenSSH per-connection server daemon (10.0.0.1:41502).
May 15 12:17:42.080242 sshd[6031]: Accepted publickey for core from 10.0.0.1 port 41502 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:42.081991 sshd-session[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:42.087320 systemd-logind[1559]: New session 47 of user core.
May 15 12:17:42.094836 systemd[1]: Started session-47.scope - Session 47 of User core.
May 15 12:17:42.223604 sshd[6033]: Connection closed by 10.0.0.1 port 41502
May 15 12:17:42.223990 sshd-session[6031]: pam_unix(sshd:session): session closed for user core
May 15 12:17:42.228888 systemd[1]: sshd@46-10.0.0.15:22-10.0.0.1:41502.service: Deactivated successfully.
May 15 12:17:42.231571 systemd[1]: session-47.scope: Deactivated successfully.
May 15 12:17:42.232688 systemd-logind[1559]: Session 47 logged out. Waiting for processes to exit.
May 15 12:17:42.234738 systemd-logind[1559]: Removed session 47.
May 15 12:17:47.240933 systemd[1]: Started sshd@47-10.0.0.15:22-10.0.0.1:37006.service - OpenSSH per-connection server daemon (10.0.0.1:37006).
May 15 12:17:47.287164 sshd[6057]: Accepted publickey for core from 10.0.0.1 port 37006 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:47.288721 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:47.293035 systemd-logind[1559]: New session 48 of user core.
May 15 12:17:47.302787 systemd[1]: Started session-48.scope - Session 48 of User core.
May 15 12:17:47.423254 sshd[6059]: Connection closed by 10.0.0.1 port 37006
May 15 12:17:47.423827 sshd-session[6057]: pam_unix(sshd:session): session closed for user core
May 15 12:17:47.428558 systemd[1]: sshd@47-10.0.0.15:22-10.0.0.1:37006.service: Deactivated successfully.
May 15 12:17:47.431396 systemd[1]: session-48.scope: Deactivated successfully.
May 15 12:17:47.432737 systemd-logind[1559]: Session 48 logged out. Waiting for processes to exit.
May 15 12:17:47.435383 systemd-logind[1559]: Removed session 48.
May 15 12:17:50.371543 containerd[1575]: time="2025-05-15T12:17:50.371491202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"8ae155fd16ef66416d4d238b21186bfcef254c9703eb23ee1d96358c7f6ab3ad\" pid:6086 exited_at:{seconds:1747311470 nanos:371298705}"
May 15 12:17:52.448861 systemd[1]: Started sshd@48-10.0.0.15:22-10.0.0.1:37022.service - OpenSSH per-connection server daemon (10.0.0.1:37022).
May 15 12:17:52.501022 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 37022 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:52.502624 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:52.507269 systemd-logind[1559]: New session 49 of user core.
May 15 12:17:52.513240 systemd[1]: Started session-49.scope - Session 49 of User core.
May 15 12:17:52.635896 sshd[6099]: Connection closed by 10.0.0.1 port 37022
May 15 12:17:52.636310 sshd-session[6097]: pam_unix(sshd:session): session closed for user core
May 15 12:17:52.657690 systemd[1]: sshd@48-10.0.0.15:22-10.0.0.1:37022.service: Deactivated successfully.
May 15 12:17:52.659689 systemd[1]: session-49.scope: Deactivated successfully.
May 15 12:17:52.660976 systemd-logind[1559]: Session 49 logged out. Waiting for processes to exit.
May 15 12:17:52.665452 systemd[1]: Started sshd@49-10.0.0.15:22-10.0.0.1:37034.service - OpenSSH per-connection server daemon (10.0.0.1:37034).
May 15 12:17:52.666337 systemd-logind[1559]: Removed session 49.
May 15 12:17:52.719467 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 37034 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:52.721200 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:52.729411 systemd-logind[1559]: New session 50 of user core.
May 15 12:17:52.743992 systemd[1]: Started session-50.scope - Session 50 of User core.
May 15 12:17:53.465076 sshd[6114]: Connection closed by 10.0.0.1 port 37034
May 15 12:17:53.465753 sshd-session[6112]: pam_unix(sshd:session): session closed for user core
May 15 12:17:53.474449 systemd[1]: sshd@49-10.0.0.15:22-10.0.0.1:37034.service: Deactivated successfully.
May 15 12:17:53.476594 systemd[1]: session-50.scope: Deactivated successfully.
May 15 12:17:53.477443 systemd-logind[1559]: Session 50 logged out. Waiting for processes to exit.
May 15 12:17:53.480524 systemd[1]: Started sshd@50-10.0.0.15:22-10.0.0.1:37040.service - OpenSSH per-connection server daemon (10.0.0.1:37040).
May 15 12:17:53.481472 systemd-logind[1559]: Removed session 50.
May 15 12:17:53.536260 sshd[6125]: Accepted publickey for core from 10.0.0.1 port 37040 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:53.538149 sshd-session[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:53.543727 systemd-logind[1559]: New session 51 of user core.
May 15 12:17:53.553801 systemd[1]: Started session-51.scope - Session 51 of User core.
May 15 12:17:54.621196 sshd[6127]: Connection closed by 10.0.0.1 port 37040
May 15 12:17:54.622271 sshd-session[6125]: pam_unix(sshd:session): session closed for user core
May 15 12:17:54.632405 systemd[1]: sshd@50-10.0.0.15:22-10.0.0.1:37040.service: Deactivated successfully.
May 15 12:17:54.635584 systemd[1]: session-51.scope: Deactivated successfully.
May 15 12:17:54.636808 systemd-logind[1559]: Session 51 logged out. Waiting for processes to exit.
May 15 12:17:54.640914 systemd[1]: Started sshd@51-10.0.0.15:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122).
May 15 12:17:54.643038 systemd-logind[1559]: Removed session 51.
May 15 12:17:54.696454 sshd[6145]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:54.698353 sshd-session[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:54.703985 systemd-logind[1559]: New session 52 of user core.
May 15 12:17:54.717929 systemd[1]: Started session-52.scope - Session 52 of User core.
May 15 12:17:54.970332 sshd[6148]: Connection closed by 10.0.0.1 port 45122
May 15 12:17:54.971035 sshd-session[6145]: pam_unix(sshd:session): session closed for user core
May 15 12:17:54.982179 systemd[1]: sshd@51-10.0.0.15:22-10.0.0.1:45122.service: Deactivated successfully.
May 15 12:17:54.985025 systemd[1]: session-52.scope: Deactivated successfully.
May 15 12:17:54.986285 systemd-logind[1559]: Session 52 logged out. Waiting for processes to exit.
May 15 12:17:54.990540 systemd[1]: Started sshd@52-10.0.0.15:22-10.0.0.1:45132.service - OpenSSH per-connection server daemon (10.0.0.1:45132).
May 15 12:17:54.991406 systemd-logind[1559]: Removed session 52.
May 15 12:17:55.043111 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 45132 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:17:55.045319 sshd-session[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:17:55.051366 systemd-logind[1559]: New session 53 of user core.
May 15 12:17:55.062916 systemd[1]: Started session-53.scope - Session 53 of User core.
May 15 12:17:55.225412 sshd[6162]: Connection closed by 10.0.0.1 port 45132
May 15 12:17:55.225689 sshd-session[6160]: pam_unix(sshd:session): session closed for user core
May 15 12:17:55.229302 systemd[1]: sshd@52-10.0.0.15:22-10.0.0.1:45132.service: Deactivated successfully.
May 15 12:17:55.231880 systemd[1]: session-53.scope: Deactivated successfully.
May 15 12:17:55.235061 systemd-logind[1559]: Session 53 logged out. Waiting for processes to exit.
May 15 12:17:55.236218 systemd-logind[1559]: Removed session 53.
May 15 12:18:00.243115 systemd[1]: Started sshd@53-10.0.0.15:22-10.0.0.1:45138.service - OpenSSH per-connection server daemon (10.0.0.1:45138).
May 15 12:18:00.297192 sshd[6175]: Accepted publickey for core from 10.0.0.1 port 45138 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:00.299085 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:00.304043 systemd-logind[1559]: New session 54 of user core.
May 15 12:18:00.311853 systemd[1]: Started session-54.scope - Session 54 of User core.
May 15 12:18:00.426625 sshd[6177]: Connection closed by 10.0.0.1 port 45138
May 15 12:18:00.426959 sshd-session[6175]: pam_unix(sshd:session): session closed for user core
May 15 12:18:00.431432 systemd[1]: sshd@53-10.0.0.15:22-10.0.0.1:45138.service: Deactivated successfully.
May 15 12:18:00.434195 systemd[1]: session-54.scope: Deactivated successfully.
May 15 12:18:00.435181 systemd-logind[1559]: Session 54 logged out. Waiting for processes to exit.
May 15 12:18:00.437216 systemd-logind[1559]: Removed session 54.
May 15 12:18:01.667835 kubelet[2713]: E0515 12:18:01.667795 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:03.335578 containerd[1575]: time="2025-05-15T12:18:03.335531448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"2860da5556bfac04805bf52f5ae9a206b9b7b87dfc2efe6afbd08160a875cfa6\" pid:6202 exited_at:{seconds:1747311483 nanos:335164510}"
May 15 12:18:05.443966 systemd[1]: Started sshd@54-10.0.0.15:22-10.0.0.1:54438.service - OpenSSH per-connection server daemon (10.0.0.1:54438).
May 15 12:18:05.499513 sshd[6215]: Accepted publickey for core from 10.0.0.1 port 54438 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:05.500926 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:05.505093 systemd-logind[1559]: New session 55 of user core.
May 15 12:18:05.520824 systemd[1]: Started session-55.scope - Session 55 of User core.
May 15 12:18:05.638877 sshd[6217]: Connection closed by 10.0.0.1 port 54438
May 15 12:18:05.639246 sshd-session[6215]: pam_unix(sshd:session): session closed for user core
May 15 12:18:05.644139 systemd[1]: sshd@54-10.0.0.15:22-10.0.0.1:54438.service: Deactivated successfully.
May 15 12:18:05.646787 systemd[1]: session-55.scope: Deactivated successfully.
May 15 12:18:05.647594 systemd-logind[1559]: Session 55 logged out. Waiting for processes to exit.
May 15 12:18:05.649362 systemd-logind[1559]: Removed session 55.
May 15 12:18:06.275598 containerd[1575]: time="2025-05-15T12:18:06.275550311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"35c1b91bfc402a702716fde5bf588c52e18db8f204bd90e9e9d17e4a57bc98dc\" pid:6243 exited_at:{seconds:1747311486 nanos:275214233}"
May 15 12:18:10.656938 systemd[1]: Started sshd@55-10.0.0.15:22-10.0.0.1:54452.service - OpenSSH per-connection server daemon (10.0.0.1:54452).
May 15 12:18:10.706463 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 54452 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:10.708194 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:10.712670 systemd-logind[1559]: New session 56 of user core.
May 15 12:18:10.722848 systemd[1]: Started session-56.scope - Session 56 of User core.
May 15 12:18:10.848521 sshd[6256]: Connection closed by 10.0.0.1 port 54452
May 15 12:18:10.848889 sshd-session[6254]: pam_unix(sshd:session): session closed for user core
May 15 12:18:10.853430 systemd[1]: sshd@55-10.0.0.15:22-10.0.0.1:54452.service: Deactivated successfully.
May 15 12:18:10.855634 systemd[1]: session-56.scope: Deactivated successfully.
May 15 12:18:10.856685 systemd-logind[1559]: Session 56 logged out. Waiting for processes to exit.
May 15 12:18:10.858008 systemd-logind[1559]: Removed session 56.
May 15 12:18:11.669015 kubelet[2713]: E0515 12:18:11.668962 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:15.666236 kubelet[2713]: E0515 12:18:15.666174 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:15.860734 systemd[1]: Started sshd@56-10.0.0.15:22-10.0.0.1:37372.service - OpenSSH per-connection server daemon (10.0.0.1:37372).
May 15 12:18:15.907865 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 37372 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:15.909407 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:15.913861 systemd-logind[1559]: New session 57 of user core.
May 15 12:18:15.923772 systemd[1]: Started session-57.scope - Session 57 of User core.
May 15 12:18:16.046691 sshd[6271]: Connection closed by 10.0.0.1 port 37372
May 15 12:18:16.047033 sshd-session[6269]: pam_unix(sshd:session): session closed for user core
May 15 12:18:16.051216 systemd[1]: sshd@56-10.0.0.15:22-10.0.0.1:37372.service: Deactivated successfully.
May 15 12:18:16.053259 systemd[1]: session-57.scope: Deactivated successfully.
May 15 12:18:16.054101 systemd-logind[1559]: Session 57 logged out. Waiting for processes to exit.
May 15 12:18:16.055304 systemd-logind[1559]: Removed session 57.
May 15 12:18:20.371792 containerd[1575]: time="2025-05-15T12:18:20.371730829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"bd3becbd266ec6f6fb50074d155a0069b9f4c5890deb86373b723cd18936d75e\" pid:6298 exited_at:{seconds:1747311500 nanos:371117245}"
May 15 12:18:21.061978 systemd[1]: Started sshd@57-10.0.0.15:22-10.0.0.1:37388.service - OpenSSH per-connection server daemon (10.0.0.1:37388).
May 15 12:18:21.105442 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 37388 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:21.107056 sshd-session[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:21.112244 systemd-logind[1559]: New session 58 of user core.
May 15 12:18:21.126155 systemd[1]: Started session-58.scope - Session 58 of User core.
May 15 12:18:21.240537 sshd[6311]: Connection closed by 10.0.0.1 port 37388
May 15 12:18:21.240908 sshd-session[6309]: pam_unix(sshd:session): session closed for user core
May 15 12:18:21.245839 systemd[1]: sshd@57-10.0.0.15:22-10.0.0.1:37388.service: Deactivated successfully.
May 15 12:18:21.248357 systemd[1]: session-58.scope: Deactivated successfully.
May 15 12:18:21.249253 systemd-logind[1559]: Session 58 logged out. Waiting for processes to exit.
May 15 12:18:21.250525 systemd-logind[1559]: Removed session 58.
May 15 12:18:26.255276 systemd[1]: Started sshd@58-10.0.0.15:22-10.0.0.1:52158.service - OpenSSH per-connection server daemon (10.0.0.1:52158).
May 15 12:18:26.301755 sshd[6325]: Accepted publickey for core from 10.0.0.1 port 52158 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:26.303223 sshd-session[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:26.307595 systemd-logind[1559]: New session 59 of user core.
May 15 12:18:26.316781 systemd[1]: Started session-59.scope - Session 59 of User core.
May 15 12:18:26.441019 sshd[6327]: Connection closed by 10.0.0.1 port 52158
May 15 12:18:26.441322 sshd-session[6325]: pam_unix(sshd:session): session closed for user core
May 15 12:18:26.444954 systemd[1]: sshd@58-10.0.0.15:22-10.0.0.1:52158.service: Deactivated successfully.
May 15 12:18:26.446768 systemd[1]: session-59.scope: Deactivated successfully.
May 15 12:18:26.447606 systemd-logind[1559]: Session 59 logged out. Waiting for processes to exit.
May 15 12:18:26.448971 systemd-logind[1559]: Removed session 59.
May 15 12:18:27.666304 kubelet[2713]: E0515 12:18:27.666253 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:31.457214 systemd[1]: Started sshd@59-10.0.0.15:22-10.0.0.1:52166.service - OpenSSH per-connection server daemon (10.0.0.1:52166).
May 15 12:18:31.506604 sshd[6340]: Accepted publickey for core from 10.0.0.1 port 52166 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:31.508477 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:31.514050 systemd-logind[1559]: New session 60 of user core.
May 15 12:18:31.524800 systemd[1]: Started session-60.scope - Session 60 of User core.
May 15 12:18:31.634860 sshd[6342]: Connection closed by 10.0.0.1 port 52166
May 15 12:18:31.635219 sshd-session[6340]: pam_unix(sshd:session): session closed for user core
May 15 12:18:31.640284 systemd[1]: sshd@59-10.0.0.15:22-10.0.0.1:52166.service: Deactivated successfully.
May 15 12:18:31.642715 systemd[1]: session-60.scope: Deactivated successfully.
May 15 12:18:31.643506 systemd-logind[1559]: Session 60 logged out. Waiting for processes to exit.
May 15 12:18:31.644968 systemd-logind[1559]: Removed session 60.
May 15 12:18:33.322623 containerd[1575]: time="2025-05-15T12:18:33.322566532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"e55c3a4ee9b28a0305846619ffd9ea9fe87adc4a6454f50d725b19691d9e82fb\" pid:6366 exited_at:{seconds:1747311513 nanos:322280480}"
May 15 12:18:36.207198 containerd[1575]: time="2025-05-15T12:18:36.187188255Z" level=warning msg="container event discarded" container=657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296 type=CONTAINER_CREATED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238435594Z" level=warning msg="container event discarded" container=657bb36d3041b5d0c7b48047e25295c0a0e043fdf5fd2572db331ad236d3b296 type=CONTAINER_STARTED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238492862Z" level=warning msg="container event discarded" container=5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f type=CONTAINER_CREATED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238509854Z" level=warning msg="container event discarded" container=5c324e277f46cfa1d6db497fb24c1e23bf7a07b48cd76e287f9ef51f9b210d8f type=CONTAINER_STARTED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238519122Z" level=warning msg="container event discarded" container=c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25 type=CONTAINER_CREATED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238527047Z" level=warning msg="container event discarded" container=c2ef4d99078b3c5b10b45f5cdfb3cb2f9351479caff2aee880c332acd6b62a25 type=CONTAINER_STARTED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238534762Z" level=warning msg="container event discarded" container=2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436 type=CONTAINER_CREATED_EVENT
May 15 12:18:36.238524 containerd[1575]: time="2025-05-15T12:18:36.238542506Z" level=warning msg="container event discarded" container=5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e type=CONTAINER_CREATED_EVENT
May 15 12:18:36.238871 containerd[1575]: time="2025-05-15T12:18:36.238549870Z" level=warning msg="container event discarded" container=ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4 type=CONTAINER_CREATED_EVENT
May 15 12:18:36.351952 containerd[1575]: time="2025-05-15T12:18:36.351857410Z" level=warning msg="container event discarded" container=5d944f4a87136e040b839f2c8c8915b9f01a96a77b0c68ca6b508ac04eb0ba9e type=CONTAINER_STARTED_EVENT
May 15 12:18:36.363242 containerd[1575]: time="2025-05-15T12:18:36.363153080Z" level=warning msg="container event discarded" container=2cbb93b84af290233dc7b8f4ce8acd0f26f9e32c08fc0d07f422bb8f652fe436 type=CONTAINER_STARTED_EVENT
May 15 12:18:36.377466 containerd[1575]: time="2025-05-15T12:18:36.377401529Z" level=warning msg="container event discarded" container=ea7043dedf670d23a5f1a4bc0404cafebbce206635590d9d91e0fcb1c12206f4 type=CONTAINER_STARTED_EVENT
May 15 12:18:36.653418 systemd[1]: Started sshd@60-10.0.0.15:22-10.0.0.1:38536.service - OpenSSH per-connection server daemon (10.0.0.1:38536).
May 15 12:18:36.704225 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 38536 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:36.705879 sshd-session[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:36.711822 systemd-logind[1559]: New session 61 of user core.
May 15 12:18:36.717782 systemd[1]: Started session-61.scope - Session 61 of User core.
May 15 12:18:36.825706 sshd[6384]: Connection closed by 10.0.0.1 port 38536
May 15 12:18:36.826002 sshd-session[6379]: pam_unix(sshd:session): session closed for user core
May 15 12:18:36.830605 systemd[1]: sshd@60-10.0.0.15:22-10.0.0.1:38536.service: Deactivated successfully.
May 15 12:18:36.833334 systemd[1]: session-61.scope: Deactivated successfully.
May 15 12:18:36.834074 systemd-logind[1559]: Session 61 logged out. Waiting for processes to exit.
May 15 12:18:36.835636 systemd-logind[1559]: Removed session 61.
May 15 12:18:41.851105 systemd[1]: Started sshd@61-10.0.0.15:22-10.0.0.1:38548.service - OpenSSH per-connection server daemon (10.0.0.1:38548).
May 15 12:18:41.902025 sshd[6401]: Accepted publickey for core from 10.0.0.1 port 38548 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:41.903770 sshd-session[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:41.909459 systemd-logind[1559]: New session 62 of user core.
May 15 12:18:41.923827 systemd[1]: Started session-62.scope - Session 62 of User core.
May 15 12:18:42.031909 sshd[6403]: Connection closed by 10.0.0.1 port 38548
May 15 12:18:42.032222 sshd-session[6401]: pam_unix(sshd:session): session closed for user core
May 15 12:18:42.036936 systemd[1]: sshd@61-10.0.0.15:22-10.0.0.1:38548.service: Deactivated successfully.
May 15 12:18:42.039261 systemd[1]: session-62.scope: Deactivated successfully.
May 15 12:18:42.040137 systemd-logind[1559]: Session 62 logged out. Waiting for processes to exit.
May 15 12:18:42.041377 systemd-logind[1559]: Removed session 62.
May 15 12:18:42.666349 kubelet[2713]: E0515 12:18:42.666310 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:43.666883 kubelet[2713]: E0515 12:18:43.666833 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:45.666441 kubelet[2713]: E0515 12:18:45.666355 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:47.051146 systemd[1]: Started sshd@62-10.0.0.15:22-10.0.0.1:34428.service - OpenSSH per-connection server daemon (10.0.0.1:34428).
May 15 12:18:47.104177 sshd[6416]: Accepted publickey for core from 10.0.0.1 port 34428 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:47.105974 sshd-session[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:47.110760 systemd-logind[1559]: New session 63 of user core.
May 15 12:18:47.120834 systemd[1]: Started session-63.scope - Session 63 of User core.
May 15 12:18:47.231897 sshd[6418]: Connection closed by 10.0.0.1 port 34428
May 15 12:18:47.232274 sshd-session[6416]: pam_unix(sshd:session): session closed for user core
May 15 12:18:47.237969 systemd[1]: sshd@62-10.0.0.15:22-10.0.0.1:34428.service: Deactivated successfully.
May 15 12:18:47.240216 systemd[1]: session-63.scope: Deactivated successfully.
May 15 12:18:47.241160 systemd-logind[1559]: Session 63 logged out. Waiting for processes to exit.
May 15 12:18:47.242379 systemd-logind[1559]: Removed session 63.
May 15 12:18:48.154943 containerd[1575]: time="2025-05-15T12:18:48.154811103Z" level=warning msg="container event discarded" container=dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6 type=CONTAINER_CREATED_EVENT
May 15 12:18:48.154943 containerd[1575]: time="2025-05-15T12:18:48.154923506Z" level=warning msg="container event discarded" container=dc773cede889652d1f91276e21c28b9c9b16c575b5ea18063dbd3f19c1c073e6 type=CONTAINER_STARTED_EVENT
May 15 12:18:48.172161 containerd[1575]: time="2025-05-15T12:18:48.172114640Z" level=warning msg="container event discarded" container=0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d type=CONTAINER_CREATED_EVENT
May 15 12:18:48.172161 containerd[1575]: time="2025-05-15T12:18:48.172140249Z" level=warning msg="container event discarded" container=0eabbf51b0cc42dc244d98790afd12df2b6e2e64672c56efea6253a4a870fc2d type=CONTAINER_STARTED_EVENT
May 15 12:18:48.211333 containerd[1575]: time="2025-05-15T12:18:48.203351079Z" level=warning msg="container event discarded" container=ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2 type=CONTAINER_CREATED_EVENT
May 15 12:18:48.279637 containerd[1575]: time="2025-05-15T12:18:48.279546247Z" level=warning msg="container event discarded" container=ccda463bf299810197c26e95923ceb3b76a98ffffe67fbce53a33a14e2315ec2 type=CONTAINER_STARTED_EVENT
May 15 12:18:50.369345 containerd[1575]: time="2025-05-15T12:18:50.369306703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"64d82205a0cc6dbaab63df2364449f25e0347e7f28e582bd4c925e8ed96c4637\" pid:6446 exited_at:{seconds:1747311530 nanos:369115140}"
May 15 12:18:50.746866 containerd[1575]: time="2025-05-15T12:18:50.746618272Z" level=warning msg="container event discarded" container=033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37 type=CONTAINER_CREATED_EVENT
May 15 12:18:50.856405 containerd[1575]: time="2025-05-15T12:18:50.856320879Z" level=warning msg="container event discarded" container=033c3fc4606fa25dd3b53311f6adc0539c0e2e393f427197e1a1834e2ab04d37 type=CONTAINER_STARTED_EVENT
May 15 12:18:52.247847 systemd[1]: Started sshd@63-10.0.0.15:22-10.0.0.1:34444.service - OpenSSH per-connection server daemon (10.0.0.1:34444).
May 15 12:18:52.292469 sshd[6457]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:52.293941 sshd-session[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:52.298575 systemd-logind[1559]: New session 64 of user core.
May 15 12:18:52.314807 systemd[1]: Started session-64.scope - Session 64 of User core.
May 15 12:18:52.424515 sshd[6459]: Connection closed by 10.0.0.1 port 34444
May 15 12:18:52.424838 sshd-session[6457]: pam_unix(sshd:session): session closed for user core
May 15 12:18:52.429795 systemd[1]: sshd@63-10.0.0.15:22-10.0.0.1:34444.service: Deactivated successfully.
May 15 12:18:52.432574 systemd[1]: session-64.scope: Deactivated successfully.
May 15 12:18:52.433548 systemd-logind[1559]: Session 64 logged out. Waiting for processes to exit.
May 15 12:18:52.435060 systemd-logind[1559]: Removed session 64.
May 15 12:18:54.288100 containerd[1575]: time="2025-05-15T12:18:54.288020857Z" level=warning msg="container event discarded" container=62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577 type=CONTAINER_CREATED_EVENT
May 15 12:18:54.288100 containerd[1575]: time="2025-05-15T12:18:54.288079538Z" level=warning msg="container event discarded" container=62dd56a362da20e60f95d774d26deb0adafacb0657d3789c3569ab6663c8c577 type=CONTAINER_STARTED_EVENT
May 15 12:18:54.367508 containerd[1575]: time="2025-05-15T12:18:54.367417011Z" level=warning msg="container event discarded" container=00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e type=CONTAINER_CREATED_EVENT
May 15 12:18:54.367508 containerd[1575]: time="2025-05-15T12:18:54.367478387Z" level=warning msg="container event discarded" container=00cc8376a733fdada980921b577deeb34a6346bb8df27cd8a0f8a0b9abca3e6e type=CONTAINER_STARTED_EVENT
May 15 12:18:54.666215 kubelet[2713]: E0515 12:18:54.666145 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:18:57.441337 systemd[1]: Started sshd@64-10.0.0.15:22-10.0.0.1:41174.service - OpenSSH per-connection server daemon (10.0.0.1:41174).
May 15 12:18:57.488456 sshd[6473]: Accepted publickey for core from 10.0.0.1 port 41174 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:18:57.490162 sshd-session[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:18:57.495415 systemd-logind[1559]: New session 65 of user core.
May 15 12:18:57.502807 systemd[1]: Started session-65.scope - Session 65 of User core.
May 15 12:18:57.617971 sshd[6475]: Connection closed by 10.0.0.1 port 41174
May 15 12:18:57.618356 sshd-session[6473]: pam_unix(sshd:session): session closed for user core
May 15 12:18:57.623295 systemd[1]: sshd@64-10.0.0.15:22-10.0.0.1:41174.service: Deactivated successfully.
May 15 12:18:57.626055 systemd[1]: session-65.scope: Deactivated successfully.
May 15 12:18:57.627234 systemd-logind[1559]: Session 65 logged out. Waiting for processes to exit.
May 15 12:18:57.628592 systemd-logind[1559]: Removed session 65.
May 15 12:19:00.513033 containerd[1575]: time="2025-05-15T12:19:00.512911126Z" level=warning msg="container event discarded" container=86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938 type=CONTAINER_CREATED_EVENT
May 15 12:19:00.800605 containerd[1575]: time="2025-05-15T12:19:00.800399107Z" level=warning msg="container event discarded" container=86154a55ff52f029e4b045008225106c724af1097be0576aad21f5b86f070938 type=CONTAINER_STARTED_EVENT
May 15 12:19:02.634764 systemd[1]: Started sshd@65-10.0.0.15:22-10.0.0.1:41190.service - OpenSSH per-connection server daemon (10.0.0.1:41190).
May 15 12:19:02.695093 sshd[6488]: Accepted publickey for core from 10.0.0.1 port 41190 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:02.697547 sshd-session[6488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:02.703932 systemd-logind[1559]: New session 66 of user core.
May 15 12:19:02.710839 systemd[1]: Started session-66.scope - Session 66 of User core.
May 15 12:19:02.850551 sshd[6490]: Connection closed by 10.0.0.1 port 41190
May 15 12:19:02.851078 sshd-session[6488]: pam_unix(sshd:session): session closed for user core
May 15 12:19:02.860464 systemd[1]: sshd@65-10.0.0.15:22-10.0.0.1:41190.service: Deactivated successfully.
May 15 12:19:02.862770 systemd[1]: session-66.scope: Deactivated successfully.
May 15 12:19:02.863839 systemd-logind[1559]: Session 66 logged out. Waiting for processes to exit.
May 15 12:19:02.865910 systemd-logind[1559]: Removed session 66.
May 15 12:19:03.346444 containerd[1575]: time="2025-05-15T12:19:03.346379777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"88fb070990559e75934263989d980691e2e371eb1820dbc4dc528ee0404857ee\" pid:6514 exited_at:{seconds:1747311543 nanos:345956425}"
May 15 12:19:06.268456 containerd[1575]: time="2025-05-15T12:19:06.268353550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"19b42269455f4821ddfd1a87a4cf135a6983353a835242e246e99d55f056d4ed\" pid:6538 exited_at:{seconds:1747311546 nanos:268126741}"
May 15 12:19:07.865126 systemd[1]: Started sshd@66-10.0.0.15:22-10.0.0.1:51972.service - OpenSSH per-connection server daemon (10.0.0.1:51972).
May 15 12:19:07.908142 sshd[6548]: Accepted publickey for core from 10.0.0.1 port 51972 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:07.909524 sshd-session[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:07.914685 systemd-logind[1559]: New session 67 of user core.
May 15 12:19:07.924844 systemd[1]: Started session-67.scope - Session 67 of User core.
May 15 12:19:07.931503 containerd[1575]: time="2025-05-15T12:19:07.931411833Z" level=warning msg="container event discarded" container=ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb type=CONTAINER_CREATED_EVENT
May 15 12:19:08.033671 sshd[6550]: Connection closed by 10.0.0.1 port 51972
May 15 12:19:08.034004 sshd-session[6548]: pam_unix(sshd:session): session closed for user core
May 15 12:19:08.038134 systemd[1]: sshd@66-10.0.0.15:22-10.0.0.1:51972.service: Deactivated successfully.
May 15 12:19:08.040311 systemd[1]: session-67.scope: Deactivated successfully.
May 15 12:19:08.041147 systemd-logind[1559]: Session 67 logged out. Waiting for processes to exit.
May 15 12:19:08.042862 systemd-logind[1559]: Removed session 67.
May 15 12:19:08.074876 containerd[1575]: time="2025-05-15T12:19:08.074784105Z" level=warning msg="container event discarded" container=ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb type=CONTAINER_STARTED_EVENT
May 15 12:19:10.106391 containerd[1575]: time="2025-05-15T12:19:10.106319000Z" level=warning msg="container event discarded" container=ef0452d446fc994febc4c3dc0f79b55907cd36f9d533fbfad8bded141b02b7eb type=CONTAINER_STOPPED_EVENT
May 15 12:19:13.046976 systemd[1]: Started sshd@67-10.0.0.15:22-10.0.0.1:51984.service - OpenSSH per-connection server daemon (10.0.0.1:51984).
May 15 12:19:13.098910 sshd[6569]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:13.100605 sshd-session[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:13.105973 systemd-logind[1559]: New session 68 of user core.
May 15 12:19:13.116886 systemd[1]: Started session-68.scope - Session 68 of User core.
May 15 12:19:13.238157 sshd[6571]: Connection closed by 10.0.0.1 port 51984
May 15 12:19:13.238511 sshd-session[6569]: pam_unix(sshd:session): session closed for user core
May 15 12:19:13.242678 systemd[1]: sshd@67-10.0.0.15:22-10.0.0.1:51984.service: Deactivated successfully.
May 15 12:19:13.244693 systemd[1]: session-68.scope: Deactivated successfully.
May 15 12:19:13.245554 systemd-logind[1559]: Session 68 logged out. Waiting for processes to exit.
May 15 12:19:13.247370 systemd-logind[1559]: Removed session 68.
May 15 12:19:15.927380 containerd[1575]: time="2025-05-15T12:19:15.927306515Z" level=warning msg="container event discarded" container=7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5 type=CONTAINER_CREATED_EVENT
May 15 12:19:16.010785 containerd[1575]: time="2025-05-15T12:19:16.010636637Z" level=warning msg="container event discarded" container=7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5 type=CONTAINER_STARTED_EVENT
May 15 12:19:18.264888 systemd[1]: Started sshd@68-10.0.0.15:22-10.0.0.1:36520.service - OpenSSH per-connection server daemon (10.0.0.1:36520).
May 15 12:19:18.319544 sshd[6584]: Accepted publickey for core from 10.0.0.1 port 36520 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:18.321529 sshd-session[6584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:18.326079 systemd-logind[1559]: New session 69 of user core.
May 15 12:19:18.334791 systemd[1]: Started session-69.scope - Session 69 of User core.
May 15 12:19:18.463532 sshd[6586]: Connection closed by 10.0.0.1 port 36520
May 15 12:19:18.463897 sshd-session[6584]: pam_unix(sshd:session): session closed for user core
May 15 12:19:18.468255 systemd[1]: sshd@68-10.0.0.15:22-10.0.0.1:36520.service: Deactivated successfully.
May 15 12:19:18.470715 systemd[1]: session-69.scope: Deactivated successfully.
May 15 12:19:18.471752 systemd-logind[1559]: Session 69 logged out. Waiting for processes to exit.
May 15 12:19:18.473406 systemd-logind[1559]: Removed session 69.
May 15 12:19:18.666414 kubelet[2713]: E0515 12:19:18.666377 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:19:18.865934 containerd[1575]: time="2025-05-15T12:19:18.865820577Z" level=warning msg="container event discarded" container=7bb5dbe7aa8d3bf8950db793282cf9eae62cb0bd1cca823fdfbc31d9a1840aa5 type=CONTAINER_STOPPED_EVENT
May 15 12:19:20.377497 containerd[1575]: time="2025-05-15T12:19:20.377377031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"820c2f0e6b28979bb8f22aa0f9864b11c60bd4af51acf243aa61bf54729df96f\" pid:6613 exited_at:{seconds:1747311560 nanos:377027780}"
May 15 12:19:23.477974 systemd[1]: Started sshd@69-10.0.0.15:22-10.0.0.1:36524.service - OpenSSH per-connection server daemon (10.0.0.1:36524).
May 15 12:19:23.526887 sshd[6635]: Accepted publickey for core from 10.0.0.1 port 36524 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:23.528577 sshd-session[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:23.533712 systemd-logind[1559]: New session 70 of user core.
May 15 12:19:23.543885 systemd[1]: Started session-70.scope - Session 70 of User core.
May 15 12:19:23.661375 sshd[6637]: Connection closed by 10.0.0.1 port 36524
May 15 12:19:23.661737 sshd-session[6635]: pam_unix(sshd:session): session closed for user core
May 15 12:19:23.667427 systemd[1]: sshd@69-10.0.0.15:22-10.0.0.1:36524.service: Deactivated successfully.
May 15 12:19:23.669685 systemd[1]: session-70.scope: Deactivated successfully.
May 15 12:19:23.670528 systemd-logind[1559]: Session 70 logged out. Waiting for processes to exit.
May 15 12:19:23.671894 systemd-logind[1559]: Removed session 70.
May 15 12:19:28.678555 systemd[1]: Started sshd@70-10.0.0.15:22-10.0.0.1:45686.service - OpenSSH per-connection server daemon (10.0.0.1:45686).
May 15 12:19:28.719225 sshd[6651]: Accepted publickey for core from 10.0.0.1 port 45686 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:28.721186 sshd-session[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:28.726010 systemd-logind[1559]: New session 71 of user core.
May 15 12:19:28.735805 systemd[1]: Started session-71.scope - Session 71 of User core.
May 15 12:19:28.846060 sshd[6653]: Connection closed by 10.0.0.1 port 45686
May 15 12:19:28.846377 sshd-session[6651]: pam_unix(sshd:session): session closed for user core
May 15 12:19:28.850931 systemd[1]: sshd@70-10.0.0.15:22-10.0.0.1:45686.service: Deactivated successfully.
May 15 12:19:28.852916 systemd[1]: session-71.scope: Deactivated successfully.
May 15 12:19:28.853834 systemd-logind[1559]: Session 71 logged out. Waiting for processes to exit.
May 15 12:19:28.855160 systemd-logind[1559]: Removed session 71.
May 15 12:19:32.104367 containerd[1575]: time="2025-05-15T12:19:32.104304613Z" level=warning msg="container event discarded" container=612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa type=CONTAINER_CREATED_EVENT
May 15 12:19:32.223724 containerd[1575]: time="2025-05-15T12:19:32.223624913Z" level=warning msg="container event discarded" container=612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa type=CONTAINER_STARTED_EVENT
May 15 12:19:33.332609 containerd[1575]: time="2025-05-15T12:19:33.332564327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"49a70fb6d2b48345b06fc1a60ff1392ba33b792ccc11716b00016afef8849f47\" pid:6676 exited_at:{seconds:1747311573 nanos:332251235}"
May 15 12:19:33.860282 systemd[1]: Started sshd@71-10.0.0.15:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366).
May 15 12:19:33.907318 sshd[6689]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:33.908800 sshd-session[6689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:33.913650 systemd-logind[1559]: New session 72 of user core.
May 15 12:19:33.924417 systemd[1]: Started session-72.scope - Session 72 of User core.
May 15 12:19:34.029600 sshd[6691]: Connection closed by 10.0.0.1 port 54366
May 15 12:19:34.029907 sshd-session[6689]: pam_unix(sshd:session): session closed for user core
May 15 12:19:34.033225 systemd[1]: sshd@71-10.0.0.15:22-10.0.0.1:54366.service: Deactivated successfully.
May 15 12:19:34.035389 systemd[1]: session-72.scope: Deactivated successfully.
May 15 12:19:34.036877 systemd-logind[1559]: Session 72 logged out. Waiting for processes to exit.
May 15 12:19:34.038541 systemd-logind[1559]: Removed session 72.
May 15 12:19:34.755902 containerd[1575]: time="2025-05-15T12:19:34.755790147Z" level=warning msg="container event discarded" container=4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56 type=CONTAINER_CREATED_EVENT
May 15 12:19:34.755902 containerd[1575]: time="2025-05-15T12:19:34.755877012Z" level=warning msg="container event discarded" container=4c84e53579d67241aa411718482bda0b09251ea34d693a0334150ede4744fb56 type=CONTAINER_STARTED_EVENT
May 15 12:19:34.963624 containerd[1575]: time="2025-05-15T12:19:34.963518947Z" level=warning msg="container event discarded" container=b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8 type=CONTAINER_CREATED_EVENT
May 15 12:19:34.963624 containerd[1575]: time="2025-05-15T12:19:34.963589661Z" level=warning msg="container event discarded" container=b78d9d44d26078c66a103351a8824d78493b07859d40f72866d1596a2b9ebfb8 type=CONTAINER_STARTED_EVENT
May 15 12:19:35.059055 containerd[1575]: time="2025-05-15T12:19:35.058879356Z" level=warning msg="container event discarded" container=713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb type=CONTAINER_CREATED_EVENT
May 15 12:19:35.059055 containerd[1575]: time="2025-05-15T12:19:35.058945561Z" level=warning msg="container event discarded" container=713050cb9cb9e921bc44ee66daed46c23017497c72bbeba8516df1edae2116bb type=CONTAINER_STARTED_EVENT
May 15 12:19:37.666506 kubelet[2713]: E0515 12:19:37.666440 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:19:39.043171 systemd[1]: Started sshd@72-10.0.0.15:22-10.0.0.1:54378.service - OpenSSH per-connection server daemon (10.0.0.1:54378).
May 15 12:19:39.090163 sshd[6704]: Accepted publickey for core from 10.0.0.1 port 54378 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:39.091770 sshd-session[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:39.096491 systemd-logind[1559]: New session 73 of user core.
May 15 12:19:39.107788 systemd[1]: Started session-73.scope - Session 73 of User core.
May 15 12:19:39.224599 sshd[6706]: Connection closed by 10.0.0.1 port 54378
May 15 12:19:39.225007 sshd-session[6704]: pam_unix(sshd:session): session closed for user core
May 15 12:19:39.229778 systemd[1]: sshd@72-10.0.0.15:22-10.0.0.1:54378.service: Deactivated successfully.
May 15 12:19:39.231858 systemd[1]: session-73.scope: Deactivated successfully.
May 15 12:19:39.232674 systemd-logind[1559]: Session 73 logged out. Waiting for processes to exit.
May 15 12:19:39.233986 systemd-logind[1559]: Removed session 73.
May 15 12:19:39.669487 kubelet[2713]: E0515 12:19:39.669437 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:19:41.427403 containerd[1575]: time="2025-05-15T12:19:41.427300093Z" level=warning msg="container event discarded" container=dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2 type=CONTAINER_CREATED_EVENT
May 15 12:19:41.508808 containerd[1575]: time="2025-05-15T12:19:41.508719668Z" level=warning msg="container event discarded" container=dd6bcd149e6dfa58677b1e467331343d8adfa5337742df7a4f58e708e2c3e7b2 type=CONTAINER_STARTED_EVENT
May 15 12:19:42.104576 containerd[1575]: time="2025-05-15T12:19:42.104493383Z" level=warning msg="container event discarded" container=de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc type=CONTAINER_CREATED_EVENT
May 15 12:19:42.192868 containerd[1575]: time="2025-05-15T12:19:42.192776004Z" level=warning msg="container event discarded" container=de073db04cf8a5e2449bfb9ad52948dce982382ab83edb90439606217c7bfebc type=CONTAINER_STARTED_EVENT
May 15 12:19:43.090300 containerd[1575]: time="2025-05-15T12:19:43.090233308Z" level=warning msg="container event discarded" container=15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19 type=CONTAINER_CREATED_EVENT
May 15 12:19:43.090300 containerd[1575]: time="2025-05-15T12:19:43.090289364Z" level=warning msg="container event discarded" container=15cdf75435f4c16901efb744ad74d0fa1c8999e88d3b0e4f0c0a22e745a65b19 type=CONTAINER_STARTED_EVENT
May 15 12:19:44.241339 systemd[1]: Started sshd@73-10.0.0.15:22-10.0.0.1:46354.service - OpenSSH per-connection server daemon (10.0.0.1:46354).
May 15 12:19:44.290761 sshd[6721]: Accepted publickey for core from 10.0.0.1 port 46354 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:44.292138 sshd-session[6721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:44.296467 systemd-logind[1559]: New session 74 of user core.
May 15 12:19:44.305765 systemd[1]: Started session-74.scope - Session 74 of User core.
May 15 12:19:44.418158 sshd[6723]: Connection closed by 10.0.0.1 port 46354
May 15 12:19:44.418511 sshd-session[6721]: pam_unix(sshd:session): session closed for user core
May 15 12:19:44.423759 systemd[1]: sshd@73-10.0.0.15:22-10.0.0.1:46354.service: Deactivated successfully.
May 15 12:19:44.426353 systemd[1]: session-74.scope: Deactivated successfully.
May 15 12:19:44.427573 systemd-logind[1559]: Session 74 logged out. Waiting for processes to exit.
May 15 12:19:44.429880 systemd-logind[1559]: Removed session 74.
May 15 12:19:45.070549 containerd[1575]: time="2025-05-15T12:19:45.070475023Z" level=warning msg="container event discarded" container=fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584 type=CONTAINER_CREATED_EVENT
May 15 12:19:45.070549 containerd[1575]: time="2025-05-15T12:19:45.070525469Z" level=warning msg="container event discarded" container=fd1c048ac4822e1c87a29dd6f8849894f4bb1e9737f78880cfb91fe8aa927584 type=CONTAINER_STARTED_EVENT
May 15 12:19:45.100867 containerd[1575]: time="2025-05-15T12:19:45.100773203Z" level=warning msg="container event discarded" container=8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f type=CONTAINER_CREATED_EVENT
May 15 12:19:45.161163 containerd[1575]: time="2025-05-15T12:19:45.161068100Z" level=warning msg="container event discarded" container=8dca8f29934804127df714115d8ed9bed9c73c51aa21d50beb9ea6ddc34f676f type=CONTAINER_STARTED_EVENT
May 15 12:19:47.605838 containerd[1575]: time="2025-05-15T12:19:47.605713281Z" level=warning msg="container event discarded" container=169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b type=CONTAINER_CREATED_EVENT
May 15 12:19:47.605838 containerd[1575]: time="2025-05-15T12:19:47.605821094Z" level=warning msg="container event discarded" container=169b072451604d8650b55e4eb404d95c5c3a2a0beec41184fdd2f33548e6b04b type=CONTAINER_STARTED_EVENT
May 15 12:19:47.651211 containerd[1575]: time="2025-05-15T12:19:47.651108422Z" level=warning msg="container event discarded" container=c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f type=CONTAINER_CREATED_EVENT
May 15 12:19:48.118380 update_engine[1562]: I20250515 12:19:48.118268 1562 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 12:19:48.118380 update_engine[1562]: I20250515 12:19:48.118366 1562 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 12:19:48.119997 update_engine[1562]: I20250515 12:19:48.119956 1562 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 12:19:48.120675 update_engine[1562]: I20250515 12:19:48.120594 1562 omaha_request_params.cc:62] Current group set to developer
May 15 12:19:48.120857 update_engine[1562]: I20250515 12:19:48.120820 1562 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 12:19:48.120857 update_engine[1562]: I20250515 12:19:48.120839 1562 update_attempter.cc:643] Scheduling an action processor start.
May 15 12:19:48.120972 update_engine[1562]: I20250515 12:19:48.120863 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 12:19:48.120972 update_engine[1562]: I20250515 12:19:48.120921 1562 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 12:19:48.121026 update_engine[1562]: I20250515 12:19:48.120998 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 12:19:48.121026 update_engine[1562]: I20250515 12:19:48.121011 1562 omaha_request_action.cc:272] Request:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]:
May 15 12:19:48.121026 update_engine[1562]: I20250515 12:19:48.121019 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:19:48.121599 containerd[1575]: time="2025-05-15T12:19:48.121519108Z" level=warning msg="container event discarded" container=c0022649e6eb124ecb97693f90d6d0ab2205407c64e3f841359d6c6e32bff45f type=CONTAINER_STARTED_EVENT
May 15 12:19:48.130868 update_engine[1562]: I20250515 12:19:48.130795 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:19:48.131412 update_engine[1562]: I20250515 12:19:48.131305 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:19:48.132144 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 12:19:48.139691 update_engine[1562]: E20250515 12:19:48.139593 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:19:48.139841 update_engine[1562]: I20250515 12:19:48.139729 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 12:19:48.651822 containerd[1575]: time="2025-05-15T12:19:48.651718593Z" level=warning msg="container event discarded" container=39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5 type=CONTAINER_CREATED_EVENT
May 15 12:19:48.666427 kubelet[2713]: E0515 12:19:48.666382 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:19:48.746230 containerd[1575]: time="2025-05-15T12:19:48.746122795Z" level=warning msg="container event discarded" container=39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5 type=CONTAINER_STARTED_EVENT
May 15 12:19:49.442159 systemd[1]: Started sshd@74-10.0.0.15:22-10.0.0.1:46360.service - OpenSSH per-connection server daemon (10.0.0.1:46360).
May 15 12:19:49.490932 sshd[6738]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:49.492606 sshd-session[6738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:49.496998 systemd-logind[1559]: New session 75 of user core.
May 15 12:19:49.507784 systemd[1]: Started session-75.scope - Session 75 of User core.
May 15 12:19:49.632745 sshd[6740]: Connection closed by 10.0.0.1 port 46360
May 15 12:19:49.633129 sshd-session[6738]: pam_unix(sshd:session): session closed for user core
May 15 12:19:49.638163 systemd[1]: sshd@74-10.0.0.15:22-10.0.0.1:46360.service: Deactivated successfully.
May 15 12:19:49.640454 systemd[1]: session-75.scope: Deactivated successfully.
May 15 12:19:49.641275 systemd-logind[1559]: Session 75 logged out. Waiting for processes to exit.
May 15 12:19:49.642691 systemd-logind[1559]: Removed session 75.
May 15 12:19:50.368403 containerd[1575]: time="2025-05-15T12:19:50.368342395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"ac7daccf51cb702aa7f3c6d5b685d11973819c03e8d610d67628b2975d36902f\" pid:6764 exited_at:{seconds:1747311590 nanos:368092302}"
May 15 12:19:51.666390 kubelet[2713]: E0515 12:19:51.666321 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:19:51.938742 containerd[1575]: time="2025-05-15T12:19:51.938564160Z" level=warning msg="container event discarded" container=6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8 type=CONTAINER_CREATED_EVENT
May 15 12:19:52.211579 containerd[1575]: time="2025-05-15T12:19:52.211433335Z" level=warning msg="container event discarded" container=6a036681449e4f876814156c4fcb0f99f6668abd3b0796a8a32a320ac5857da8 type=CONTAINER_STARTED_EVENT
May 15 12:19:54.646047 systemd[1]: Started sshd@75-10.0.0.15:22-10.0.0.1:60074.service - OpenSSH per-connection server daemon (10.0.0.1:60074).
May 15 12:19:54.807837 sshd[6781]: Accepted publickey for core from 10.0.0.1 port 60074 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:19:54.809372 sshd-session[6781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:19:54.814065 systemd-logind[1559]: New session 76 of user core.
May 15 12:19:54.821849 systemd[1]: Started session-76.scope - Session 76 of User core.
May 15 12:19:55.003285 sshd[6783]: Connection closed by 10.0.0.1 port 60074
May 15 12:19:55.003668 sshd-session[6781]: pam_unix(sshd:session): session closed for user core
May 15 12:19:55.009082 systemd[1]: sshd@75-10.0.0.15:22-10.0.0.1:60074.service: Deactivated successfully.
May 15 12:19:55.011254 systemd[1]: session-76.scope: Deactivated successfully.
May 15 12:19:55.012161 systemd-logind[1559]: Session 76 logged out. Waiting for processes to exit.
May 15 12:19:55.013747 systemd-logind[1559]: Removed session 76.
May 15 12:19:55.916699 containerd[1575]: time="2025-05-15T12:19:55.916602873Z" level=warning msg="container event discarded" container=37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c type=CONTAINER_CREATED_EVENT
May 15 12:19:55.988739 containerd[1575]: time="2025-05-15T12:19:55.988682225Z" level=warning msg="container event discarded" container=37b3076e2fe683eee947c35ceaf3db04a7f16bcd4a23ed975b248b04853e5b5c type=CONTAINER_STARTED_EVENT
May 15 12:19:58.091841 update_engine[1562]: I20250515 12:19:58.091774 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:19:58.092236 update_engine[1562]: I20250515 12:19:58.092010 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:19:58.092277 update_engine[1562]: I20250515 12:19:58.092252 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:19:58.098204 update_engine[1562]: E20250515 12:19:58.098174 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:19:58.098257 update_engine[1562]: I20250515 12:19:58.098217 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 15 12:20:00.021268 systemd[1]: Started sshd@76-10.0.0.15:22-10.0.0.1:60080.service - OpenSSH per-connection server daemon (10.0.0.1:60080).
May 15 12:20:00.089006 sshd[6796]: Accepted publickey for core from 10.0.0.1 port 60080 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:00.090703 sshd-session[6796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:00.095597 systemd-logind[1559]: New session 77 of user core.
May 15 12:20:00.103789 systemd[1]: Started session-77.scope - Session 77 of User core.
May 15 12:20:00.225156 sshd[6798]: Connection closed by 10.0.0.1 port 60080
May 15 12:20:00.225477 sshd-session[6796]: pam_unix(sshd:session): session closed for user core
May 15 12:20:00.229141 systemd[1]: sshd@76-10.0.0.15:22-10.0.0.1:60080.service: Deactivated successfully.
May 15 12:20:00.231199 systemd[1]: session-77.scope: Deactivated successfully.
May 15 12:20:00.232091 systemd-logind[1559]: Session 77 logged out. Waiting for processes to exit.
May 15 12:20:00.233375 systemd-logind[1559]: Removed session 77.
May 15 12:20:03.331072 containerd[1575]: time="2025-05-15T12:20:03.330980981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"338ddb1cab25cfd25b2c603c4bcd4ec9193e2e50b23781e41b4f3a532f13e37e\" pid:6822 exited_at:{seconds:1747311603 nanos:330672898}"
May 15 12:20:04.666764 kubelet[2713]: E0515 12:20:04.666712 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:20:04.667254 kubelet[2713]: E0515 12:20:04.666910 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:20:05.250613 systemd[1]: Started sshd@77-10.0.0.15:22-10.0.0.1:46908.service - OpenSSH per-connection server daemon (10.0.0.1:46908).
May 15 12:20:05.301630 sshd[6835]: Accepted publickey for core from 10.0.0.1 port 46908 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:05.303253 sshd-session[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:05.308049 systemd-logind[1559]: New session 78 of user core.
May 15 12:20:05.321798 systemd[1]: Started session-78.scope - Session 78 of User core.
May 15 12:20:05.441059 sshd[6837]: Connection closed by 10.0.0.1 port 46908
May 15 12:20:05.441456 sshd-session[6835]: pam_unix(sshd:session): session closed for user core
May 15 12:20:05.446743 systemd[1]: sshd@77-10.0.0.15:22-10.0.0.1:46908.service: Deactivated successfully.
May 15 12:20:05.448962 systemd[1]: session-78.scope: Deactivated successfully.
May 15 12:20:05.450044 systemd-logind[1559]: Session 78 logged out. Waiting for processes to exit.
May 15 12:20:05.451317 systemd-logind[1559]: Removed session 78.
May 15 12:20:06.267385 containerd[1575]: time="2025-05-15T12:20:06.267329968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"c322a0dce23101331c82cdfcbc9d8347867381f50847697a86baf82bb3c660d8\" pid:6861 exited_at:{seconds:1747311606 nanos:267179423}"
May 15 12:20:06.666400 kubelet[2713]: E0515 12:20:06.666365 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:20:08.090765 update_engine[1562]: I20250515 12:20:08.090683 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:20:08.091260 update_engine[1562]: I20250515 12:20:08.090983 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:20:08.091308 update_engine[1562]: I20250515 12:20:08.091284 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:20:08.115238 update_engine[1562]: E20250515 12:20:08.115154 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:20:08.115238 update_engine[1562]: I20250515 12:20:08.115244 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 15 12:20:10.455423 systemd[1]: Started sshd@78-10.0.0.15:22-10.0.0.1:46920.service - OpenSSH per-connection server daemon (10.0.0.1:46920).
May 15 12:20:10.515661 sshd[6874]: Accepted publickey for core from 10.0.0.1 port 46920 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:10.517541 sshd-session[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:10.522488 systemd-logind[1559]: New session 79 of user core.
May 15 12:20:10.532843 systemd[1]: Started session-79.scope - Session 79 of User core.
May 15 12:20:10.655085 sshd[6876]: Connection closed by 10.0.0.1 port 46920
May 15 12:20:10.655457 sshd-session[6874]: pam_unix(sshd:session): session closed for user core
May 15 12:20:10.659163 systemd[1]: sshd@78-10.0.0.15:22-10.0.0.1:46920.service: Deactivated successfully.
May 15 12:20:10.662017 systemd[1]: session-79.scope: Deactivated successfully.
May 15 12:20:10.665905 systemd-logind[1559]: Session 79 logged out. Waiting for processes to exit.
May 15 12:20:10.667698 systemd-logind[1559]: Removed session 79.
May 15 12:20:15.671611 systemd[1]: Started sshd@79-10.0.0.15:22-10.0.0.1:42514.service - OpenSSH per-connection server daemon (10.0.0.1:42514).
May 15 12:20:15.726300 sshd[6890]: Accepted publickey for core from 10.0.0.1 port 42514 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:15.727908 sshd-session[6890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:15.731927 systemd-logind[1559]: New session 80 of user core.
May 15 12:20:15.740813 systemd[1]: Started session-80.scope - Session 80 of User core.
May 15 12:20:15.845337 sshd[6892]: Connection closed by 10.0.0.1 port 42514
May 15 12:20:15.845622 sshd-session[6890]: pam_unix(sshd:session): session closed for user core
May 15 12:20:15.850009 systemd[1]: sshd@79-10.0.0.15:22-10.0.0.1:42514.service: Deactivated successfully.
May 15 12:20:15.852148 systemd[1]: session-80.scope: Deactivated successfully.
May 15 12:20:15.853157 systemd-logind[1559]: Session 80 logged out. Waiting for processes to exit.
May 15 12:20:15.854624 systemd-logind[1559]: Removed session 80.
May 15 12:20:18.090848 update_engine[1562]: I20250515 12:20:18.090771 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:20:18.091233 update_engine[1562]: I20250515 12:20:18.091039 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:20:18.091265 update_engine[1562]: I20250515 12:20:18.091243 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:20:18.100889 update_engine[1562]: E20250515 12:20:18.100858 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:20:18.100941 update_engine[1562]: I20250515 12:20:18.100899 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 15 12:20:18.100941 update_engine[1562]: I20250515 12:20:18.100907 1562 omaha_request_action.cc:617] Omaha request response:
May 15 12:20:18.101001 update_engine[1562]: E20250515 12:20:18.100989 1562 omaha_request_action.cc:636] Omaha request network transfer failed.
May 15 12:20:18.101028 update_engine[1562]: I20250515 12:20:18.101014 1562 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 15 12:20:18.101028 update_engine[1562]: I20250515 12:20:18.101020 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 12:20:18.101067 update_engine[1562]: I20250515 12:20:18.101027 1562 update_attempter.cc:306] Processing Done.
May 15 12:20:18.101067 update_engine[1562]: E20250515 12:20:18.101039 1562 update_attempter.cc:619] Update failed.
May 15 12:20:18.101067 update_engine[1562]: I20250515 12:20:18.101049 1562 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 15 12:20:18.101067 update_engine[1562]: I20250515 12:20:18.101055 1562 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 15 12:20:18.101067 update_engine[1562]: I20250515 12:20:18.101061 1562 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 15 12:20:18.101199 update_engine[1562]: I20250515 12:20:18.101123 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 12:20:18.101199 update_engine[1562]: I20250515 12:20:18.101142 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 12:20:18.101199 update_engine[1562]: I20250515 12:20:18.101148 1562 omaha_request_action.cc:272] Request:
May 15 12:20:18.101199 update_engine[1562]:
May 15 12:20:18.101199 update_engine[1562]:
May 15 12:20:18.101199 update_engine[1562]:
May 15 12:20:18.101199 update_engine[1562]:
May 15 12:20:18.101199 update_engine[1562]:
May 15 12:20:18.101199 update_engine[1562]:
May 15 12:20:18.101199 update_engine[1562]: I20250515 12:20:18.101155 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:20:18.101464 update_engine[1562]: I20250515 12:20:18.101284 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:20:18.101464 update_engine[1562]: I20250515 12:20:18.101454 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:20:18.101524 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 15 12:20:18.138093 update_engine[1562]: E20250515 12:20:18.138022 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138113 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138123 1562 omaha_request_action.cc:617] Omaha request response:
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138132 1562 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138139 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138145 1562 update_attempter.cc:306] Processing Done.
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138154 1562 update_attempter.cc:310] Error event sent.
May 15 12:20:18.138221 update_engine[1562]: I20250515 12:20:18.138165 1562 update_check_scheduler.cc:74] Next update check in 43m2s
May 15 12:20:18.138709 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 15 12:20:20.370568 containerd[1575]: time="2025-05-15T12:20:20.370494833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"e2c6a524f7113f23aaab6cc416de431c8b076b9a30ce771c5babdff38d9c0cd1\" pid:6920 exited_at:{seconds:1747311620 nanos:370213601}"
May 15 12:20:20.858485 systemd[1]: Started sshd@80-10.0.0.15:22-10.0.0.1:42528.service - OpenSSH per-connection server daemon (10.0.0.1:42528).
May 15 12:20:20.909392 sshd[6931]: Accepted publickey for core from 10.0.0.1 port 42528 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:20.911010 sshd-session[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:20.915533 systemd-logind[1559]: New session 81 of user core.
May 15 12:20:20.930810 systemd[1]: Started session-81.scope - Session 81 of User core.
May 15 12:20:21.048714 sshd[6933]: Connection closed by 10.0.0.1 port 42528
May 15 12:20:21.049050 sshd-session[6931]: pam_unix(sshd:session): session closed for user core
May 15 12:20:21.053596 systemd[1]: sshd@80-10.0.0.15:22-10.0.0.1:42528.service: Deactivated successfully.
May 15 12:20:21.055972 systemd[1]: session-81.scope: Deactivated successfully.
May 15 12:20:21.057142 systemd-logind[1559]: Session 81 logged out. Waiting for processes to exit.
May 15 12:20:21.059118 systemd-logind[1559]: Removed session 81.
May 15 12:20:26.062735 systemd[1]: Started sshd@81-10.0.0.15:22-10.0.0.1:52636.service - OpenSSH per-connection server daemon (10.0.0.1:52636).
May 15 12:20:26.108001 sshd[6947]: Accepted publickey for core from 10.0.0.1 port 52636 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:26.109466 sshd-session[6947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:26.114180 systemd-logind[1559]: New session 82 of user core.
May 15 12:20:26.127927 systemd[1]: Started session-82.scope - Session 82 of User core.
May 15 12:20:26.240442 sshd[6949]: Connection closed by 10.0.0.1 port 52636
May 15 12:20:26.240820 sshd-session[6947]: pam_unix(sshd:session): session closed for user core
May 15 12:20:26.245209 systemd[1]: sshd@81-10.0.0.15:22-10.0.0.1:52636.service: Deactivated successfully.
May 15 12:20:26.247748 systemd[1]: session-82.scope: Deactivated successfully.
May 15 12:20:26.250053 systemd-logind[1559]: Session 82 logged out. Waiting for processes to exit.
May 15 12:20:26.251817 systemd-logind[1559]: Removed session 82.
May 15 12:20:27.666709 kubelet[2713]: E0515 12:20:27.666623 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:20:31.259075 systemd[1]: Started sshd@82-10.0.0.15:22-10.0.0.1:52646.service - OpenSSH per-connection server daemon (10.0.0.1:52646).
May 15 12:20:31.311516 sshd[6962]: Accepted publickey for core from 10.0.0.1 port 52646 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:31.313162 sshd-session[6962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:31.317726 systemd-logind[1559]: New session 83 of user core.
May 15 12:20:31.323798 systemd[1]: Started session-83.scope - Session 83 of User core.
May 15 12:20:31.449542 sshd[6964]: Connection closed by 10.0.0.1 port 52646
May 15 12:20:31.449875 sshd-session[6962]: pam_unix(sshd:session): session closed for user core
May 15 12:20:31.453950 systemd[1]: sshd@82-10.0.0.15:22-10.0.0.1:52646.service: Deactivated successfully.
May 15 12:20:31.456060 systemd[1]: session-83.scope: Deactivated successfully.
May 15 12:20:31.456898 systemd-logind[1559]: Session 83 logged out. Waiting for processes to exit.
May 15 12:20:31.458459 systemd-logind[1559]: Removed session 83.
May 15 12:20:33.352819 containerd[1575]: time="2025-05-15T12:20:33.352764656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"612b9543b2acc1012967cce3f81f3ef41d17b0f1e17987d35147d20b37f357aa\" id:\"3d5dbfd0dc8ba45ab554a32943605af82ac9504634ffff79d0b68236e676893f\" pid:6987 exited_at:{seconds:1747311633 nanos:352317862}"
May 15 12:20:36.464040 systemd[1]: Started sshd@83-10.0.0.15:22-10.0.0.1:44794.service - OpenSSH per-connection server daemon (10.0.0.1:44794).
May 15 12:20:36.522363 sshd[7001]: Accepted publickey for core from 10.0.0.1 port 44794 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:36.524269 sshd-session[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:36.530590 systemd-logind[1559]: New session 84 of user core.
May 15 12:20:36.537798 systemd[1]: Started session-84.scope - Session 84 of User core.
May 15 12:20:36.672285 sshd[7003]: Connection closed by 10.0.0.1 port 44794
May 15 12:20:36.672687 sshd-session[7001]: pam_unix(sshd:session): session closed for user core
May 15 12:20:36.678632 systemd[1]: sshd@83-10.0.0.15:22-10.0.0.1:44794.service: Deactivated successfully.
May 15 12:20:36.681919 systemd[1]: session-84.scope: Deactivated successfully.
May 15 12:20:36.683042 systemd-logind[1559]: Session 84 logged out. Waiting for processes to exit.
May 15 12:20:36.685850 systemd-logind[1559]: Removed session 84.
May 15 12:20:41.666413 kubelet[2713]: E0515 12:20:41.666347 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:20:41.694824 systemd[1]: Started sshd@84-10.0.0.15:22-10.0.0.1:44802.service - OpenSSH per-connection server daemon (10.0.0.1:44802).
May 15 12:20:41.732462 sshd[7019]: Accepted publickey for core from 10.0.0.1 port 44802 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:41.733983 sshd-session[7019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:41.738520 systemd-logind[1559]: New session 85 of user core.
May 15 12:20:41.749777 systemd[1]: Started session-85.scope - Session 85 of User core.
May 15 12:20:41.864364 sshd[7021]: Connection closed by 10.0.0.1 port 44802
May 15 12:20:41.864727 sshd-session[7019]: pam_unix(sshd:session): session closed for user core
May 15 12:20:41.868079 systemd[1]: sshd@84-10.0.0.15:22-10.0.0.1:44802.service: Deactivated successfully.
May 15 12:20:41.870182 systemd[1]: session-85.scope: Deactivated successfully.
May 15 12:20:41.872256 systemd-logind[1559]: Session 85 logged out. Waiting for processes to exit.
May 15 12:20:41.873877 systemd-logind[1559]: Removed session 85.
May 15 12:20:46.880042 systemd[1]: Started sshd@85-10.0.0.15:22-10.0.0.1:35104.service - OpenSSH per-connection server daemon (10.0.0.1:35104).
May 15 12:20:46.936909 sshd[7040]: Accepted publickey for core from 10.0.0.1 port 35104 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:46.938737 sshd-session[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:46.943498 systemd-logind[1559]: New session 86 of user core.
May 15 12:20:46.950825 systemd[1]: Started session-86.scope - Session 86 of User core.
May 15 12:20:47.056580 sshd[7042]: Connection closed by 10.0.0.1 port 35104
May 15 12:20:47.056921 sshd-session[7040]: pam_unix(sshd:session): session closed for user core
May 15 12:20:47.061251 systemd[1]: sshd@85-10.0.0.15:22-10.0.0.1:35104.service: Deactivated successfully.
May 15 12:20:47.063175 systemd[1]: session-86.scope: Deactivated successfully.
May 15 12:20:47.064084 systemd-logind[1559]: Session 86 logged out. Waiting for processes to exit.
May 15 12:20:47.065368 systemd-logind[1559]: Removed session 86.
May 15 12:20:50.376420 containerd[1575]: time="2025-05-15T12:20:50.376364045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39ddb70f3a7c331893345db29f40a06e3296bc3ae19f6e9e5231143552688dd5\" id:\"9ece6ed46214524054446c57ae59769b7057e187455dfa24ffdf51678bb3395f\" pid:7067 exited_at:{seconds:1747311650 nanos:376099926}"
May 15 12:20:52.076923 systemd[1]: Started sshd@86-10.0.0.15:22-10.0.0.1:35114.service - OpenSSH per-connection server daemon (10.0.0.1:35114).
May 15 12:20:52.130587 sshd[7089]: Accepted publickey for core from 10.0.0.1 port 35114 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:52.132852 sshd-session[7089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:52.138384 systemd-logind[1559]: New session 87 of user core.
May 15 12:20:52.149872 systemd[1]: Started session-87.scope - Session 87 of User core.
May 15 12:20:52.259979 sshd[7091]: Connection closed by 10.0.0.1 port 35114
May 15 12:20:52.260326 sshd-session[7089]: pam_unix(sshd:session): session closed for user core
May 15 12:20:52.264898 systemd[1]: sshd@86-10.0.0.15:22-10.0.0.1:35114.service: Deactivated successfully.
May 15 12:20:52.267181 systemd[1]: session-87.scope: Deactivated successfully.
May 15 12:20:52.268250 systemd-logind[1559]: Session 87 logged out. Waiting for processes to exit.
May 15 12:20:52.269689 systemd-logind[1559]: Removed session 87.
May 15 12:20:53.666176 kubelet[2713]: E0515 12:20:53.666122 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 12:20:57.278423 systemd[1]: Started sshd@87-10.0.0.15:22-10.0.0.1:49052.service - OpenSSH per-connection server daemon (10.0.0.1:49052).
May 15 12:20:57.332677 sshd[7105]: Accepted publickey for core from 10.0.0.1 port 49052 ssh2: RSA SHA256:PzvkHi2yPlEZU64C+6iShM/DNXKhqlgfV3fjiP6jttI
May 15 12:20:57.334327 sshd-session[7105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:20:57.338902 systemd-logind[1559]: New session 88 of user core.
May 15 12:20:57.347807 systemd[1]: Started session-88.scope - Session 88 of User core.
May 15 12:20:57.468153 sshd[7107]: Connection closed by 10.0.0.1 port 49052
May 15 12:20:57.468498 sshd-session[7105]: pam_unix(sshd:session): session closed for user core
May 15 12:20:57.472376 systemd[1]: sshd@87-10.0.0.15:22-10.0.0.1:49052.service: Deactivated successfully.
May 15 12:20:57.474503 systemd[1]: session-88.scope: Deactivated successfully.
May 15 12:20:57.477043 systemd-logind[1559]: Session 88 logged out. Waiting for processes to exit.
May 15 12:20:57.478335 systemd-logind[1559]: Removed session 88.