May 14 18:05:07.941856 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:05:07.941893 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:05:07.941903 kernel: BIOS-provided physical RAM map:
May 14 18:05:07.941911 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 18:05:07.941917 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 18:05:07.941924 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 18:05:07.941932 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 14 18:05:07.941947 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 14 18:05:07.941957 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:05:07.941964 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 18:05:07.942009 kernel: NX (Execute Disable) protection: active
May 14 18:05:07.942020 kernel: APIC: Static calls initialized
May 14 18:05:07.942030 kernel: SMBIOS 2.8 present.
May 14 18:05:07.942038 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 14 18:05:07.942050 kernel: DMI: Memory slots populated: 1/1
May 14 18:05:07.942058 kernel: Hypervisor detected: KVM
May 14 18:05:07.942071 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:05:07.942083 kernel: kvm-clock: using sched offset of 5702161970 cycles
May 14 18:05:07.942093 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:05:07.942101 kernel: tsc: Detected 2494.140 MHz processor
May 14 18:05:07.942110 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:05:07.942118 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:05:07.942127 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 14 18:05:07.942138 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 18:05:07.942147 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:05:07.942159 kernel: ACPI: Early table checksum verification disabled
May 14 18:05:07.942170 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 14 18:05:07.942178 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942187 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942195 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942207 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 14 18:05:07.942216 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942227 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942237 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942250 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:05:07.942262 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 14 18:05:07.942273 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 14 18:05:07.942284 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 14 18:05:07.942296 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 14 18:05:07.942308 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 14 18:05:07.942329 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 14 18:05:07.942338 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 14 18:05:07.942346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 14 18:05:07.942355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 14 18:05:07.942364 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
May 14 18:05:07.942375 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
May 14 18:05:07.942384 kernel: Zone ranges:
May 14 18:05:07.942393 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:05:07.942401 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 14 18:05:07.942413 kernel: Normal empty
May 14 18:05:07.942429 kernel: Device empty
May 14 18:05:07.942441 kernel: Movable zone start for each node
May 14 18:05:07.942472 kernel: Early memory node ranges
May 14 18:05:07.942491 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 18:05:07.942504 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 14 18:05:07.942529 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 14 18:05:07.942539 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:05:07.942554 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 18:05:07.942566 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 14 18:05:07.942579 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:05:07.942590 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:05:07.942608 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:05:07.942621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:05:07.942638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:05:07.942657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:05:07.942673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:05:07.942685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:05:07.942697 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:05:07.942709 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:05:07.942722 kernel: TSC deadline timer available
May 14 18:05:07.942735 kernel: CPU topo: Max. logical packages: 1
May 14 18:05:07.942748 kernel: CPU topo: Max. logical dies: 1
May 14 18:05:07.942760 kernel: CPU topo: Max. dies per package: 1
May 14 18:05:07.942778 kernel: CPU topo: Max. threads per core: 1
May 14 18:05:07.942791 kernel: CPU topo: Num. cores per package: 2
May 14 18:05:07.942807 kernel: CPU topo: Num. threads per package: 2
May 14 18:05:07.942822 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 14 18:05:07.942838 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:05:07.942853 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 14 18:05:07.942866 kernel: Booting paravirtualized kernel on KVM
May 14 18:05:07.942881 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:05:07.942890 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 14 18:05:07.942903 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 14 18:05:07.942912 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 14 18:05:07.942920 kernel: pcpu-alloc: [0] 0 1
May 14 18:05:07.942929 kernel: kvm-guest: PV spinlocks disabled, no host support
May 14 18:05:07.942939 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:05:07.942948 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:05:07.942957 kernel: random: crng init done
May 14 18:05:07.942965 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:05:07.942990 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 14 18:05:07.943001 kernel: Fallback order for Node 0: 0
May 14 18:05:07.943023 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
May 14 18:05:07.943032 kernel: Policy zone: DMA32
May 14 18:05:07.943040 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:05:07.943050 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 18:05:07.943058 kernel: Kernel/User page tables isolation: enabled
May 14 18:05:07.943068 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:05:07.943076 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:05:07.943085 kernel: Dynamic Preempt: voluntary
May 14 18:05:07.943097 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:05:07.943108 kernel: rcu: RCU event tracing is enabled.
May 14 18:05:07.943117 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 18:05:07.943126 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:05:07.943135 kernel: Rude variant of Tasks RCU enabled.
May 14 18:05:07.943144 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:05:07.943153 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:05:07.943162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 18:05:07.943173 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:05:07.943194 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:05:07.943203 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 18:05:07.943212 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 14 18:05:07.943221 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:05:07.943230 kernel: Console: colour VGA+ 80x25
May 14 18:05:07.943256 kernel: printk: legacy console [tty0] enabled
May 14 18:05:07.943265 kernel: printk: legacy console [ttyS0] enabled
May 14 18:05:07.943274 kernel: ACPI: Core revision 20240827
May 14 18:05:07.943283 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:05:07.943304 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:05:07.943314 kernel: x2apic enabled
May 14 18:05:07.943326 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:05:07.943335 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:05:07.943347 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
May 14 18:05:07.943363 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
May 14 18:05:07.943377 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 14 18:05:07.943390 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 14 18:05:07.943404 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:05:07.943440 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:05:07.943450 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:05:07.943459 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:05:07.943469 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 14 18:05:07.943478 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:05:07.943488 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:05:07.943497 kernel: MDS: Mitigation: Clear CPU buffers
May 14 18:05:07.943511 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 14 18:05:07.943521 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:05:07.943530 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:05:07.943540 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:05:07.943549 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:05:07.943559 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 14 18:05:07.943569 kernel: Freeing SMP alternatives memory: 32K
May 14 18:05:07.943579 kernel: pid_max: default: 32768 minimum: 301
May 14 18:05:07.943589 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:05:07.943602 kernel: landlock: Up and running.
May 14 18:05:07.943612 kernel: SELinux: Initializing.
May 14 18:05:07.943621 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 18:05:07.943631 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 18:05:07.945429 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 14 18:05:07.945469 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 14 18:05:07.945480 kernel: signal: max sigframe size: 1776
May 14 18:05:07.945490 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:05:07.945502 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:05:07.945520 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:05:07.945533 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 14 18:05:07.945547 kernel: smp: Bringing up secondary CPUs ...
May 14 18:05:07.945560 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:05:07.945582 kernel: .... node #0, CPUs: #1
May 14 18:05:07.945597 kernel: smp: Brought up 1 node, 2 CPUs
May 14 18:05:07.945612 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
May 14 18:05:07.945623 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 125140K reserved, 0K cma-reserved)
May 14 18:05:07.945635 kernel: devtmpfs: initialized
May 14 18:05:07.945650 kernel: x86/mm: Memory block size: 128MB
May 14 18:05:07.945660 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:05:07.945670 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 18:05:07.945680 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:05:07.945690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:05:07.945705 kernel: audit: initializing netlink subsys (disabled)
May 14 18:05:07.945719 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:05:07.945731 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:05:07.945745 kernel: audit: type=2000 audit(1747245903.576:1): state=initialized audit_enabled=0 res=1
May 14 18:05:07.945762 kernel: cpuidle: using governor menu
May 14 18:05:07.945775 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:05:07.945789 kernel: dca service started, version 1.12.1
May 14 18:05:07.945803 kernel: PCI: Using configuration type 1 for base access
May 14 18:05:07.945815 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:05:07.945825 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:05:07.945835 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:05:07.945844 kernel: ACPI: Added _OSI(Module Device)
May 14 18:05:07.945854 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:05:07.945868 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:05:07.945884 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:05:07.945898 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:05:07.945911 kernel: ACPI: Interpreter enabled
May 14 18:05:07.945925 kernel: ACPI: PM: (supports S0 S5)
May 14 18:05:07.945939 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:05:07.945949 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:05:07.945959 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:05:07.945968 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 14 18:05:07.946004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:05:07.946268 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:05:07.946414 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 14 18:05:07.946528 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 14 18:05:07.946541 kernel: acpiphp: Slot [3] registered
May 14 18:05:07.946552 kernel: acpiphp: Slot [4] registered
May 14 18:05:07.946561 kernel: acpiphp: Slot [5] registered
May 14 18:05:07.946577 kernel: acpiphp: Slot [6] registered
May 14 18:05:07.946586 kernel: acpiphp: Slot [7] registered
May 14 18:05:07.946596 kernel: acpiphp: Slot [8] registered
May 14 18:05:07.946605 kernel: acpiphp: Slot [9] registered
May 14 18:05:07.946615 kernel: acpiphp: Slot [10] registered
May 14 18:05:07.946624 kernel: acpiphp: Slot [11] registered
May 14 18:05:07.946634 kernel: acpiphp: Slot [12] registered
May 14 18:05:07.946643 kernel: acpiphp: Slot [13] registered
May 14 18:05:07.946652 kernel: acpiphp: Slot [14] registered
May 14 18:05:07.946665 kernel: acpiphp: Slot [15] registered
May 14 18:05:07.946674 kernel: acpiphp: Slot [16] registered
May 14 18:05:07.946687 kernel: acpiphp: Slot [17] registered
May 14 18:05:07.946702 kernel: acpiphp: Slot [18] registered
May 14 18:05:07.946716 kernel: acpiphp: Slot [19] registered
May 14 18:05:07.946728 kernel: acpiphp: Slot [20] registered
May 14 18:05:07.946741 kernel: acpiphp: Slot [21] registered
May 14 18:05:07.946755 kernel: acpiphp: Slot [22] registered
May 14 18:05:07.946766 kernel: acpiphp: Slot [23] registered
May 14 18:05:07.946775 kernel: acpiphp: Slot [24] registered
May 14 18:05:07.946789 kernel: acpiphp: Slot [25] registered
May 14 18:05:07.946798 kernel: acpiphp: Slot [26] registered
May 14 18:05:07.946807 kernel: acpiphp: Slot [27] registered
May 14 18:05:07.946816 kernel: acpiphp: Slot [28] registered
May 14 18:05:07.946825 kernel: acpiphp: Slot [29] registered
May 14 18:05:07.946835 kernel: acpiphp: Slot [30] registered
May 14 18:05:07.946844 kernel: acpiphp: Slot [31] registered
May 14 18:05:07.946854 kernel: PCI host bridge to bus 0000:00
May 14 18:05:07.950185 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:05:07.950411 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:05:07.950560 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:05:07.950679 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 14 18:05:07.950767 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 14 18:05:07.950863 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:05:07.953117 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 14 18:05:07.953309 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 14 18:05:07.953423 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 14 18:05:07.953552 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
May 14 18:05:07.953691 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 14 18:05:07.953827 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 14 18:05:07.953946 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 14 18:05:07.956216 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 14 18:05:07.956365 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
May 14 18:05:07.956476 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
May 14 18:05:07.956628 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 14 18:05:07.956725 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 14 18:05:07.958141 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 14 18:05:07.958296 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 14 18:05:07.958429 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 14 18:05:07.958527 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
May 14 18:05:07.958621 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
May 14 18:05:07.958716 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
May 14 18:05:07.958811 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:05:07.961110 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:05:07.961332 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
May 14 18:05:07.961443 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
May 14 18:05:07.961539 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
May 14 18:05:07.961689 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:05:07.961789 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
May 14 18:05:07.961907 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
May 14 18:05:07.962027 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
May 14 18:05:07.962138 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 14 18:05:07.962240 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
May 14 18:05:07.962332 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
May 14 18:05:07.962450 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 14 18:05:07.962585 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:05:07.962687 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
May 14 18:05:07.962829 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
May 14 18:05:07.964691 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
May 14 18:05:07.965033 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:05:07.965156 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
May 14 18:05:07.965264 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
May 14 18:05:07.965369 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
May 14 18:05:07.965510 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:05:07.965606 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
May 14 18:05:07.965715 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
May 14 18:05:07.965727 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:05:07.965737 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:05:07.965746 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:05:07.965756 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:05:07.965765 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 14 18:05:07.965774 kernel: iommu: Default domain type: Translated
May 14 18:05:07.965784 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:05:07.965799 kernel: PCI: Using ACPI for IRQ routing
May 14 18:05:07.965809 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:05:07.965818 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 18:05:07.965827 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 14 18:05:07.965964 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 14 18:05:07.966080 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 14 18:05:07.966174 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:05:07.966186 kernel: vgaarb: loaded
May 14 18:05:07.966196 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:05:07.966215 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:05:07.966224 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:05:07.966234 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:05:07.966243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:05:07.966253 kernel: pnp: PnP ACPI init
May 14 18:05:07.966262 kernel: pnp: PnP ACPI: found 4 devices
May 14 18:05:07.966272 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:05:07.966281 kernel: NET: Registered PF_INET protocol family
May 14 18:05:07.966295 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:05:07.966305 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 14 18:05:07.966315 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:05:07.966325 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 14 18:05:07.966334 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 14 18:05:07.966344 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 14 18:05:07.966353 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 18:05:07.966362 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 18:05:07.966371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:05:07.966385 kernel: NET: Registered PF_XDP protocol family
May 14 18:05:07.966480 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:05:07.966564 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:05:07.966647 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:05:07.966765 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 14 18:05:07.966890 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 14 18:05:07.967061 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 14 18:05:07.967246 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 14 18:05:07.967288 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 14 18:05:07.967443 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 33826 usecs
May 14 18:05:07.967464 kernel: PCI: CLS 0 bytes, default 64
May 14 18:05:07.967478 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 14 18:05:07.967492 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
May 14 18:05:07.967505 kernel: Initialise system trusted keyrings
May 14 18:05:07.967520 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 14 18:05:07.967533 kernel: Key type asymmetric registered
May 14 18:05:07.967546 kernel: Asymmetric key parser 'x509' registered
May 14 18:05:07.967573 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:05:07.967586 kernel: io scheduler mq-deadline registered
May 14 18:05:07.967601 kernel: io scheduler kyber registered
May 14 18:05:07.967614 kernel: io scheduler bfq registered
May 14 18:05:07.967628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:05:07.967642 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 14 18:05:07.967655 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 14 18:05:07.967669 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 14 18:05:07.967684 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:05:07.967706 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:05:07.967720 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:05:07.967735 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:05:07.967749 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:05:07.968004 kernel: rtc_cmos 00:03: RTC can wake from S4
May 14 18:05:07.968030 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
May 14 18:05:07.968167 kernel: rtc_cmos 00:03: registered as rtc0
May 14 18:05:07.968308 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T18:05:07 UTC (1747245907)
May 14 18:05:07.968447 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 14 18:05:07.968466 kernel: intel_pstate: CPU model not supported
May 14 18:05:07.968483 kernel: NET: Registered PF_INET6 protocol family
May 14 18:05:07.968499 kernel: Segment Routing with IPv6
May 14 18:05:07.968515 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:05:07.968531 kernel: NET: Registered PF_PACKET protocol family
May 14 18:05:07.968547 kernel: Key type dns_resolver registered
May 14 18:05:07.968584 kernel: IPI shorthand broadcast: enabled
May 14 18:05:07.968600 kernel: sched_clock: Marking stable (4060119949, 115316552)->(4246990191, -71553690)
May 14 18:05:07.968624 kernel: registered taskstats version 1
May 14 18:05:07.968641 kernel: Loading compiled-in X.509 certificates
May 14 18:05:07.968654 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:05:07.968668 kernel: Demotion targets for Node 0: null
May 14 18:05:07.968683 kernel: Key type .fscrypt registered
May 14 18:05:07.968699 kernel: Key type fscrypt-provisioning registered
May 14 18:05:07.968849 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:05:07.968870 kernel: ima: Allocated hash algorithm: sha1
May 14 18:05:07.968906 kernel: ima: No architecture policies found
May 14 18:05:07.968919 kernel: clk: Disabling unused clocks
May 14 18:05:07.968934 kernel: Warning: unable to open an initial console.
May 14 18:05:07.968952 kernel: Freeing unused kernel image (initmem) memory: 54424K
May 14 18:05:07.968968 kernel: Write protecting the kernel read-only data: 24576k
May 14 18:05:07.969000 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 14 18:05:07.969017 kernel: Run /init as init process
May 14 18:05:07.969034 kernel: with arguments:
May 14 18:05:07.969046 kernel: /init
May 14 18:05:07.969064 kernel: with environment:
May 14 18:05:07.969074 kernel: HOME=/
May 14 18:05:07.969084 kernel: TERM=linux
May 14 18:05:07.969094 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:05:07.969107 systemd[1]: Successfully made /usr/ read-only.
May 14 18:05:07.969123 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:05:07.969134 systemd[1]: Detected virtualization kvm.
May 14 18:05:07.969144 systemd[1]: Detected architecture x86-64.
May 14 18:05:07.969160 systemd[1]: Running in initrd.
May 14 18:05:07.969170 systemd[1]: No hostname configured, using default hostname.
May 14 18:05:07.969181 systemd[1]: Hostname set to .
May 14 18:05:07.969191 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:05:07.969201 systemd[1]: Queued start job for default target initrd.target.
May 14 18:05:07.969211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:05:07.969222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:05:07.969233 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:05:07.969249 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:05:07.969259 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:05:07.969275 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:05:07.969287 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:05:07.969302 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:05:07.969312 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:05:07.969323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:05:07.969336 systemd[1]: Reached target paths.target - Path Units.
May 14 18:05:07.969351 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:05:07.969367 systemd[1]: Reached target swap.target - Swaps.
May 14 18:05:07.969383 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:05:07.969397 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:05:07.969414 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:05:07.969431 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:05:07.969448 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:05:07.969460 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:05:07.969471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:05:07.969481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:05:07.969491 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:05:07.969503 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:05:07.969521 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:05:07.969558 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:05:07.969569 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:05:07.969580 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:05:07.969596 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:05:07.969610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:05:07.969623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:05:07.969638 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:05:07.969664 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:05:07.969681 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:05:07.969699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:05:07.969776 systemd-journald[210]: Collecting audit messages is disabled.
May 14 18:05:07.969822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:05:07.969842 systemd-journald[210]: Journal started
May 14 18:05:07.969879 systemd-journald[210]: Runtime Journal (/run/log/journal/a98f4b55294b4ade8f598053a0112a66) is 4.9M, max 39.5M, 34.6M free.
May 14 18:05:07.920932 systemd-modules-load[212]: Inserted module 'overlay'
May 14 18:05:08.010301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:05:08.010335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:05:08.010351 kernel: Bridge firewalling registered
May 14 18:05:07.981212 systemd-modules-load[212]: Inserted module 'br_netfilter'
May 14 18:05:08.013137 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:05:08.015355 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:05:08.017599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:08.024219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:05:08.027060 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:05:08.031805 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:05:08.043735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:05:08.055613 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:05:08.056406 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:05:08.064009 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:05:08.068188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:05:08.072060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:05:08.076226 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:05:08.101008 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:05:08.122116 systemd-resolved[246]: Positive Trust Anchors:
May 14 18:05:08.122801 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:05:08.123317 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:05:08.128943 systemd-resolved[246]: Defaulting to hostname 'linux'.
May 14 18:05:08.131246 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:05:08.132403 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:05:08.214022 kernel: SCSI subsystem initialized
May 14 18:05:08.225019 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:05:08.238091 kernel: iscsi: registered transport (tcp)
May 14 18:05:08.266007 kernel: iscsi: registered transport (qla4xxx)
May 14 18:05:08.266096 kernel: QLogic iSCSI HBA Driver
May 14 18:05:08.301471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:05:08.320252 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:05:08.324564 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:05:08.401326 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:05:08.404271 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:05:08.472080 kernel: raid6: avx2x4 gen() 12077 MB/s
May 14 18:05:08.489081 kernel: raid6: avx2x2 gen() 14667 MB/s
May 14 18:05:08.506156 kernel: raid6: avx2x1 gen() 12555 MB/s
May 14 18:05:08.506305 kernel: raid6: using algorithm avx2x2 gen() 14667 MB/s
May 14 18:05:08.524094 kernel: raid6: .... xor() 15082 MB/s, rmw enabled
May 14 18:05:08.524207 kernel: raid6: using avx2x2 recovery algorithm
May 14 18:05:08.551062 kernel: xor: automatically using best checksumming function avx
May 14 18:05:08.761258 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:05:08.774912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:05:08.778597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:05:08.816273 systemd-udevd[459]: Using default interface naming scheme 'v255'.
May 14 18:05:08.826427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:05:08.830715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:05:08.867438 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
May 14 18:05:08.917818 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:05:08.921433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:05:08.999919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:05:09.004110 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:05:09.087074 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
May 14 18:05:09.102324 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 14 18:05:09.139230 kernel: scsi host0: Virtio SCSI HBA
May 14 18:05:09.139429 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 14 18:05:09.139541 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:05:09.139555 kernel: GPT:9289727 != 125829119
May 14 18:05:09.139566 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:05:09.139578 kernel: GPT:9289727 != 125829119
May 14 18:05:09.139590 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:05:09.139602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:09.139624 kernel: libata version 3.00 loaded.
May 14 18:05:09.139637 kernel: ata_piix 0000:00:01.1: version 2.13
May 14 18:05:09.194813 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 14 18:05:09.196306 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 14 18:05:09.196529 kernel: scsi host1: ata_piix
May 14 18:05:09.196713 kernel: cryptd: max_cpu_qlen set to 1000
May 14 18:05:09.196735 kernel: scsi host2: ata_piix
May 14 18:05:09.196953 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
May 14 18:05:09.197035 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
May 14 18:05:09.197055 kernel: AES CTR mode by8 optimization enabled
May 14 18:05:09.197072 kernel: ACPI: bus type USB registered
May 14 18:05:09.197089 kernel: usbcore: registered new interface driver usbfs
May 14 18:05:09.197105 kernel: usbcore: registered new interface driver hub
May 14 18:05:09.197121 kernel: usbcore: registered new device driver usb
May 14 18:05:09.212837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:05:09.213079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:09.213720 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:05:09.216212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:05:09.217926 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:05:09.285858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:09.323022 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 14 18:05:09.386386 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 14 18:05:09.396007 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 14 18:05:09.396244 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 14 18:05:09.396411 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 14 18:05:09.396589 kernel: hub 1-0:1.0: USB hub found
May 14 18:05:09.396780 kernel: hub 1-0:1.0: 2 ports detected
May 14 18:05:09.413519 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:05:09.425158 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:05:09.439195 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:05:09.452258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:05:09.464030 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:05:09.464615 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:05:09.465878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:05:09.466662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:05:09.467451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:05:09.469598 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:05:09.471196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:05:09.497378 disk-uuid[618]: Primary Header is updated.
May 14 18:05:09.497378 disk-uuid[618]: Secondary Entries is updated.
May 14 18:05:09.497378 disk-uuid[618]: Secondary Header is updated.
May 14 18:05:09.507037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:09.507379 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:05:10.522027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:05:10.523602 disk-uuid[621]: The operation has completed successfully.
May 14 18:05:10.586522 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:05:10.587182 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:05:10.632294 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:05:10.655452 sh[637]: Success
May 14 18:05:10.678334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:05:10.678430 kernel: device-mapper: uevent: version 1.0.3
May 14 18:05:10.679691 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:05:10.694351 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
May 14 18:05:10.773143 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:05:10.778136 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:05:10.794942 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:05:10.812131 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:05:10.812242 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (649)
May 14 18:05:10.816259 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7
May 14 18:05:10.816347 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:10.818015 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:05:10.828553 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:05:10.830196 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:05:10.831007 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:05:10.832064 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:05:10.834087 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:05:10.871037 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (680)
May 14 18:05:10.874802 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:10.874924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:10.874948 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:05:10.888055 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:10.890497 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:05:10.893253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:05:11.023046 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:05:11.032243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:05:11.092461 systemd-networkd[819]: lo: Link UP
May 14 18:05:11.092478 systemd-networkd[819]: lo: Gained carrier
May 14 18:05:11.096263 systemd-networkd[819]: Enumeration completed
May 14 18:05:11.096502 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:05:11.096878 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 14 18:05:11.096885 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 14 18:05:11.097227 systemd[1]: Reached target network.target - Network.
May 14 18:05:11.099547 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:05:11.099554 systemd-networkd[819]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:05:11.102548 systemd-networkd[819]: eth0: Link UP
May 14 18:05:11.102555 systemd-networkd[819]: eth0: Gained carrier
May 14 18:05:11.102579 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 14 18:05:11.111841 systemd-networkd[819]: eth1: Link UP
May 14 18:05:11.111857 systemd-networkd[819]: eth1: Gained carrier
May 14 18:05:11.111884 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:05:11.131128 systemd-networkd[819]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253
May 14 18:05:11.139169 systemd-networkd[819]: eth0: DHCPv4 address 165.232.128.115/20, gateway 165.232.128.1 acquired from 169.254.169.253
May 14 18:05:11.143461 ignition[725]: Ignition 2.21.0
May 14 18:05:11.143475 ignition[725]: Stage: fetch-offline
May 14 18:05:11.143511 ignition[725]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:11.143520 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:11.143677 ignition[725]: parsed url from cmdline: ""
May 14 18:05:11.148167 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:05:11.143683 ignition[725]: no config URL provided
May 14 18:05:11.143691 ignition[725]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:05:11.143703 ignition[725]: no config at "/usr/lib/ignition/user.ign"
May 14 18:05:11.151412 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 18:05:11.143711 ignition[725]: failed to fetch config: resource requires networking
May 14 18:05:11.145957 ignition[725]: Ignition finished successfully
May 14 18:05:11.201079 ignition[829]: Ignition 2.21.0
May 14 18:05:11.201101 ignition[829]: Stage: fetch
May 14 18:05:11.201356 ignition[829]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:11.201375 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:11.201519 ignition[829]: parsed url from cmdline: ""
May 14 18:05:11.201525 ignition[829]: no config URL provided
May 14 18:05:11.201535 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:05:11.201548 ignition[829]: no config at "/usr/lib/ignition/user.ign"
May 14 18:05:11.201604 ignition[829]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 14 18:05:11.218332 ignition[829]: GET result: OK
May 14 18:05:11.219346 ignition[829]: parsing config with SHA512: 993af20e4ab21b22fb37daaa7a1474c87863281b43c1ce79318085fae1033ff6978a68be85ceeb6c2519003870bea327b5f7a7d8ea8350dbeb39d888aecc44a3
May 14 18:05:11.224927 unknown[829]: fetched base config from "system"
May 14 18:05:11.225696 unknown[829]: fetched base config from "system"
May 14 18:05:11.226119 unknown[829]: fetched user config from "digitalocean"
May 14 18:05:11.226522 ignition[829]: fetch: fetch complete
May 14 18:05:11.226528 ignition[829]: fetch: fetch passed
May 14 18:05:11.226596 ignition[829]: Ignition finished successfully
May 14 18:05:11.230145 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 18:05:11.233518 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:05:11.287178 ignition[835]: Ignition 2.21.0
May 14 18:05:11.287198 ignition[835]: Stage: kargs
May 14 18:05:11.287451 ignition[835]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:11.287468 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:11.290433 ignition[835]: kargs: kargs passed
May 14 18:05:11.290499 ignition[835]: Ignition finished successfully
May 14 18:05:11.294287 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:05:11.297172 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:05:11.332553 ignition[842]: Ignition 2.21.0
May 14 18:05:11.332569 ignition[842]: Stage: disks
May 14 18:05:11.333146 ignition[842]: no configs at "/usr/lib/ignition/base.d"
May 14 18:05:11.333174 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:11.336309 ignition[842]: disks: disks passed
May 14 18:05:11.336441 ignition[842]: Ignition finished successfully
May 14 18:05:11.338354 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:05:11.339662 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:05:11.340253 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:05:11.341346 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:05:11.342328 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:05:11.343193 systemd[1]: Reached target basic.target - Basic System.
May 14 18:05:11.345258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:05:11.390615 systemd-fsck[850]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:05:11.394967 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:05:11.398190 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:05:11.548382 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none.
May 14 18:05:11.549693 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:05:11.550860 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:05:11.556461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:05:11.559017 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:05:11.562849 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 14 18:05:11.569064 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 14 18:05:11.570287 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:05:11.570424 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:05:11.591318 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (858)
May 14 18:05:11.593083 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:11.594925 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:05:11.597904 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:11.597938 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:05:11.615733 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:05:11.623027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:05:11.714342 coreos-metadata[861]: May 14 18:05:11.714 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:05:11.722459 coreos-metadata[860]: May 14 18:05:11.722 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:05:11.726773 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:05:11.728219 coreos-metadata[861]: May 14 18:05:11.728 INFO Fetch successful
May 14 18:05:11.736659 coreos-metadata[860]: May 14 18:05:11.736 INFO Fetch successful
May 14 18:05:11.739792 coreos-metadata[861]: May 14 18:05:11.739 INFO wrote hostname ci-4334.0.0-a-4c74b6421c to /sysroot/etc/hostname
May 14 18:05:11.742282 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 18:05:11.745020 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory
May 14 18:05:11.745310 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
May 14 18:05:11.745463 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
May 14 18:05:11.752851 initrd-setup-root[904]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:05:11.760881 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:05:11.905263 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:05:11.908206 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:05:11.911183 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:05:11.929573 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:05:11.930310 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:11.956358 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 18:05:11.970333 ignition[980]: INFO : Ignition 2.21.0
May 14 18:05:11.970333 ignition[980]: INFO : Stage: mount
May 14 18:05:11.971703 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:05:11.971703 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:11.973142 ignition[980]: INFO : mount: mount passed
May 14 18:05:11.973142 ignition[980]: INFO : Ignition finished successfully
May 14 18:05:11.973535 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:05:11.976355 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:05:12.007581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:05:12.040034 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (991)
May 14 18:05:12.044462 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:05:12.045588 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:05:12.045685 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:05:12.054383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:05:12.099524 ignition[1008]: INFO : Ignition 2.21.0
May 14 18:05:12.099524 ignition[1008]: INFO : Stage: files
May 14 18:05:12.100831 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:05:12.100831 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:12.103685 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:05:12.105300 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:05:12.105300 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:05:12.111663 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:05:12.112840 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:05:12.114417 unknown[1008]: wrote ssh authorized keys file for user: core
May 14 18:05:12.115693 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:05:12.117190 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:05:12.117190 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 18:05:12.173395 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:05:12.598692 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:05:12.598692 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:05:12.602169 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:05:12.618578 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:05:12.618578 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:12.618578 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:12.618578 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:12.618578 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 14 18:05:12.738500 systemd-networkd[819]: eth1: Gained IPv6LL
May 14 18:05:13.005454 systemd-networkd[819]: eth0: Gained IPv6LL
May 14 18:05:13.085566 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 18:05:13.737064 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 14 18:05:13.739768 ignition[1008]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 18:05:13.741452 ignition[1008]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:05:13.745095 ignition[1008]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:05:13.745095 ignition[1008]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 18:05:13.745095 ignition[1008]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:05:13.745095 ignition[1008]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:05:13.751118 ignition[1008]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:05:13.751118 ignition[1008]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:05:13.751118 ignition[1008]: INFO : files: files passed
May 14 18:05:13.751118 ignition[1008]: INFO : Ignition finished successfully
May 14 18:05:13.748103 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:05:13.754241 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:05:13.757437 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:05:13.790546 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:05:13.790784 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:05:13.808758 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:05:13.808758 initrd-setup-root-after-ignition[1038]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:05:13.810684 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:05:13.812375 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:05:13.813757 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 18:05:13.816053 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 18:05:13.900381 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 18:05:13.900579 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 18:05:13.902686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 18:05:13.903460 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 18:05:13.904804 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 18:05:13.907237 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 18:05:13.965243 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:05:13.967802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 18:05:14.023291 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 18:05:14.024681 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:05:14.025283 systemd[1]: Stopped target timers.target - Timer Units.
May 14 18:05:14.025780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 18:05:14.025959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:05:14.026765 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 18:05:14.028189 systemd[1]: Stopped target basic.target - Basic System.
May 14 18:05:14.031695 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 18:05:14.032227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:05:14.034716 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 18:05:14.037910 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:05:14.039375 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 18:05:14.040764 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:05:14.043241 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 18:05:14.044414 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 18:05:14.045436 systemd[1]: Stopped target swap.target - Swaps.
May 14 18:05:14.046311 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 18:05:14.046546 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:05:14.048073 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 18:05:14.049388 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:05:14.050329 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 18:05:14.050553 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:05:14.051228 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 18:05:14.051444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 18:05:14.052843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 18:05:14.053222 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:05:14.054291 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 18:05:14.054538 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 18:05:14.055497 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 14 18:05:14.055739 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 14 18:05:14.059348 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 18:05:14.060268 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 18:05:14.060586 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:05:14.065768 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 18:05:14.066936 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 18:05:14.067998 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:05:14.075381 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 18:05:14.075853 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:05:14.084836 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 18:05:14.086109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 18:05:14.133613 ignition[1062]: INFO : Ignition 2.21.0
May 14 18:05:14.136114 ignition[1062]: INFO : Stage: umount
May 14 18:05:14.136114 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:05:14.136114 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 14 18:05:14.139271 ignition[1062]: INFO : umount: umount passed
May 14 18:05:14.139271 ignition[1062]: INFO : Ignition finished successfully
May 14 18:05:14.136633 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 18:05:14.141907 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:05:14.142176 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:05:14.143773 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 18:05:14.144119 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 18:05:14.147767 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 18:05:14.147908 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 18:05:14.149329 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 18:05:14.149426 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 18:05:14.150359 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 18:05:14.150451 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 18:05:14.152327 systemd[1]: Stopped target network.target - Network.
May 14 18:05:14.156833 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 18:05:14.157103 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:05:14.157908 systemd[1]: Stopped target paths.target - Path Units.
May 14 18:05:14.158343 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 18:05:14.162192 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:05:14.162775 systemd[1]: Stopped target slices.target - Slice Units.
May 14 18:05:14.167150 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 18:05:14.168283 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 18:05:14.168376 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:05:14.168930 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 18:05:14.169036 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:05:14.170158 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 18:05:14.170293 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 18:05:14.171251 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 18:05:14.171332 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 18:05:14.172831 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:05:14.172939 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:05:14.176748 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 18:05:14.181179 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 18:05:14.186213 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 18:05:14.186820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 18:05:14.193807 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 18:05:14.194421 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 18:05:14.194623 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 18:05:14.197475 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 18:05:14.200482 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 18:05:14.201890 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 18:05:14.202671 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:05:14.205323 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 18:05:14.205996 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 18:05:14.206136 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:05:14.206763 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:05:14.206834 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:05:14.207840 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 18:05:14.207909 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 18:05:14.208621 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 18:05:14.208702 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:05:14.209916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:05:14.217930 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:05:14.219378 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:05:14.231282 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:05:14.231589 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:05:14.237900 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:05:14.238437 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:05:14.239406 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:05:14.239466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:05:14.240296 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:05:14.240413 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:05:14.242472 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:05:14.242557 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:05:14.247055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:05:14.247192 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:05:14.254188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:05:14.254826 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:05:14.254932 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:05:14.256526 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:05:14.256603 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:05:14.260332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:05:14.260430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:05:14.266387 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 18:05:14.266514 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 18:05:14.266586 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:05:14.268400 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:05:14.269588 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:05:14.282062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:05:14.282955 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:05:14.284491 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:05:14.291704 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:05:14.323408 systemd[1]: Switching root.
May 14 18:05:14.410061 systemd-journald[210]: Received SIGTERM from PID 1 (systemd).
May 14 18:05:14.410252 systemd-journald[210]: Journal stopped
May 14 18:05:18.015775 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:05:18.015881 kernel: SELinux: policy capability open_perms=1
May 14 18:05:18.015897 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:05:18.015910 kernel: SELinux: policy capability always_check_network=0
May 14 18:05:18.015922 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:05:18.015939 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:05:18.015958 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:05:18.042005 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:05:18.042059 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:05:18.042073 kernel: audit: type=1403 audit(1747245916.533:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:05:18.042089 systemd[1]: Successfully loaded SELinux policy in 79.455ms.
May 14 18:05:18.042122 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.481ms.
May 14 18:05:18.042210 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:05:18.042224 systemd[1]: Detected virtualization kvm.
May 14 18:05:18.042248 systemd[1]: Detected architecture x86-64.
May 14 18:05:18.042263 systemd[1]: Detected first boot.
May 14 18:05:18.042276 systemd[1]: Hostname set to .
May 14 18:05:18.042290 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:05:18.042304 zram_generator::config[1107]: No configuration found.
May 14 18:05:18.042319 kernel: Guest personality initialized and is inactive
May 14 18:05:18.042331 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 18:05:18.042344 kernel: Initialized host personality
May 14 18:05:18.042357 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:05:18.042375 systemd[1]: Populated /etc with preset unit settings.
May 14 18:05:18.042393 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:05:18.042406 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:05:18.042419 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:05:18.042433 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:05:18.042446 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:05:18.042460 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:05:18.042473 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:05:18.042491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:05:18.042505 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:05:18.042518 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:05:18.042531 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:05:18.042544 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:05:18.042558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:05:18.042572 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:05:18.042595 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:05:18.042608 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:05:18.042635 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:05:18.042649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:05:18.042661 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 18:05:18.042674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:05:18.042687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:05:18.042699 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:05:18.042718 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:05:18.042731 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:05:18.042744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:05:18.042757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:05:18.042771 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:05:18.042785 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:05:18.042797 systemd[1]: Reached target swap.target - Swaps.
May 14 18:05:18.042810 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:05:18.042823 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:05:18.042842 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:05:18.042855 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:05:18.042868 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:05:18.042880 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:05:18.042893 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:05:18.042905 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:05:18.042927 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:05:18.042940 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:05:18.042953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:18.042989 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:05:18.043003 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:05:18.043016 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:05:18.043029 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:05:18.043042 systemd[1]: Reached target machines.target - Containers.
May 14 18:05:18.043055 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:05:18.043068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:05:18.043080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:05:18.043093 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:05:18.043112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:05:18.043126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:05:18.043138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:05:18.043151 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:05:18.043163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:05:18.043177 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:05:18.043190 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:05:18.043203 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:05:18.043224 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:05:18.043240 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:05:18.043254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:05:18.043268 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:05:18.043286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:05:18.043304 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:05:18.043318 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:05:18.043331 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:05:18.043344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:05:18.043358 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:05:18.043376 systemd[1]: Stopped verity-setup.service.
May 14 18:05:18.043390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:18.043403 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:05:18.043417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:05:18.043431 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:05:18.043443 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:05:18.043467 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:05:18.043491 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:05:18.043511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:05:18.043531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:05:18.043544 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:05:18.043558 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:05:18.043575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:05:18.043589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:05:18.043602 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:05:18.043615 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:05:18.043629 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:05:18.043642 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:05:18.043660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:05:18.043674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:05:18.043690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:05:18.043707 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:05:18.043726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:05:18.043746 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:05:18.043766 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:05:18.043786 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:05:18.043816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:05:18.043836 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:05:18.043863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:05:18.043883 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:05:18.043903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:05:18.043921 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:05:18.043939 kernel: ACPI: bus type drm_connector registered
May 14 18:05:18.043965 kernel: fuse: init (API version 7.41)
May 14 18:05:18.059942 kernel: loop: module loaded
May 14 18:05:18.060004 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:05:18.060053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:05:18.060074 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:05:18.060096 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:05:18.060122 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:05:18.060202 systemd-journald[1174]: Collecting audit messages is disabled.
May 14 18:05:18.060245 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:05:18.060275 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:05:18.060302 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:05:18.060324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:05:18.060347 systemd-journald[1174]: Journal started
May 14 18:05:18.060385 systemd-journald[1174]: Runtime Journal (/run/log/journal/a98f4b55294b4ade8f598053a0112a66) is 4.9M, max 39.5M, 34.6M free.
May 14 18:05:17.531522 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:05:18.062304 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:05:17.545858 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 18:05:17.546409 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:05:18.099549 kernel: loop0: detected capacity change from 0 to 8
May 14 18:05:18.085349 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:05:18.121502 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:05:18.144058 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:05:18.146811 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:05:18.153370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:05:18.180070 kernel: loop1: detected capacity change from 0 to 205544
May 14 18:05:18.202388 systemd-journald[1174]: Time spent on flushing to /var/log/journal/a98f4b55294b4ade8f598053a0112a66 is 130.909ms for 1012 entries.
May 14 18:05:18.202388 systemd-journald[1174]: System Journal (/var/log/journal/a98f4b55294b4ade8f598053a0112a66) is 8M, max 195.6M, 187.6M free.
May 14 18:05:18.357241 systemd-journald[1174]: Received client request to flush runtime journal.
May 14 18:05:18.357340 kernel: loop2: detected capacity change from 0 to 113872
May 14 18:05:18.357376 kernel: loop3: detected capacity change from 0 to 146240
May 14 18:05:18.246521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:05:18.255290 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:05:18.276577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:05:18.313156 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:05:18.320191 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:05:18.365231 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:05:18.424215 kernel: loop4: detected capacity change from 0 to 8
May 14 18:05:18.430576 kernel: loop5: detected capacity change from 0 to 205544
May 14 18:05:18.474240 kernel: loop6: detected capacity change from 0 to 113872
May 14 18:05:18.486489 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:05:18.497540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:05:18.515303 kernel: loop7: detected capacity change from 0 to 146240
May 14 18:05:18.548807 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:05:18.558295 (sd-merge)[1251]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 14 18:05:18.562068 (sd-merge)[1251]: Merged extensions into '/usr'.
May 14 18:05:18.578639 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:05:18.578666 systemd[1]: Reloading...
May 14 18:05:18.687762 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 14 18:05:18.687791 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 14 18:05:18.759009 zram_generator::config[1281]: No configuration found.
May 14 18:05:18.834187 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:05:18.954766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:05:19.064507 systemd[1]: Reloading finished in 485 ms.
May 14 18:05:19.082860 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:05:19.084652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:05:19.085683 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:05:19.102257 systemd[1]: Starting ensure-sysext.service...
May 14 18:05:19.106222 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:05:19.147043 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
May 14 18:05:19.147061 systemd[1]: Reloading...
May 14 18:05:19.174747 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:05:19.174781 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:05:19.175060 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:05:19.175328 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:05:19.177488 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:05:19.177910 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
May 14 18:05:19.178013 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
May 14 18:05:19.184572 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:05:19.184585 systemd-tmpfiles[1326]: Skipping /boot
May 14 18:05:19.229330 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:05:19.229346 systemd-tmpfiles[1326]: Skipping /boot
May 14 18:05:19.311074 zram_generator::config[1356]: No configuration found.
May 14 18:05:19.457921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:05:19.572135 systemd[1]: Reloading finished in 424 ms.
May 14 18:05:19.583885 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:05:19.584922 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:05:19.608253 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:05:19.612315 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:05:19.617583 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:05:19.624062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:05:19.630427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:05:19.634302 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:05:19.646505 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:19.646828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:05:19.650537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:05:19.661417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:05:19.669749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:05:19.670479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:05:19.670628 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:05:19.670720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:19.674040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:19.674251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:05:19.674425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:05:19.674504 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:05:19.674587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:19.683159 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:05:19.687874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:19.688752 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:05:19.699537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:05:19.701220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:05:19.701384 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:05:19.701530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:19.705504 systemd[1]: Finished ensure-sysext.service.
May 14 18:05:19.715410 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:05:19.721593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:05:19.733346 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:05:19.747527 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:05:19.754589 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:05:19.758060 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:05:19.760101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:05:19.763075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:05:19.763891 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:05:19.765403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:05:19.767140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:05:19.775062 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:05:19.776070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:05:19.780754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:05:19.787619 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:05:19.789097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:05:19.804107 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:05:19.807410 systemd-udevd[1402]: Using default interface naming scheme 'v255'.
May 14 18:05:19.842458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:05:19.847673 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:05:19.866406 augenrules[1446]: No rules
May 14 18:05:19.867379 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:05:19.867736 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:05:19.872863 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:05:20.067790 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
May 14 18:05:20.071454 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 14 18:05:20.073020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:20.073208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:05:20.074958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:05:20.079362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:05:20.082413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:05:20.083082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:05:20.083129 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:05:20.083159 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:05:20.083182 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:05:20.140416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:05:20.140658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:05:20.154062 kernel: ISO 9660 Extensions: RRIP_1991A
May 14 18:05:20.159597 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 14 18:05:20.169391 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:05:20.171276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:05:20.176666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:05:20.177511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:05:20.180560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:05:20.180664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:05:20.272198 systemd-networkd[1440]: lo: Link UP
May 14 18:05:20.272208 systemd-networkd[1440]: lo: Gained carrier
May 14 18:05:20.314362 systemd-resolved[1401]: Positive Trust Anchors:
May 14 18:05:20.314391 systemd-resolved[1401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:05:20.314442 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:05:20.325340 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:05:20.326477 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:05:20.334743 systemd-resolved[1401]: Using system hostname 'ci-4334.0.0-a-4c74b6421c'.
May 14 18:05:20.340044 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:05:20.340613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:05:20.342124 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:05:20.342881 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:05:20.343395 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:05:20.343856 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:05:20.344590 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:05:20.346244 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:05:20.346744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:05:20.347363 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:05:20.347426 systemd[1]: Reached target paths.target - Path Units.
May 14 18:05:20.347883 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:05:20.349197 systemd-networkd[1440]: Enumeration completed
May 14 18:05:20.350784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:05:20.354675 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:05:20.361537 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:05:20.362723 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:05:20.364112 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:05:20.373890 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:05:20.375590 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:05:20.376939 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:05:20.377872 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:05:20.380601 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:05:20.385751 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:05:20.387618 systemd[1]: Reached target network.target - Network.
May 14 18:05:20.388047 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:05:20.388364 systemd[1]: Reached target basic.target - Basic System.
May 14 18:05:20.388730 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:05:20.388761 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:05:20.392176 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:05:20.395390 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 18:05:20.401510 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:05:20.407484 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:05:20.413541 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:05:20.420359 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:05:20.422120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:05:20.426073 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:05:20.430560 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:05:20.440332 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:05:20.453269 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:05:20.462106 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:05:20.470913 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:05:20.474270 jq[1507]: false
May 14 18:05:20.485195 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Refreshing passwd entry cache
May 14 18:05:20.486719 oslogin_cache_refresh[1509]: Refreshing passwd entry cache
May 14 18:05:20.491824 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:05:20.494832 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Failure getting users, quitting
May 14 18:05:20.499059 oslogin_cache_refresh[1509]: Failure getting users, quitting
May 14 18:05:20.500374 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:05:20.500374 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Refreshing group entry cache
May 14 18:05:20.500374 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Failure getting groups, quitting
May 14 18:05:20.500374 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:05:20.499123 oslogin_cache_refresh[1509]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:05:20.499201 oslogin_cache_refresh[1509]: Refreshing group entry cache
May 14 18:05:20.500119 oslogin_cache_refresh[1509]: Failure getting groups, quitting
May 14 18:05:20.500138 oslogin_cache_refresh[1509]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:05:20.505435 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:05:20.515302 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:05:20.518172 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:05:20.519740 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:05:20.524409 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:05:20.538387 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:05:20.554319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:05:20.555334 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:05:20.555596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:05:20.556105 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:05:20.556354 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:05:20.567810 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:05:20.569185 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:05:20.613225 update_engine[1521]: I20250514 18:05:20.608911 1521 main.cc:92] Flatcar Update Engine starting
May 14 18:05:20.628015 extend-filesystems[1508]: Found loop4
May 14 18:05:20.628015 extend-filesystems[1508]: Found loop5
May 14 18:05:20.628015 extend-filesystems[1508]: Found loop6
May 14 18:05:20.628015 extend-filesystems[1508]: Found loop7
May 14 18:05:20.628015 extend-filesystems[1508]: Found vda
May 14 18:05:20.628015 extend-filesystems[1508]: Found vda1
May 14 18:05:20.628015 extend-filesystems[1508]: Found vda2
May 14 18:05:20.628015 extend-filesystems[1508]: Found vda3
May 14 18:05:20.633198 jq[1523]: true
May 14 18:05:20.641004 extend-filesystems[1508]: Found usr
May 14 18:05:20.641004 extend-filesystems[1508]: Found vda4
May 14 18:05:20.641004 extend-filesystems[1508]: Found vda6
May 14 18:05:20.641004 extend-filesystems[1508]: Found vda7
May 14 18:05:20.641004 extend-filesystems[1508]: Found vda9
May 14 18:05:20.641004 extend-filesystems[1508]: Checking size of /dev/vda9
May 14 18:05:20.650249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:05:20.681495 tar[1529]: linux-amd64/helm
May 14 18:05:20.687501 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 18:05:20.692218 systemd[1]: motdgen.service: Deactivated successfully.
May 14 18:05:20.693952 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 18:05:20.729024 coreos-metadata[1504]: May 14 18:05:20.726 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 14 18:05:20.729024 coreos-metadata[1504]: May 14 18:05:20.726 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
May 14 18:05:20.743187 extend-filesystems[1508]: Resized partition /dev/vda9
May 14 18:05:20.739715 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:05:20.750044 extend-filesystems[1553]: resize2fs 1.47.2 (1-Jan-2025)
May 14 18:05:20.751813 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
May 14 18:05:20.757812 jq[1544]: true
May 14 18:05:20.756376 systemd-logind[1517]: New seat seat0.
May 14 18:05:20.769747 dbus-daemon[1505]: [system] SELinux support is enabled
May 14 18:05:20.758542 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 18:05:20.770015 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 18:05:20.780550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 18:05:20.781080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 18:05:20.781820 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 18:05:20.781958 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 14 18:05:20.782931 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 18:05:20.800600 kernel: mousedev: PS/2 mouse device common for all mice
May 14 18:05:20.803682 dbus-daemon[1505]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 14 18:05:20.814649 systemd[1]: Started update-engine.service - Update Engine.
May 14 18:05:20.815116 update_engine[1521]: I20250514 18:05:20.814761 1521 update_check_scheduler.cc:74] Next update check in 2m38s
May 14 18:05:20.821521 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 18:05:20.840185 systemd-networkd[1440]: eth0: Configuring with /run/systemd/network/10-ea:21:00:25:4d:db.network.
May 14 18:05:20.847646 systemd-networkd[1440]: eth1: Configuring with /run/systemd/network/10-22:7f:b6:fb:96:39.network.
May 14 18:05:20.848640 systemd-networkd[1440]: eth0: Link UP
May 14 18:05:20.852074 systemd-networkd[1440]: eth0: Gained carrier
May 14 18:05:20.857633 systemd-networkd[1440]: eth1: Link UP
May 14 18:05:20.858552 systemd-networkd[1440]: eth1: Gained carrier
May 14 18:05:20.878213 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
May 14 18:05:20.953011 kernel: EXT4-fs (vda9): resized filesystem to 15121403
May 14 18:05:21.005793 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 18:05:21.005793 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 8
May 14 18:05:21.005793 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
May 14 18:05:21.011364 extend-filesystems[1508]: Resized filesystem in /dev/vda9
May 14 18:05:21.011364 extend-filesystems[1508]: Found vdb
May 14 18:05:21.006603 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 18:05:21.007715 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 18:05:21.018194 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 14 18:05:21.019137 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 18:05:21.027107 bash[1570]: Updated "/home/core/.ssh/authorized_keys"
May 14 18:05:21.028689 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 18:05:21.030016 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 14 18:05:21.035010 kernel: ACPI: button: Power Button [PWRF]
May 14 18:05:21.037510 systemd[1]: Starting sshkeys.service...
May 14 18:05:21.081856 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 14 18:05:21.086956 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 14 18:05:21.196000 coreos-metadata[1583]: May 14 18:05:21.195 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 14 18:05:21.214940 coreos-metadata[1583]: May 14 18:05:21.214 INFO Fetch successful May 14 18:05:21.235153 locksmithd[1556]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:05:21.241908 unknown[1583]: wrote ssh authorized keys file for user: core May 14 18:05:21.298745 update-ssh-keys[1593]: Updated "/home/core/.ssh/authorized_keys" May 14 18:05:21.300077 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 18:05:21.309241 systemd[1]: Finished sshkeys.service. May 14 18:05:21.321211 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:05:21.332499 containerd[1545]: time="2025-05-14T18:05:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:05:21.332499 containerd[1545]: time="2025-05-14T18:05:21.331269683Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:05:21.376056 containerd[1545]: time="2025-05-14T18:05:21.375794908Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.366µs" May 14 18:05:21.376056 containerd[1545]: time="2025-05-14T18:05:21.375869703Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:05:21.376056 containerd[1545]: time="2025-05-14T18:05:21.375896067Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:05:21.376305 containerd[1545]: time="2025-05-14T18:05:21.376128343Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:05:21.376305 
containerd[1545]: time="2025-05-14T18:05:21.376146790Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:05:21.376305 containerd[1545]: time="2025-05-14T18:05:21.376173899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:05:21.376305 containerd[1545]: time="2025-05-14T18:05:21.376232222Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:05:21.376305 containerd[1545]: time="2025-05-14T18:05:21.376243554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:05:21.376780 containerd[1545]: time="2025-05-14T18:05:21.376523900Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:05:21.376780 containerd[1545]: time="2025-05-14T18:05:21.376548401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:05:21.376780 containerd[1545]: time="2025-05-14T18:05:21.376567943Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:05:21.376780 containerd[1545]: time="2025-05-14T18:05:21.376579038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:05:21.376780 containerd[1545]: time="2025-05-14T18:05:21.376681901Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:05:21.377108 containerd[1545]: time="2025-05-14T18:05:21.376957090Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:05:21.377108 containerd[1545]: time="2025-05-14T18:05:21.377032397Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:05:21.377108 containerd[1545]: time="2025-05-14T18:05:21.377047867Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:05:21.377223 containerd[1545]: time="2025-05-14T18:05:21.377117815Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:05:21.378441 containerd[1545]: time="2025-05-14T18:05:21.377418712Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:05:21.378441 containerd[1545]: time="2025-05-14T18:05:21.377535014Z" level=info msg="metadata content store policy set" policy=shared May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380562734Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380655207Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380681409Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380693710Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380734696Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: 
time="2025-05-14T18:05:21.380748798Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380763655Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380789287Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380804196Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380817710Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380836036Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.380853028Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.381045350Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:05:21.381262 containerd[1545]: time="2025-05-14T18:05:21.381095266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381119944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381131782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: 
time="2025-05-14T18:05:21.381143661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381154767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381166024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381188256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381203274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381219409Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381233941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381307293Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381321450Z" level=info msg="Start snapshots syncer" May 14 18:05:21.382179 containerd[1545]: time="2025-05-14T18:05:21.381373017Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:05:21.382480 containerd[1545]: time="2025-05-14T18:05:21.381656135Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:05:21.382480 containerd[1545]: time="2025-05-14T18:05:21.381747763Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.381846118Z" level=info 
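The `config="{...}"` blob above is containerd dumping its effective CRI runtime settings as escaped JSON; note `"SystemdCgroup":true` under the `runc` runtime options. In a containerd 2.x `config.toml` that setting would typically be expressed as below — a hand-written sketch for illustration only (section names follow containerd's v3 config schema; this is not the file actually read on this host):

```toml
# /etc/containerd/config.toml (illustrative sketch, config schema v3)
version = 3

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  runtime_type = 'io.containerd.runc.v2'

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  # Corresponds to "SystemdCgroup":true in the dumped config above.
  SystemdCgroup = true
```

With `SystemdCgroup = true`, runc delegates cgroup management to systemd, which should match the kubelet's cgroup driver on the same node.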
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382008279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382034591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382047612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382059476Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382071990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382082897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382092848Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382159112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382189104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382203531Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382226827Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382241588Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:05:21.382732 containerd[1545]: time="2025-05-14T18:05:21.382250849Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382259782Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382266776Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382274814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382284967Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382303017Z" level=info msg="runtime interface created" May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382308221Z" level=info msg="created NRI interface" May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382316670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382328945Z" level=info msg="Connect containerd service" May 14 18:05:21.383198 containerd[1545]: time="2025-05-14T18:05:21.382354195Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:05:21.392458 
containerd[1545]: time="2025-05-14T18:05:21.384328298Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.593902857Z" level=info msg="Start subscribing containerd event" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.593995585Z" level=info msg="Start recovering state" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594128357Z" level=info msg="Start event monitor" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594145920Z" level=info msg="Start cni network conf syncer for default" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594153839Z" level=info msg="Start streaming server" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594163358Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594170441Z" level=info msg="runtime interface starting up..." May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594176441Z" level=info msg="starting plugins..." May 14 18:05:21.596327 containerd[1545]: time="2025-05-14T18:05:21.594191831Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:05:21.597376 containerd[1545]: time="2025-05-14T18:05:21.597090213Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:05:21.597376 containerd[1545]: time="2025-05-14T18:05:21.597247454Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:05:21.597376 containerd[1545]: time="2025-05-14T18:05:21.597340775Z" level=info msg="containerd successfully booted in 0.271930s" May 14 18:05:21.597490 systemd[1]: Started containerd.service - containerd container runtime. 
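The `level=error` line above is containerd's CNI plugin finding an empty `/etc/cni/net.d`. This is routine on a node where no pod-network add-on has been installed yet; the "cni network conf syncer" started a few entries later will pick up a config file once one appears. For reference, a minimal bridge `.conflist` that would satisfy the loader looks roughly like this (the network name and subnet are illustrative, not taken from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

In practice this file is usually written by the chosen CNI add-on rather than by hand.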
May 14 18:05:21.634394 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 14 18:05:21.634486 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 14 18:05:21.644008 kernel: Console: switching to colour dummy device 80x25 May 14 18:05:21.645387 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 14 18:05:21.645513 kernel: [drm] features: -context_init May 14 18:05:21.727734 coreos-metadata[1504]: May 14 18:05:21.726 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 May 14 18:05:21.744489 coreos-metadata[1504]: May 14 18:05:21.743 INFO Fetch successful May 14 18:05:21.830417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:05:21.847266 sshd_keygen[1540]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:05:21.848076 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 18:05:21.848652 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:05:21.854038 kernel: [drm] number of scanouts: 1 May 14 18:05:21.963995 kernel: [drm] number of cap sets: 0 May 14 18:05:21.974349 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:05:21.978413 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:05:21.983586 systemd[1]: Started sshd@0-165.232.128.115:22-139.178.89.65:51094.service - OpenSSH per-connection server daemon (139.178.89.65:51094). May 14 18:05:22.001521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:05:22.005024 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 14 18:05:22.006173 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:05:22.007190 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:05:22.015469 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 14 18:05:22.032156 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 18:05:22.091279 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:05:22.098141 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:05:22.101749 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 18:05:22.102078 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:05:22.109330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:05:22.109571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:05:22.109957 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:05:22.114661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:05:22.117548 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 18:05:22.141374 systemd-logind[1517]: Watching system buttons on /dev/input/event2 (Power Button) May 14 18:05:22.146962 systemd-networkd[1440]: eth1: Gained IPv6LL May 14 18:05:22.151720 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. May 14 18:05:22.158270 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:05:22.159660 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:05:22.169893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:22.172424 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 14 18:05:22.234557 sshd[1641]: Accepted publickey for core from 139.178.89.65 port 51094 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:22.240789 kernel: EDAC MC: Ver: 3.0.0 May 14 18:05:22.247456 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:22.264085 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:05:22.265509 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:05:22.301326 systemd-logind[1517]: New session 1 of user core. May 14 18:05:22.306790 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:05:22.316165 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:05:22.319648 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:05:22.322462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:05:22.340556 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:05:22.347495 systemd-logind[1517]: New session c1 of user core. May 14 18:05:22.467133 systemd-networkd[1440]: eth0: Gained IPv6LL May 14 18:05:22.470110 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. May 14 18:05:22.558048 systemd[1674]: Queued start job for default target default.target. May 14 18:05:22.564760 systemd[1674]: Created slice app.slice - User Application Slice. May 14 18:05:22.564803 systemd[1674]: Reached target paths.target - Paths. May 14 18:05:22.564852 systemd[1674]: Reached target timers.target - Timers. May 14 18:05:22.567103 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:05:22.572274 tar[1529]: linux-amd64/LICENSE May 14 18:05:22.572274 tar[1529]: linux-amd64/README.md May 14 18:05:22.602184 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
May 14 18:05:22.602358 systemd[1674]: Reached target sockets.target - Sockets. May 14 18:05:22.602415 systemd[1674]: Reached target basic.target - Basic System. May 14 18:05:22.602454 systemd[1674]: Reached target default.target - Main User Target. May 14 18:05:22.602490 systemd[1674]: Startup finished in 242ms. May 14 18:05:22.602734 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:05:22.603364 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:05:22.613403 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:05:22.693430 systemd[1]: Started sshd@1-165.232.128.115:22-139.178.89.65:51108.service - OpenSSH per-connection server daemon (139.178.89.65:51108). May 14 18:05:22.785235 sshd[1689]: Accepted publickey for core from 139.178.89.65 port 51108 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:22.787441 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:22.794591 systemd-logind[1517]: New session 2 of user core. May 14 18:05:22.801289 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:05:22.870383 sshd[1691]: Connection closed by 139.178.89.65 port 51108 May 14 18:05:22.871159 sshd-session[1689]: pam_unix(sshd:session): session closed for user core May 14 18:05:22.886337 systemd[1]: sshd@1-165.232.128.115:22-139.178.89.65:51108.service: Deactivated successfully. May 14 18:05:22.889051 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:05:22.891084 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. May 14 18:05:22.900411 systemd[1]: Started sshd@2-165.232.128.115:22-139.178.89.65:51116.service - OpenSSH per-connection server daemon (139.178.89.65:51116). May 14 18:05:22.903908 systemd-logind[1517]: Removed session 2. 
May 14 18:05:22.974884 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 51116 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:22.977204 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:22.985077 systemd-logind[1517]: New session 3 of user core. May 14 18:05:22.991277 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:05:23.061745 sshd[1699]: Connection closed by 139.178.89.65 port 51116 May 14 18:05:23.063430 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 14 18:05:23.070700 systemd[1]: sshd@2-165.232.128.115:22-139.178.89.65:51116.service: Deactivated successfully. May 14 18:05:23.074295 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:05:23.076352 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. May 14 18:05:23.079258 systemd-logind[1517]: Removed session 3. May 14 18:05:23.504233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:23.508004 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:05:23.508539 systemd[1]: Startup finished in 4.181s (kernel) + 8.825s (initrd) + 7.052s (userspace) = 20.059s. 
May 14 18:05:23.518117 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:05:24.291551 kubelet[1708]: E0514 18:05:24.291483 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:05:24.295437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:05:24.295666 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:05:24.296147 systemd[1]: kubelet.service: Consumed 1.252s CPU time, 234.2M memory peak. May 14 18:05:33.075694 systemd[1]: Started sshd@3-165.232.128.115:22-139.178.89.65:55068.service - OpenSSH per-connection server daemon (139.178.89.65:55068). May 14 18:05:33.159028 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 55068 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:33.160400 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:33.166860 systemd-logind[1517]: New session 4 of user core. May 14 18:05:33.175512 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:05:33.240056 sshd[1723]: Connection closed by 139.178.89.65 port 55068 May 14 18:05:33.240700 sshd-session[1721]: pam_unix(sshd:session): session closed for user core May 14 18:05:33.254586 systemd[1]: sshd@3-165.232.128.115:22-139.178.89.65:55068.service: Deactivated successfully. May 14 18:05:33.257446 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:05:33.259108 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. 
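The kubelet exit above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the expected state of a node before `kubeadm init` or `kubeadm join` has run: kubeadm writes that file, and systemd keeps restarting the unit until it exists. For reference, a minimal hand-written `KubeletConfiguration` has this shape (values are illustrative; normally this file is generated, not authored):

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by kubeadm, not by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # should match SystemdCgroup=true on the containerd side
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```

A mismatch between `cgroupDriver` here and containerd's `SystemdCgroup` setting is a common cause of pods failing to start once the kubelet does come up.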
May 14 18:05:33.263391 systemd[1]: Started sshd@4-165.232.128.115:22-139.178.89.65:55076.service - OpenSSH per-connection server daemon (139.178.89.65:55076). May 14 18:05:33.264757 systemd-logind[1517]: Removed session 4. May 14 18:05:33.332882 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 55076 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:33.335076 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:33.341884 systemd-logind[1517]: New session 5 of user core. May 14 18:05:33.352686 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 18:05:33.415028 sshd[1731]: Connection closed by 139.178.89.65 port 55076 May 14 18:05:33.413663 sshd-session[1729]: pam_unix(sshd:session): session closed for user core May 14 18:05:33.427617 systemd[1]: sshd@4-165.232.128.115:22-139.178.89.65:55076.service: Deactivated successfully. May 14 18:05:33.430014 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:05:33.430863 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. May 14 18:05:33.435487 systemd[1]: Started sshd@5-165.232.128.115:22-139.178.89.65:55086.service - OpenSSH per-connection server daemon (139.178.89.65:55086). May 14 18:05:33.436602 systemd-logind[1517]: Removed session 5. May 14 18:05:33.506188 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 55086 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:33.507708 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:33.514079 systemd-logind[1517]: New session 6 of user core. May 14 18:05:33.523328 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 18:05:33.586072 sshd[1739]: Connection closed by 139.178.89.65 port 55086 May 14 18:05:33.586777 sshd-session[1737]: pam_unix(sshd:session): session closed for user core May 14 18:05:33.603314 systemd[1]: sshd@5-165.232.128.115:22-139.178.89.65:55086.service: Deactivated successfully. May 14 18:05:33.605414 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:05:33.607060 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. May 14 18:05:33.609702 systemd[1]: Started sshd@6-165.232.128.115:22-139.178.89.65:55090.service - OpenSSH per-connection server daemon (139.178.89.65:55090). May 14 18:05:33.612625 systemd-logind[1517]: Removed session 6. May 14 18:05:33.678275 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 55090 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:33.680190 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:33.686611 systemd-logind[1517]: New session 7 of user core. May 14 18:05:33.693300 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:05:33.764711 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:05:33.765590 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:05:33.781237 sudo[1748]: pam_unix(sudo:session): session closed for user root May 14 18:05:33.786964 sshd[1747]: Connection closed by 139.178.89.65 port 55090 May 14 18:05:33.786076 sshd-session[1745]: pam_unix(sshd:session): session closed for user core May 14 18:05:33.797796 systemd[1]: sshd@6-165.232.128.115:22-139.178.89.65:55090.service: Deactivated successfully. May 14 18:05:33.800322 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:05:33.802840 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. 
May 14 18:05:33.806149 systemd[1]: Started sshd@7-165.232.128.115:22-139.178.89.65:55104.service - OpenSSH per-connection server daemon (139.178.89.65:55104). May 14 18:05:33.808791 systemd-logind[1517]: Removed session 7. May 14 18:05:33.871715 sshd[1754]: Accepted publickey for core from 139.178.89.65 port 55104 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:33.873511 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:33.880247 systemd-logind[1517]: New session 8 of user core. May 14 18:05:33.890358 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:05:33.957623 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:05:33.958119 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:05:33.964943 sudo[1758]: pam_unix(sudo:session): session closed for user root May 14 18:05:33.973330 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:05:33.973714 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:05:33.987674 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:05:34.042951 augenrules[1780]: No rules May 14 18:05:34.044385 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:05:34.044776 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:05:34.046489 sudo[1757]: pam_unix(sudo:session): session closed for user root May 14 18:05:34.051927 sshd[1756]: Connection closed by 139.178.89.65 port 55104 May 14 18:05:34.052566 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 14 18:05:34.062742 systemd[1]: sshd@7-165.232.128.115:22-139.178.89.65:55104.service: Deactivated successfully. 
May 14 18:05:34.065111 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:05:34.067318 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. May 14 18:05:34.070724 systemd[1]: Started sshd@8-165.232.128.115:22-139.178.89.65:55112.service - OpenSSH per-connection server daemon (139.178.89.65:55112). May 14 18:05:34.073248 systemd-logind[1517]: Removed session 8. May 14 18:05:34.140050 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 55112 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:05:34.142423 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:05:34.149807 systemd-logind[1517]: New session 9 of user core. May 14 18:05:34.155311 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:05:34.218944 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:05:34.219492 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:05:34.422460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:05:34.427242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:34.634215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:05:34.642485 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:05:34.732214 kubelet[1819]: E0514 18:05:34.731653 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:05:34.738878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:05:34.739390 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:05:34.740489 systemd[1]: kubelet.service: Consumed 239ms CPU time, 95.7M memory peak. May 14 18:05:34.839253 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:05:34.853814 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:05:35.318123 dockerd[1827]: time="2025-05-14T18:05:35.318021541Z" level=info msg="Starting up" May 14 18:05:35.319473 dockerd[1827]: time="2025-05-14T18:05:35.319401593Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:05:35.398778 dockerd[1827]: time="2025-05-14T18:05:35.398382388Z" level=info msg="Loading containers: start." May 14 18:05:35.411333 kernel: Initializing XFRM netlink socket May 14 18:05:35.787348 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. May 14 18:05:35.836365 systemd-timesyncd[1416]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org). May 14 18:05:35.836690 systemd-timesyncd[1416]: Initial clock synchronization to Wed 2025-05-14 18:05:36.034557 UTC. 
May 14 18:05:35.865055 systemd-networkd[1440]: docker0: Link UP May 14 18:05:35.869625 dockerd[1827]: time="2025-05-14T18:05:35.868564417Z" level=info msg="Loading containers: done." May 14 18:05:35.899428 dockerd[1827]: time="2025-05-14T18:05:35.899346662Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:05:35.899827 dockerd[1827]: time="2025-05-14T18:05:35.899786976Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:05:35.900174 dockerd[1827]: time="2025-05-14T18:05:35.900093605Z" level=info msg="Initializing buildkit" May 14 18:05:35.911812 systemd[1]: Started sshd@9-165.232.128.115:22-45.249.8.86:38276.service - OpenSSH per-connection server daemon (45.249.8.86:38276). May 14 18:05:35.949213 dockerd[1827]: time="2025-05-14T18:05:35.948874280Z" level=info msg="Completed buildkit initialization" May 14 18:05:35.961607 dockerd[1827]: time="2025-05-14T18:05:35.961478926Z" level=info msg="Daemon has completed initialization" May 14 18:05:35.963823 dockerd[1827]: time="2025-05-14T18:05:35.961875511Z" level=info msg="API listen on /run/docker.sock" May 14 18:05:35.963600 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:05:36.557131 sshd[2000]: Connection closed by 45.249.8.86 port 38276 [preauth] May 14 18:05:36.558264 systemd[1]: sshd@9-165.232.128.115:22-45.249.8.86:38276.service: Deactivated successfully. May 14 18:05:37.065786 containerd[1545]: time="2025-05-14T18:05:37.065714582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 18:05:37.704806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627048241.mount: Deactivated successfully. 
May 14 18:05:39.138033 containerd[1545]: time="2025-05-14T18:05:39.136491581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:39.138033 containerd[1545]: time="2025-05-14T18:05:39.137896761Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 18:05:39.138033 containerd[1545]: time="2025-05-14T18:05:39.137943927Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:39.141124 containerd[1545]: time="2025-05-14T18:05:39.141072442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:39.142743 containerd[1545]: time="2025-05-14T18:05:39.142682851Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.076911574s" May 14 18:05:39.142925 containerd[1545]: time="2025-05-14T18:05:39.142907324Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 18:05:39.145439 containerd[1545]: time="2025-05-14T18:05:39.145394429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 18:05:40.892036 containerd[1545]: time="2025-05-14T18:05:40.891024432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:40.892596 containerd[1545]: time="2025-05-14T18:05:40.892512085Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 18:05:40.893423 containerd[1545]: time="2025-05-14T18:05:40.893370402Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:40.898051 containerd[1545]: time="2025-05-14T18:05:40.897931175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:40.900044 containerd[1545]: time="2025-05-14T18:05:40.899960171Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.754312335s" May 14 18:05:40.900440 containerd[1545]: time="2025-05-14T18:05:40.900254584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 18:05:40.901683 containerd[1545]: time="2025-05-14T18:05:40.901200930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 18:05:42.197008 containerd[1545]: time="2025-05-14T18:05:42.196937543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:42.199339 containerd[1545]: time="2025-05-14T18:05:42.199281632Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 18:05:42.201037 containerd[1545]: time="2025-05-14T18:05:42.200571338Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:42.206942 containerd[1545]: time="2025-05-14T18:05:42.206877133Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.305628125s" May 14 18:05:42.207168 containerd[1545]: time="2025-05-14T18:05:42.207145501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 18:05:42.207794 containerd[1545]: time="2025-05-14T18:05:42.207727100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:42.208010 containerd[1545]: time="2025-05-14T18:05:42.207971808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 18:05:42.209767 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 14 18:05:43.368189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194513289.mount: Deactivated successfully. 
May 14 18:05:43.915990 containerd[1545]: time="2025-05-14T18:05:43.915925923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:43.917715 containerd[1545]: time="2025-05-14T18:05:43.917657181Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 14 18:05:43.919011 containerd[1545]: time="2025-05-14T18:05:43.918638704Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:43.920944 containerd[1545]: time="2025-05-14T18:05:43.920895856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:43.923363 containerd[1545]: time="2025-05-14T18:05:43.922881316Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.714775696s" May 14 18:05:43.923363 containerd[1545]: time="2025-05-14T18:05:43.922945786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 14 18:05:43.925005 containerd[1545]: time="2025-05-14T18:05:43.924735584Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:05:44.457367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798475466.mount: Deactivated successfully. 
May 14 18:05:44.922695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 18:05:44.925372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:45.083279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:45.095314 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:05:45.179741 kubelet[2165]: E0514 18:05:45.179549 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:05:45.183647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:05:45.183796 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:05:45.185518 systemd[1]: kubelet.service: Consumed 191ms CPU time, 95.3M memory peak. May 14 18:05:45.315246 systemd-resolved[1401]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
May 14 18:05:45.442730 containerd[1545]: time="2025-05-14T18:05:45.442544225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:45.444507 containerd[1545]: time="2025-05-14T18:05:45.444003288Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 18:05:45.445334 containerd[1545]: time="2025-05-14T18:05:45.445277216Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:45.449107 containerd[1545]: time="2025-05-14T18:05:45.449049826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:45.451387 containerd[1545]: time="2025-05-14T18:05:45.451330043Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.526545052s" May 14 18:05:45.452052 containerd[1545]: time="2025-05-14T18:05:45.451580879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:05:45.452464 containerd[1545]: time="2025-05-14T18:05:45.452433729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 18:05:45.890860 systemd[1]: Started sshd@10-165.232.128.115:22-185.233.247.245:60688.service - OpenSSH per-connection server daemon (185.233.247.245:60688). 
May 14 18:05:45.934074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070585268.mount: Deactivated successfully. May 14 18:05:45.940002 containerd[1545]: time="2025-05-14T18:05:45.939874562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:05:45.941511 containerd[1545]: time="2025-05-14T18:05:45.941431075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 18:05:45.941990 containerd[1545]: time="2025-05-14T18:05:45.941942802Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:05:45.944209 containerd[1545]: time="2025-05-14T18:05:45.944132911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:05:45.945642 containerd[1545]: time="2025-05-14T18:05:45.945259542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.697498ms" May 14 18:05:45.945642 containerd[1545]: time="2025-05-14T18:05:45.945324030Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 18:05:45.946108 containerd[1545]: time="2025-05-14T18:05:45.946084585Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.15-0\"" May 14 18:05:46.296227 sshd[2173]: Connection closed by 185.233.247.245 port 60688 [preauth] May 14 18:05:46.297418 systemd[1]: sshd@10-165.232.128.115:22-185.233.247.245:60688.service: Deactivated successfully. May 14 18:05:46.452169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075040684.mount: Deactivated successfully. May 14 18:05:48.342233 containerd[1545]: time="2025-05-14T18:05:48.341774644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:48.343408 containerd[1545]: time="2025-05-14T18:05:48.343324121Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 14 18:05:48.343762 containerd[1545]: time="2025-05-14T18:05:48.343728330Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:48.346738 containerd[1545]: time="2025-05-14T18:05:48.346682570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:05:48.349031 containerd[1545]: time="2025-05-14T18:05:48.348584537Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.402463108s" May 14 18:05:48.349031 containerd[1545]: time="2025-05-14T18:05:48.348646346Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 18:05:51.287195 systemd[1]: 
Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:51.288250 systemd[1]: kubelet.service: Consumed 191ms CPU time, 95.3M memory peak. May 14 18:05:51.291476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:51.343186 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-9.scope)... May 14 18:05:51.343424 systemd[1]: Reloading... May 14 18:05:51.517017 zram_generator::config[2306]: No configuration found. May 14 18:05:51.679946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:05:51.868650 systemd[1]: Reloading finished in 524 ms. May 14 18:05:51.946263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:51.951269 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:05:51.951786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:51.951863 systemd[1]: kubelet.service: Consumed 142ms CPU time, 83.5M memory peak. May 14 18:05:51.954734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:52.131207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:52.144652 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:05:52.212756 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:05:52.213207 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 14 18:05:52.213269 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:05:52.214782 kubelet[2359]: I0514 18:05:52.214698 2359 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:05:52.572154 kubelet[2359]: I0514 18:05:52.571967 2359 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:05:52.572376 kubelet[2359]: I0514 18:05:52.572348 2359 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:05:52.572902 kubelet[2359]: I0514 18:05:52.572876 2359 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:05:52.600041 kubelet[2359]: I0514 18:05:52.599961 2359 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:05:52.602629 kubelet[2359]: E0514 18:05:52.602545 2359 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://165.232.128.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:52.618780 kubelet[2359]: I0514 18:05:52.618727 2359 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:05:52.625555 kubelet[2359]: I0514 18:05:52.625191 2359 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:05:52.626769 kubelet[2359]: I0514 18:05:52.626717 2359 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:05:52.627343 kubelet[2359]: I0514 18:05:52.627301 2359 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:05:52.627834 kubelet[2359]: I0514 18:05:52.627461 2359 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-4c74b6421c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:05:52.628115 kubelet[2359]: I0514 18:05:52.628093 2359 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:05:52.628210 kubelet[2359]: I0514 18:05:52.628200 2359 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:05:52.628444 kubelet[2359]: I0514 18:05:52.628427 2359 state_mem.go:36] "Initialized new in-memory state store" May 14 18:05:52.630993 kubelet[2359]: I0514 18:05:52.630939 2359 kubelet.go:408] "Attempting to sync node with API server" May 14 18:05:52.631166 kubelet[2359]: I0514 18:05:52.631151 2359 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:05:52.631320 kubelet[2359]: I0514 18:05:52.631308 2359 kubelet.go:314] "Adding apiserver pod source" May 14 18:05:52.631400 kubelet[2359]: I0514 18:05:52.631389 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:05:52.638028 kubelet[2359]: I0514 18:05:52.637597 2359 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:05:52.642729 kubelet[2359]: I0514 18:05:52.642464 2359 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:05:52.643350 kubelet[2359]: W0514 18:05:52.643325 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 18:05:52.644395 kubelet[2359]: I0514 18:05:52.644362 2359 server.go:1269] "Started kubelet" May 14 18:05:52.644763 kubelet[2359]: W0514 18:05:52.644707 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.128.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-4c74b6421c&limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:52.644903 kubelet[2359]: E0514 18:05:52.644879 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.128.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-4c74b6421c&limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:52.650247 kubelet[2359]: W0514 18:05:52.650004 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.128.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:52.650247 kubelet[2359]: E0514 18:05:52.650100 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.128.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:52.650482 kubelet[2359]: I0514 18:05:52.650324 2359 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:05:52.652478 kubelet[2359]: I0514 18:05:52.651896 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:05:52.652478 kubelet[2359]: I0514 18:05:52.652420 2359 server.go:236] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:05:52.655677 kubelet[2359]: I0514 18:05:52.654831 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:05:52.662091 kubelet[2359]: I0514 18:05:52.662050 2359 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:05:52.662411 kubelet[2359]: E0514 18:05:52.653108 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.128.115:6443/api/v1/namespaces/default/events\": dial tcp 165.232.128.115:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334.0.0-a-4c74b6421c.183f76efc69aa00d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334.0.0-a-4c74b6421c,UID:ci-4334.0.0-a-4c74b6421c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334.0.0-a-4c74b6421c,},FirstTimestamp:2025-05-14 18:05:52.644325389 +0000 UTC m=+0.493223217,LastTimestamp:2025-05-14 18:05:52.644325389 +0000 UTC m=+0.493223217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334.0.0-a-4c74b6421c,}" May 14 18:05:52.662554 kubelet[2359]: I0514 18:05:52.659629 2359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:05:52.663304 kubelet[2359]: I0514 18:05:52.663283 2359 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:05:52.663417 kubelet[2359]: I0514 18:05:52.658066 2359 server.go:460] "Adding debug handlers to kubelet server" May 14 18:05:52.664839 kubelet[2359]: I0514 18:05:52.664802 2359 reconciler.go:26] "Reconciler: start to sync state" May 14 18:05:52.666955 kubelet[2359]: W0514 18:05:52.666874 2359 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.128.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:52.667264 kubelet[2359]: E0514 18:05:52.667231 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.128.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:52.667907 kubelet[2359]: E0514 18:05:52.667875 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-4c74b6421c\" not found" May 14 18:05:52.668668 kubelet[2359]: E0514 18:05:52.668622 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.128.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-4c74b6421c?timeout=10s\": dial tcp 165.232.128.115:6443: connect: connection refused" interval="200ms" May 14 18:05:52.674960 kubelet[2359]: I0514 18:05:52.674902 2359 factory.go:221] Registration of the systemd container factory successfully May 14 18:05:52.675180 kubelet[2359]: I0514 18:05:52.675140 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:05:52.676726 kubelet[2359]: E0514 18:05:52.676683 2359 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:05:52.677855 kubelet[2359]: I0514 18:05:52.677721 2359 factory.go:221] Registration of the containerd container factory successfully May 14 18:05:52.705267 kubelet[2359]: I0514 18:05:52.705215 2359 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:05:52.705442 kubelet[2359]: I0514 18:05:52.705278 2359 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:05:52.705442 kubelet[2359]: I0514 18:05:52.705310 2359 state_mem.go:36] "Initialized new in-memory state store" May 14 18:05:52.708379 kubelet[2359]: I0514 18:05:52.708339 2359 policy_none.go:49] "None policy: Start" May 14 18:05:52.711329 kubelet[2359]: I0514 18:05:52.711274 2359 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:05:52.711329 kubelet[2359]: I0514 18:05:52.711320 2359 state_mem.go:35] "Initializing new in-memory state store" May 14 18:05:52.721968 kubelet[2359]: I0514 18:05:52.721854 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:05:52.727653 kubelet[2359]: I0514 18:05:52.727585 2359 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:05:52.727653 kubelet[2359]: I0514 18:05:52.727646 2359 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:05:52.727852 kubelet[2359]: I0514 18:05:52.727679 2359 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:05:52.728918 kubelet[2359]: E0514 18:05:52.728804 2359 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:05:52.733150 kubelet[2359]: W0514 18:05:52.733035 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.128.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:52.733315 kubelet[2359]: E0514 18:05:52.733170 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.128.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:52.739860 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:05:52.765633 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:05:52.768273 kubelet[2359]: E0514 18:05:52.768196 2359 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-4c74b6421c\" not found" May 14 18:05:52.773761 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 18:05:52.784668 kubelet[2359]: I0514 18:05:52.784600 2359 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:05:52.784971 kubelet[2359]: I0514 18:05:52.784941 2359 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:05:52.785057 kubelet[2359]: I0514 18:05:52.785007 2359 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:05:52.785713 kubelet[2359]: I0514 18:05:52.785678 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:05:52.789883 kubelet[2359]: E0514 18:05:52.789831 2359 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334.0.0-a-4c74b6421c\" not found" May 14 18:05:52.843613 systemd[1]: Created slice kubepods-burstable-podfefa317f505e9b09fd19397a3558ca88.slice - libcontainer container kubepods-burstable-podfefa317f505e9b09fd19397a3558ca88.slice. May 14 18:05:52.857179 systemd[1]: Created slice kubepods-burstable-pod4ab069fef5062c30a53eb78732fcf0bb.slice - libcontainer container kubepods-burstable-pod4ab069fef5062c30a53eb78732fcf0bb.slice. 
May 14 18:05:52.866429 kubelet[2359]: I0514 18:05:52.866288 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866647 kubelet[2359]: I0514 18:05:52.866457 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fefa317f505e9b09fd19397a3558ca88-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-4c74b6421c\" (UID: \"fefa317f505e9b09fd19397a3558ca88\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866647 kubelet[2359]: I0514 18:05:52.866491 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccead9d3e1b879f5ea534788f6c0ff68-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" (UID: \"ccead9d3e1b879f5ea534788f6c0ff68\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866647 kubelet[2359]: I0514 18:05:52.866548 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccead9d3e1b879f5ea534788f6c0ff68-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" (UID: \"ccead9d3e1b879f5ea534788f6c0ff68\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866647 kubelet[2359]: I0514 18:05:52.866576 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866647 kubelet[2359]: I0514 18:05:52.866628 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866879 kubelet[2359]: I0514 18:05:52.866651 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccead9d3e1b879f5ea534788f6c0ff68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" (UID: \"ccead9d3e1b879f5ea534788f6c0ff68\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866879 kubelet[2359]: I0514 18:05:52.866707 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.866879 kubelet[2359]: I0514 18:05:52.866729 2359 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.869907 kubelet[2359]: E0514 18:05:52.869810 2359 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://165.232.128.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-4c74b6421c?timeout=10s\": dial tcp 165.232.128.115:6443: connect: connection refused" interval="400ms" May 14 18:05:52.877624 systemd[1]: Created slice kubepods-burstable-podccead9d3e1b879f5ea534788f6c0ff68.slice - libcontainer container kubepods-burstable-podccead9d3e1b879f5ea534788f6c0ff68.slice. May 14 18:05:52.886767 kubelet[2359]: I0514 18:05:52.886679 2359 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:52.887228 kubelet[2359]: E0514 18:05:52.887193 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.128.115:6443/api/v1/nodes\": dial tcp 165.232.128.115:6443: connect: connection refused" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:53.088718 kubelet[2359]: I0514 18:05:53.088642 2359 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:53.089508 kubelet[2359]: E0514 18:05:53.089452 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.128.115:6443/api/v1/nodes\": dial tcp 165.232.128.115:6443: connect: connection refused" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:53.155079 kubelet[2359]: E0514 18:05:53.154895 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:53.156532 containerd[1545]: time="2025-05-14T18:05:53.156238574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-4c74b6421c,Uid:fefa317f505e9b09fd19397a3558ca88,Namespace:kube-system,Attempt:0,}" May 14 18:05:53.174950 kubelet[2359]: E0514 18:05:53.174868 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:53.176260 containerd[1545]: time="2025-05-14T18:05:53.175898943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-4c74b6421c,Uid:4ab069fef5062c30a53eb78732fcf0bb,Namespace:kube-system,Attempt:0,}" May 14 18:05:53.182197 kubelet[2359]: E0514 18:05:53.182151 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:53.199965 containerd[1545]: time="2025-05-14T18:05:53.199751320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-4c74b6421c,Uid:ccead9d3e1b879f5ea534788f6c0ff68,Namespace:kube-system,Attempt:0,}" May 14 18:05:53.270680 kubelet[2359]: E0514 18:05:53.270610 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.128.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-4c74b6421c?timeout=10s\": dial tcp 165.232.128.115:6443: connect: connection refused" interval="800ms" May 14 18:05:53.292526 containerd[1545]: time="2025-05-14T18:05:53.292450708Z" level=info msg="connecting to shim bbd5eba0b50a6d4635eac5f4c0bddb8accdcb573601dde80df13e011651165f6" address="unix:///run/containerd/s/425bce99ffd3cddcc6734f78dfdf7ef1c36748c0985febb7df2700fec89909b4" namespace=k8s.io protocol=ttrpc version=3 May 14 18:05:53.307651 containerd[1545]: time="2025-05-14T18:05:53.307583384Z" level=info msg="connecting to shim 8175ad21d5a4fb4fd485f7ed8d890d686228e406dcf16bd4194f70b7710bc217" address="unix:///run/containerd/s/28f14acaf8dbd9309ac40973421165b485dac307e1453fd69d1329fdfc8938d4" namespace=k8s.io protocol=ttrpc version=3 May 14 18:05:53.309072 containerd[1545]: time="2025-05-14T18:05:53.308669715Z" level=info msg="connecting to shim 6cfae898025c227abe54eb86961aac805294323e6132b83d3936d01712b8bf11" 
address="unix:///run/containerd/s/24dc4e6757bed4199619c601781d5e7ea31491a0905f4e7b7aa8a820c0f5d737" namespace=k8s.io protocol=ttrpc version=3 May 14 18:05:53.468490 systemd[1]: Started cri-containerd-8175ad21d5a4fb4fd485f7ed8d890d686228e406dcf16bd4194f70b7710bc217.scope - libcontainer container 8175ad21d5a4fb4fd485f7ed8d890d686228e406dcf16bd4194f70b7710bc217. May 14 18:05:53.471583 systemd[1]: Started cri-containerd-bbd5eba0b50a6d4635eac5f4c0bddb8accdcb573601dde80df13e011651165f6.scope - libcontainer container bbd5eba0b50a6d4635eac5f4c0bddb8accdcb573601dde80df13e011651165f6. May 14 18:05:53.482020 systemd[1]: Started cri-containerd-6cfae898025c227abe54eb86961aac805294323e6132b83d3936d01712b8bf11.scope - libcontainer container 6cfae898025c227abe54eb86961aac805294323e6132b83d3936d01712b8bf11. May 14 18:05:53.494303 kubelet[2359]: I0514 18:05:53.493808 2359 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:53.495389 kubelet[2359]: E0514 18:05:53.495194 2359 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.128.115:6443/api/v1/nodes\": dial tcp 165.232.128.115:6443: connect: connection refused" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:53.575064 kubelet[2359]: W0514 18:05:53.575012 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.128.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:53.575271 kubelet[2359]: E0514 18:05:53.575077 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.128.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:53.620169 containerd[1545]: 
time="2025-05-14T18:05:53.618198768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334.0.0-a-4c74b6421c,Uid:ccead9d3e1b879f5ea534788f6c0ff68,Namespace:kube-system,Attempt:0,} returns sandbox id \"8175ad21d5a4fb4fd485f7ed8d890d686228e406dcf16bd4194f70b7710bc217\"" May 14 18:05:53.625682 kubelet[2359]: E0514 18:05:53.624914 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:53.641327 containerd[1545]: time="2025-05-14T18:05:53.641221001Z" level=info msg="CreateContainer within sandbox \"8175ad21d5a4fb4fd485f7ed8d890d686228e406dcf16bd4194f70b7710bc217\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:05:53.656197 containerd[1545]: time="2025-05-14T18:05:53.655671215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334.0.0-a-4c74b6421c,Uid:4ab069fef5062c30a53eb78732fcf0bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cfae898025c227abe54eb86961aac805294323e6132b83d3936d01712b8bf11\"" May 14 18:05:53.658886 kubelet[2359]: E0514 18:05:53.657603 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:53.670818 containerd[1545]: time="2025-05-14T18:05:53.670760387Z" level=info msg="CreateContainer within sandbox \"6cfae898025c227abe54eb86961aac805294323e6132b83d3936d01712b8bf11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:05:53.672222 containerd[1545]: time="2025-05-14T18:05:53.672162303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334.0.0-a-4c74b6421c,Uid:fefa317f505e9b09fd19397a3558ca88,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"bbd5eba0b50a6d4635eac5f4c0bddb8accdcb573601dde80df13e011651165f6\"" May 14 18:05:53.674314 kubelet[2359]: E0514 18:05:53.674279 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:53.676416 containerd[1545]: time="2025-05-14T18:05:53.676355167Z" level=info msg="Container ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821: CDI devices from CRI Config.CDIDevices: []" May 14 18:05:53.680906 containerd[1545]: time="2025-05-14T18:05:53.680715702Z" level=info msg="CreateContainer within sandbox \"bbd5eba0b50a6d4635eac5f4c0bddb8accdcb573601dde80df13e011651165f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:05:53.689711 containerd[1545]: time="2025-05-14T18:05:53.689633888Z" level=info msg="CreateContainer within sandbox \"8175ad21d5a4fb4fd485f7ed8d890d686228e406dcf16bd4194f70b7710bc217\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821\"" May 14 18:05:53.690850 containerd[1545]: time="2025-05-14T18:05:53.690812996Z" level=info msg="StartContainer for \"ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821\"" May 14 18:05:53.700917 containerd[1545]: time="2025-05-14T18:05:53.700861866Z" level=info msg="Container 9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024: CDI devices from CRI Config.CDIDevices: []" May 14 18:05:53.704221 containerd[1545]: time="2025-05-14T18:05:53.704147481Z" level=info msg="connecting to shim ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821" address="unix:///run/containerd/s/28f14acaf8dbd9309ac40973421165b485dac307e1453fd69d1329fdfc8938d4" protocol=ttrpc version=3 May 14 18:05:53.708027 containerd[1545]: time="2025-05-14T18:05:53.707907888Z" level=info msg="Container 
f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f: CDI devices from CRI Config.CDIDevices: []" May 14 18:05:53.712619 containerd[1545]: time="2025-05-14T18:05:53.712567850Z" level=info msg="CreateContainer within sandbox \"6cfae898025c227abe54eb86961aac805294323e6132b83d3936d01712b8bf11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024\"" May 14 18:05:53.713602 containerd[1545]: time="2025-05-14T18:05:53.713561330Z" level=info msg="StartContainer for \"9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024\"" May 14 18:05:53.718452 containerd[1545]: time="2025-05-14T18:05:53.718297808Z" level=info msg="CreateContainer within sandbox \"bbd5eba0b50a6d4635eac5f4c0bddb8accdcb573601dde80df13e011651165f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f\"" May 14 18:05:53.719436 containerd[1545]: time="2025-05-14T18:05:53.719320003Z" level=info msg="StartContainer for \"f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f\"" May 14 18:05:53.722012 containerd[1545]: time="2025-05-14T18:05:53.719667548Z" level=info msg="connecting to shim 9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024" address="unix:///run/containerd/s/24dc4e6757bed4199619c601781d5e7ea31491a0905f4e7b7aa8a820c0f5d737" protocol=ttrpc version=3 May 14 18:05:53.723070 containerd[1545]: time="2025-05-14T18:05:53.722966400Z" level=info msg="connecting to shim f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f" address="unix:///run/containerd/s/425bce99ffd3cddcc6734f78dfdf7ef1c36748c0985febb7df2700fec89909b4" protocol=ttrpc version=3 May 14 18:05:53.759287 systemd[1]: Started cri-containerd-ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821.scope - libcontainer container ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821. 
May 14 18:05:53.786575 systemd[1]: Started cri-containerd-f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f.scope - libcontainer container f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f. May 14 18:05:53.797522 systemd[1]: Started cri-containerd-9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024.scope - libcontainer container 9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024. May 14 18:05:53.868204 kubelet[2359]: W0514 18:05:53.868006 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.128.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:53.868204 kubelet[2359]: E0514 18:05:53.868116 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.128.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:53.914646 containerd[1545]: time="2025-05-14T18:05:53.914577416Z" level=info msg="StartContainer for \"f0b9aba12cbf332b5becf8efb229421d43678b1474d02588292eb02dcf347f5f\" returns successfully" May 14 18:05:53.936762 containerd[1545]: time="2025-05-14T18:05:53.936706278Z" level=info msg="StartContainer for \"ec13c1bd7095f769d81ce9dac7d17fbce0c8c34434c0af77548800fa0d938821\" returns successfully" May 14 18:05:53.964845 containerd[1545]: time="2025-05-14T18:05:53.964768742Z" level=info msg="StartContainer for \"9753618a7a73aa7a8e7cf1272df281c28d294de9366508ae2530008ec7c3d024\" returns successfully" May 14 18:05:54.075114 kubelet[2359]: E0514 18:05:54.073220 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://165.232.128.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334.0.0-a-4c74b6421c?timeout=10s\": dial tcp 165.232.128.115:6443: connect: connection refused" interval="1.6s" May 14 18:05:54.079040 kubelet[2359]: W0514 18:05:54.077707 2359 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.128.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-4c74b6421c&limit=500&resourceVersion=0": dial tcp 165.232.128.115:6443: connect: connection refused May 14 18:05:54.079378 kubelet[2359]: E0514 18:05:54.079310 2359 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.128.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334.0.0-a-4c74b6421c&limit=500&resourceVersion=0\": dial tcp 165.232.128.115:6443: connect: connection refused" logger="UnhandledError" May 14 18:05:54.298023 kubelet[2359]: I0514 18:05:54.297826 2359 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:54.783790 kubelet[2359]: E0514 18:05:54.783612 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:54.793371 kubelet[2359]: E0514 18:05:54.793159 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:54.798958 kubelet[2359]: E0514 18:05:54.798906 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:55.806008 kubelet[2359]: E0514 18:05:55.804429 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:55.810011 kubelet[2359]: E0514 18:05:55.807499 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:55.816123 kubelet[2359]: E0514 18:05:55.809893 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:56.324384 kubelet[2359]: E0514 18:05:56.324332 2359 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4334.0.0-a-4c74b6421c\" not found" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:56.405014 kubelet[2359]: I0514 18:05:56.403845 2359 kubelet_node_status.go:75] "Successfully registered node" node="ci-4334.0.0-a-4c74b6421c" May 14 18:05:56.650646 kubelet[2359]: I0514 18:05:56.650443 2359 apiserver.go:52] "Watching apiserver" May 14 18:05:56.663857 kubelet[2359]: I0514 18:05:56.663759 2359 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:05:56.810674 kubelet[2359]: E0514 18:05:56.810393 2359 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c" May 14 18:05:56.811328 kubelet[2359]: E0514 18:05:56.811212 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:58.456642 kubelet[2359]: W0514 18:05:58.456236 2359 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] May 14 18:05:58.456642 kubelet[2359]: E0514 18:05:58.456553 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:58.639053 systemd[1]: Reload requested from client PID 2630 ('systemctl') (unit session-9.scope)... May 14 18:05:58.639947 systemd[1]: Reloading... May 14 18:05:58.807778 kubelet[2359]: E0514 18:05:58.806124 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:05:58.811012 zram_generator::config[2669]: No configuration found. May 14 18:05:59.042190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:05:59.281400 systemd[1]: Reloading finished in 640 ms. May 14 18:05:59.322677 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:59.340880 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:05:59.341225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:59.341301 systemd[1]: kubelet.service: Consumed 1.065s CPU time, 111.2M memory peak. May 14 18:05:59.348096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:05:59.538809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:05:59.551731 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:05:59.615891 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:05:59.615891 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:05:59.615891 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:05:59.616668 kubelet[2724]: I0514 18:05:59.615896 2724 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:05:59.633459 kubelet[2724]: I0514 18:05:59.633402 2724 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:05:59.633459 kubelet[2724]: I0514 18:05:59.633441 2724 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:05:59.634126 kubelet[2724]: I0514 18:05:59.633773 2724 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:05:59.642413 kubelet[2724]: I0514 18:05:59.642346 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:05:59.648714 kubelet[2724]: I0514 18:05:59.648178 2724 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:05:59.657890 kubelet[2724]: I0514 18:05:59.657820 2724 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:05:59.665133 kubelet[2724]: I0514 18:05:59.665078 2724 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:05:59.665335 kubelet[2724]: I0514 18:05:59.665280 2724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:05:59.665611 kubelet[2724]: I0514 18:05:59.665545 2724 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:05:59.665892 kubelet[2724]: I0514 18:05:59.665593 2724 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334.0.0-a-4c74b6421c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:05:59.666085 kubelet[2724]: I0514 18:05:59.665896 2724 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:05:59.666085 kubelet[2724]: I0514 18:05:59.665914 2724 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:05:59.666085 kubelet[2724]: I0514 18:05:59.666012 2724 state_mem.go:36] "Initialized new in-memory state store" May 14 18:05:59.667145 kubelet[2724]: I0514 18:05:59.666226 2724 kubelet.go:408] "Attempting to sync node with API server" May 14 18:05:59.667145 kubelet[2724]: I0514 18:05:59.666432 2724 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:05:59.667145 kubelet[2724]: I0514 18:05:59.666586 2724 kubelet.go:314] "Adding apiserver pod source" May 14 18:05:59.667145 kubelet[2724]: I0514 18:05:59.666611 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:05:59.672390 kubelet[2724]: I0514 18:05:59.671657 2724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:05:59.679720 kubelet[2724]: I0514 18:05:59.679058 2724 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:05:59.680026 kubelet[2724]: I0514 18:05:59.679997 2724 server.go:1269] "Started kubelet" May 14 18:05:59.688828 kubelet[2724]: I0514 18:05:59.688768 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:05:59.694028 kubelet[2724]: I0514 18:05:59.693865 2724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:05:59.705498 kubelet[2724]: I0514 18:05:59.704304 2724 server.go:460] "Adding debug handlers to kubelet server" May 14 18:05:59.711829 kubelet[2724]: I0514 18:05:59.711701 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:05:59.712146 kubelet[2724]: I0514 18:05:59.712120 2724 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:05:59.712508 kubelet[2724]: I0514 18:05:59.712478 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:05:59.722267 kubelet[2724]: I0514 18:05:59.722218 2724 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:05:59.722480 kubelet[2724]: E0514 18:05:59.722399 2724 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334.0.0-a-4c74b6421c\" not found" May 14 18:05:59.723785 kubelet[2724]: I0514 18:05:59.723744 2724 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:05:59.723993 kubelet[2724]: I0514 18:05:59.723963 2724 reconciler.go:26] "Reconciler: start to sync state" May 14 18:05:59.744945 kubelet[2724]: I0514 18:05:59.744898 2724 factory.go:221] Registration of the systemd container factory successfully May 14 18:05:59.747439 kubelet[2724]: I0514 18:05:59.747391 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:05:59.758342 kubelet[2724]: I0514 18:05:59.757813 2724 factory.go:221] Registration of the containerd container factory successfully May 14 18:05:59.784429 kubelet[2724]: E0514 18:05:59.784392 2724 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:05:59.795805 kubelet[2724]: I0514 18:05:59.794374 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:05:59.805448 kubelet[2724]: I0514 18:05:59.805312 2724 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:05:59.809582 kubelet[2724]: I0514 18:05:59.808082 2724 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:05:59.809582 kubelet[2724]: I0514 18:05:59.808142 2724 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:05:59.809582 kubelet[2724]: E0514 18:05:59.808219 2724 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:05:59.901016 kubelet[2724]: I0514 18:05:59.900335 2724 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:05:59.901016 kubelet[2724]: I0514 18:05:59.900359 2724 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:05:59.901016 kubelet[2724]: I0514 18:05:59.900386 2724 state_mem.go:36] "Initialized new in-memory state store" May 14 18:05:59.901016 kubelet[2724]: I0514 18:05:59.900671 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:05:59.901016 kubelet[2724]: I0514 18:05:59.900683 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:05:59.901016 kubelet[2724]: I0514 18:05:59.900704 2724 policy_none.go:49] "None policy: Start" May 14 18:05:59.905849 kubelet[2724]: I0514 18:05:59.905820 2724 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:05:59.906097 kubelet[2724]: I0514 18:05:59.906086 2724 state_mem.go:35] "Initializing new in-memory state store" May 14 18:05:59.906396 kubelet[2724]: I0514 18:05:59.906383 2724 state_mem.go:75] "Updated machine memory state" May 14 18:05:59.908386 kubelet[2724]: E0514 18:05:59.908356 2724 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:05:59.915167 kubelet[2724]: I0514 18:05:59.914371 2724 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:05:59.915167 kubelet[2724]: I0514 18:05:59.914565 
2724 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 18:05:59.915167 kubelet[2724]: I0514 18:05:59.914577 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:05:59.915167 kubelet[2724]: I0514 18:05:59.914931 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:06:00.026803 kubelet[2724]: I0514 18:06:00.026754 2724 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.044513 kubelet[2724]: I0514 18:06:00.044465 2724 kubelet_node_status.go:111] "Node was previously registered" node="ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.044724 kubelet[2724]: I0514 18:06:00.044576 2724 kubelet_node_status.go:75] "Successfully registered node" node="ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.122599 kubelet[2724]: W0514 18:06:00.121106 2724 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:06:00.122599 kubelet[2724]: E0514 18:06:00.121222 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.124876 kubelet[2724]: W0514 18:06:00.124814 2724 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:06:00.125150 kubelet[2724]: W0514 18:06:00.125110 2724 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:06:00.127172 kubelet[2724]: I0514 18:06:00.127124 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccead9d3e1b879f5ea534788f6c0ff68-ca-certs\") pod \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" (UID: \"ccead9d3e1b879f5ea534788f6c0ff68\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.127172 kubelet[2724]: I0514 18:06:00.127164 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccead9d3e1b879f5ea534788f6c0ff68-k8s-certs\") pod \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" (UID: \"ccead9d3e1b879f5ea534788f6c0ff68\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.127172 kubelet[2724]: I0514 18:06:00.127196 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccead9d3e1b879f5ea534788f6c0ff68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" (UID: \"ccead9d3e1b879f5ea534788f6c0ff68\") " pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.127172 kubelet[2724]: I0514 18:06:00.127216 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-k8s-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.128079 kubelet[2724]: I0514 18:06:00.127245 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.128079 kubelet[2724]: I0514 18:06:00.127262 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fefa317f505e9b09fd19397a3558ca88-kubeconfig\") pod \"kube-scheduler-ci-4334.0.0-a-4c74b6421c\" (UID: \"fefa317f505e9b09fd19397a3558ca88\") " pod="kube-system/kube-scheduler-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.128079 kubelet[2724]: I0514 18:06:00.127294 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-ca-certs\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.128079 kubelet[2724]: I0514 18:06:00.127311 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-flexvolume-dir\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.128079 kubelet[2724]: I0514 18:06:00.127329 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ab069fef5062c30a53eb78732fcf0bb-kubeconfig\") pod \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" (UID: \"4ab069fef5062c30a53eb78732fcf0bb\") " pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.422414 kubelet[2724]: E0514 18:06:00.421983 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:00.425385 kubelet[2724]: E0514 18:06:00.425337 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:00.425854 kubelet[2724]: E0514 18:06:00.425808 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:00.671008 kubelet[2724]: I0514 18:06:00.669533 2724 apiserver.go:52] "Watching apiserver"
May 14 18:06:00.724908 kubelet[2724]: I0514 18:06:00.724728 2724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 14 18:06:00.860999 kubelet[2724]: E0514 18:06:00.860904 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:00.895695 kubelet[2724]: W0514 18:06:00.895653 2724 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:06:00.895930 kubelet[2724]: E0514 18:06:00.895746 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4334.0.0-a-4c74b6421c\" already exists" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.896114 kubelet[2724]: E0514 18:06:00.896034 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:00.936162 kubelet[2724]: W0514 18:06:00.936047 2724 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 14 18:06:00.936523 kubelet[2724]: E0514 18:06:00.936405 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334.0.0-a-4c74b6421c\" already exists" pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c"
May 14 18:06:00.936931 kubelet[2724]: E0514 18:06:00.936874 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:00.980775 kubelet[2724]: I0514 18:06:00.980523 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334.0.0-a-4c74b6421c" podStartSLOduration=2.980478044 podStartE2EDuration="2.980478044s" podCreationTimestamp="2025-05-14 18:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:00.946361004 +0000 UTC m=+1.388799962" watchObservedRunningTime="2025-05-14 18:06:00.980478044 +0000 UTC m=+1.422917009"
May 14 18:06:01.010995 kubelet[2724]: I0514 18:06:01.010901 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334.0.0-a-4c74b6421c" podStartSLOduration=1.010873851 podStartE2EDuration="1.010873851s" podCreationTimestamp="2025-05-14 18:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:00.984270896 +0000 UTC m=+1.426709967" watchObservedRunningTime="2025-05-14 18:06:01.010873851 +0000 UTC m=+1.453312812"
May 14 18:06:01.043344 kubelet[2724]: I0514 18:06:01.043259 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334.0.0-a-4c74b6421c" podStartSLOduration=1.043236659 podStartE2EDuration="1.043236659s" podCreationTimestamp="2025-05-14 18:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:01.011828012 +0000 UTC m=+1.454266971" watchObservedRunningTime="2025-05-14 18:06:01.043236659 +0000 UTC m=+1.485675619"
May 14 18:06:01.863709 kubelet[2724]: E0514 18:06:01.863637 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:01.867849 kubelet[2724]: E0514 18:06:01.867146 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:03.086328 kubelet[2724]: I0514 18:06:03.086283 2724 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 18:06:03.089206 containerd[1545]: time="2025-05-14T18:06:03.088788021Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 18:06:03.090336 kubelet[2724]: I0514 18:06:03.090300 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 18:06:03.858629 kubelet[2724]: I0514 18:06:03.858572 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34459101-0928-4b54-9399-456860d0d74d-xtables-lock\") pod \"kube-proxy-prntc\" (UID: \"34459101-0928-4b54-9399-456860d0d74d\") " pod="kube-system/kube-proxy-prntc"
May 14 18:06:03.858824 kubelet[2724]: I0514 18:06:03.858639 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34459101-0928-4b54-9399-456860d0d74d-kube-proxy\") pod \"kube-proxy-prntc\" (UID: \"34459101-0928-4b54-9399-456860d0d74d\") " pod="kube-system/kube-proxy-prntc"
May 14 18:06:03.858824 kubelet[2724]: I0514 18:06:03.858671 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34459101-0928-4b54-9399-456860d0d74d-lib-modules\") pod \"kube-proxy-prntc\" (UID: \"34459101-0928-4b54-9399-456860d0d74d\") " pod="kube-system/kube-proxy-prntc"
May 14 18:06:03.858824 kubelet[2724]: I0514 18:06:03.858700 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzdj\" (UniqueName: \"kubernetes.io/projected/34459101-0928-4b54-9399-456860d0d74d-kube-api-access-wlzdj\") pod \"kube-proxy-prntc\" (UID: \"34459101-0928-4b54-9399-456860d0d74d\") " pod="kube-system/kube-proxy-prntc"
May 14 18:06:03.863549 systemd[1]: Created slice kubepods-besteffort-pod34459101_0928_4b54_9399_456860d0d74d.slice - libcontainer container kubepods-besteffort-pod34459101_0928_4b54_9399_456860d0d74d.slice.
May 14 18:06:04.130808 systemd[1]: Created slice kubepods-besteffort-pod615e4f41_6d61_4a0a_b3b8_18aa917fa68f.slice - libcontainer container kubepods-besteffort-pod615e4f41_6d61_4a0a_b3b8_18aa917fa68f.slice.
May 14 18:06:04.162008 kubelet[2724]: I0514 18:06:04.161823 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp55h\" (UniqueName: \"kubernetes.io/projected/615e4f41-6d61-4a0a-b3b8-18aa917fa68f-kube-api-access-qp55h\") pod \"tigera-operator-6f6897fdc5-6zjkr\" (UID: \"615e4f41-6d61-4a0a-b3b8-18aa917fa68f\") " pod="tigera-operator/tigera-operator-6f6897fdc5-6zjkr"
May 14 18:06:04.162008 kubelet[2724]: I0514 18:06:04.161902 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/615e4f41-6d61-4a0a-b3b8-18aa917fa68f-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-6zjkr\" (UID: \"615e4f41-6d61-4a0a-b3b8-18aa917fa68f\") " pod="tigera-operator/tigera-operator-6f6897fdc5-6zjkr"
May 14 18:06:04.177018 kubelet[2724]: E0514 18:06:04.174353 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:04.177626 containerd[1545]: time="2025-05-14T18:06:04.177549072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prntc,Uid:34459101-0928-4b54-9399-456860d0d74d,Namespace:kube-system,Attempt:0,}"
May 14 18:06:04.212017 containerd[1545]: time="2025-05-14T18:06:04.209603070Z" level=info msg="connecting to shim d5fa7a888f5fd8bba9c323d09b42c80155de78ff45bcd1fa18338a40e5c2ec03" address="unix:///run/containerd/s/5efaac3b894cbe253587502512ccd43cc2dc4d2b7a794cd9a8473dbaface052c" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:04.279370 systemd[1]: Started cri-containerd-d5fa7a888f5fd8bba9c323d09b42c80155de78ff45bcd1fa18338a40e5c2ec03.scope - libcontainer container d5fa7a888f5fd8bba9c323d09b42c80155de78ff45bcd1fa18338a40e5c2ec03.
May 14 18:06:04.445992 containerd[1545]: time="2025-05-14T18:06:04.445903021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-6zjkr,Uid:615e4f41-6d61-4a0a-b3b8-18aa917fa68f,Namespace:tigera-operator,Attempt:0,}"
May 14 18:06:04.479658 containerd[1545]: time="2025-05-14T18:06:04.479498777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prntc,Uid:34459101-0928-4b54-9399-456860d0d74d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5fa7a888f5fd8bba9c323d09b42c80155de78ff45bcd1fa18338a40e5c2ec03\""
May 14 18:06:04.482002 kubelet[2724]: E0514 18:06:04.481806 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:04.493501 containerd[1545]: time="2025-05-14T18:06:04.493440418Z" level=info msg="CreateContainer within sandbox \"d5fa7a888f5fd8bba9c323d09b42c80155de78ff45bcd1fa18338a40e5c2ec03\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 18:06:04.523384 containerd[1545]: time="2025-05-14T18:06:04.523275441Z" level=info msg="connecting to shim 240c5652814beae99e63d67ac5786cc6660b9b4974e06d24121e4c40f179ebda" address="unix:///run/containerd/s/bb39508b3364a195576efed98b7c9686a3b3abcc1ccb981b774d16bffc698b04" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:04.525361 containerd[1545]: time="2025-05-14T18:06:04.525304427Z" level=info msg="Container 70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:04.561778 containerd[1545]: time="2025-05-14T18:06:04.559637707Z" level=info msg="CreateContainer within sandbox \"d5fa7a888f5fd8bba9c323d09b42c80155de78ff45bcd1fa18338a40e5c2ec03\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e\""
May 14 18:06:04.568447 containerd[1545]: time="2025-05-14T18:06:04.568149112Z" level=info msg="StartContainer for \"70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e\""
May 14 18:06:04.574319 containerd[1545]: time="2025-05-14T18:06:04.574263364Z" level=info msg="connecting to shim 70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e" address="unix:///run/containerd/s/5efaac3b894cbe253587502512ccd43cc2dc4d2b7a794cd9a8473dbaface052c" protocol=ttrpc version=3
May 14 18:06:04.600371 systemd[1]: Started cri-containerd-240c5652814beae99e63d67ac5786cc6660b9b4974e06d24121e4c40f179ebda.scope - libcontainer container 240c5652814beae99e63d67ac5786cc6660b9b4974e06d24121e4c40f179ebda.
May 14 18:06:04.637270 systemd[1]: Started cri-containerd-70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e.scope - libcontainer container 70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e.
May 14 18:06:04.717236 containerd[1545]: time="2025-05-14T18:06:04.716706642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-6zjkr,Uid:615e4f41-6d61-4a0a-b3b8-18aa917fa68f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"240c5652814beae99e63d67ac5786cc6660b9b4974e06d24121e4c40f179ebda\""
May 14 18:06:04.731048 containerd[1545]: time="2025-05-14T18:06:04.729990546Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 14 18:06:04.744018 systemd-resolved[1401]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
May 14 18:06:04.772438 containerd[1545]: time="2025-05-14T18:06:04.771067354Z" level=info msg="StartContainer for \"70f1119fd5462dbe424cdbe6e9a718aafadcd7bcf561c363e3d66199a3fb0d6e\" returns successfully"
May 14 18:06:04.806094 sudo[1792]: pam_unix(sudo:session): session closed for user root
May 14 18:06:04.810199 sshd[1791]: Connection closed by 139.178.89.65 port 55112
May 14 18:06:04.811125 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
May 14 18:06:04.819129 systemd[1]: sshd@8-165.232.128.115:22-139.178.89.65:55112.service: Deactivated successfully.
May 14 18:06:04.819292 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit.
May 14 18:06:04.823556 systemd[1]: session-9.scope: Deactivated successfully.
May 14 18:06:04.824281 systemd[1]: session-9.scope: Consumed 5.697s CPU time, 163.9M memory peak.
May 14 18:06:04.828080 systemd-logind[1517]: Removed session 9.
May 14 18:06:04.893752 kubelet[2724]: E0514 18:06:04.893679 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:06.264119 kubelet[2724]: E0514 18:06:06.263797 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:06.289435 kubelet[2724]: I0514 18:06:06.289303 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prntc" podStartSLOduration=3.289274812 podStartE2EDuration="3.289274812s" podCreationTimestamp="2025-05-14 18:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:04.911034291 +0000 UTC m=+5.353473291" watchObservedRunningTime="2025-05-14 18:06:06.289274812 +0000 UTC m=+6.731713771"
May 14 18:06:06.450785 update_engine[1521]: I20250514 18:06:06.450023 1521 update_attempter.cc:509] Updating boot flags...
May 14 18:06:06.810634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268615950.mount: Deactivated successfully.
May 14 18:06:06.898110 kubelet[2724]: E0514 18:06:06.898048 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:07.538618 containerd[1545]: time="2025-05-14T18:06:07.538105723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:07.540027 containerd[1545]: time="2025-05-14T18:06:07.539358383Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 14 18:06:07.540785 containerd[1545]: time="2025-05-14T18:06:07.540507243Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:07.544242 containerd[1545]: time="2025-05-14T18:06:07.544180906Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:07.545386 containerd[1545]: time="2025-05-14T18:06:07.545171342Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.815132624s"
May 14 18:06:07.545386 containerd[1545]: time="2025-05-14T18:06:07.545232702Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 14 18:06:07.551622 containerd[1545]: time="2025-05-14T18:06:07.551556192Z" level=info msg="CreateContainer within sandbox \"240c5652814beae99e63d67ac5786cc6660b9b4974e06d24121e4c40f179ebda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 14 18:06:07.563007 containerd[1545]: time="2025-05-14T18:06:07.562922270Z" level=info msg="Container 4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:07.570625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805196258.mount: Deactivated successfully.
May 14 18:06:07.577960 containerd[1545]: time="2025-05-14T18:06:07.577911587Z" level=info msg="CreateContainer within sandbox \"240c5652814beae99e63d67ac5786cc6660b9b4974e06d24121e4c40f179ebda\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e\""
May 14 18:06:07.580212 containerd[1545]: time="2025-05-14T18:06:07.580151202Z" level=info msg="StartContainer for \"4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e\""
May 14 18:06:07.582740 containerd[1545]: time="2025-05-14T18:06:07.582642591Z" level=info msg="connecting to shim 4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e" address="unix:///run/containerd/s/bb39508b3364a195576efed98b7c9686a3b3abcc1ccb981b774d16bffc698b04" protocol=ttrpc version=3
May 14 18:06:07.623433 systemd[1]: Started cri-containerd-4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e.scope - libcontainer container 4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e.
May 14 18:06:07.678668 containerd[1545]: time="2025-05-14T18:06:07.678561123Z" level=info msg="StartContainer for \"4ff181c597cd7a353f10a86c872bf8c006ec1c5a6a93ebceff89fa77ed2f666e\" returns successfully"
May 14 18:06:08.686987 kubelet[2724]: E0514 18:06:08.686831 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:08.710955 kubelet[2724]: I0514 18:06:08.710863 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-6zjkr" podStartSLOduration=2.891814233 podStartE2EDuration="5.710838775s" podCreationTimestamp="2025-05-14 18:06:03 +0000 UTC" firstStartedPulling="2025-05-14 18:06:04.728334671 +0000 UTC m=+5.170773610" lastFinishedPulling="2025-05-14 18:06:07.547359215 +0000 UTC m=+7.989798152" observedRunningTime="2025-05-14 18:06:07.922829167 +0000 UTC m=+8.365268149" watchObservedRunningTime="2025-05-14 18:06:08.710838775 +0000 UTC m=+9.153277724"
May 14 18:06:08.906902 kubelet[2724]: E0514 18:06:08.906855 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:09.436779 kubelet[2724]: E0514 18:06:09.436682 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:09.911779 kubelet[2724]: E0514 18:06:09.911638 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:10.928432 systemd[1]: Created slice kubepods-besteffort-podd8317ebc_629d_4f14_93e0_a140724cee5d.slice - libcontainer container kubepods-besteffort-podd8317ebc_629d_4f14_93e0_a140724cee5d.slice.
May 14 18:06:11.016292 kubelet[2724]: I0514 18:06:11.016214 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d8317ebc-629d-4f14-93e0-a140724cee5d-typha-certs\") pod \"calico-typha-6bf9666477-r646v\" (UID: \"d8317ebc-629d-4f14-93e0-a140724cee5d\") " pod="calico-system/calico-typha-6bf9666477-r646v"
May 14 18:06:11.017057 kubelet[2724]: I0514 18:06:11.016365 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8317ebc-629d-4f14-93e0-a140724cee5d-tigera-ca-bundle\") pod \"calico-typha-6bf9666477-r646v\" (UID: \"d8317ebc-629d-4f14-93e0-a140724cee5d\") " pod="calico-system/calico-typha-6bf9666477-r646v"
May 14 18:06:11.017057 kubelet[2724]: I0514 18:06:11.016409 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sr5g\" (UniqueName: \"kubernetes.io/projected/d8317ebc-629d-4f14-93e0-a140724cee5d-kube-api-access-4sr5g\") pod \"calico-typha-6bf9666477-r646v\" (UID: \"d8317ebc-629d-4f14-93e0-a140724cee5d\") " pod="calico-system/calico-typha-6bf9666477-r646v"
May 14 18:06:11.087441 systemd[1]: Created slice kubepods-besteffort-pod217d77fa_0460_4fdd_b359_c36dd4ff1be7.slice - libcontainer container kubepods-besteffort-pod217d77fa_0460_4fdd_b359_c36dd4ff1be7.slice.
May 14 18:06:11.116743 kubelet[2724]: I0514 18:06:11.116675 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-flexvol-driver-host\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.116952 kubelet[2724]: I0514 18:06:11.116744 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-cni-log-dir\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.116952 kubelet[2724]: I0514 18:06:11.116807 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-var-lib-calico\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.116952 kubelet[2724]: I0514 18:06:11.116827 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-cni-bin-dir\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.116952 kubelet[2724]: I0514 18:06:11.116854 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-policysync\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.116952 kubelet[2724]: I0514 18:06:11.116880 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-lib-modules\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.117253 kubelet[2724]: I0514 18:06:11.116902 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/217d77fa-0460-4fdd-b359-c36dd4ff1be7-node-certs\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.117253 kubelet[2724]: I0514 18:06:11.116921 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-var-run-calico\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.117253 kubelet[2724]: I0514 18:06:11.116942 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-xtables-lock\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.117253 kubelet[2724]: I0514 18:06:11.116964 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217d77fa-0460-4fdd-b359-c36dd4ff1be7-tigera-ca-bundle\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.117253 kubelet[2724]: I0514 18:06:11.117008 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/217d77fa-0460-4fdd-b359-c36dd4ff1be7-cni-net-dir\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.117438 kubelet[2724]: I0514 18:06:11.117036 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-679v4\" (UniqueName: \"kubernetes.io/projected/217d77fa-0460-4fdd-b359-c36dd4ff1be7-kube-api-access-679v4\") pod \"calico-node-t5qgr\" (UID: \"217d77fa-0460-4fdd-b359-c36dd4ff1be7\") " pod="calico-system/calico-node-t5qgr"
May 14 18:06:11.229445 kubelet[2724]: E0514 18:06:11.229242 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.229445 kubelet[2724]: W0514 18:06:11.229273 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.229445 kubelet[2724]: E0514 18:06:11.229303 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 18:06:11.238010 kubelet[2724]: E0514 18:06:11.237705 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:11.238933 containerd[1545]: time="2025-05-14T18:06:11.238456216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf9666477-r646v,Uid:d8317ebc-629d-4f14-93e0-a140724cee5d,Namespace:calico-system,Attempt:0,}"
May 14 18:06:11.278641 kubelet[2724]: E0514 18:06:11.278560 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.278940 kubelet[2724]: W0514 18:06:11.278800 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.279644 kubelet[2724]: E0514 18:06:11.279423 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 18:06:11.292495 containerd[1545]: time="2025-05-14T18:06:11.292424061Z" level=info msg="connecting to shim 0e0f75a1fd6c5877de98de3456e4215bad53fe7d81e537dc3222fb1648f6813d" address="unix:///run/containerd/s/17f7f5d5b4f05f8b41ea82360ed690e179fccd4a3a40fd403bd3cd25fc943921" namespace=k8s.io protocol=ttrpc version=3
May 14 18:06:11.316244 kubelet[2724]: E0514 18:06:11.315398 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915"
May 14 18:06:11.350316 systemd[1]: Started cri-containerd-0e0f75a1fd6c5877de98de3456e4215bad53fe7d81e537dc3222fb1648f6813d.scope - libcontainer container 0e0f75a1fd6c5877de98de3456e4215bad53fe7d81e537dc3222fb1648f6813d.
May 14 18:06:11.395630 kubelet[2724]: E0514 18:06:11.395567 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:06:11.397584 containerd[1545]: time="2025-05-14T18:06:11.397276903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t5qgr,Uid:217d77fa-0460-4fdd-b359-c36dd4ff1be7,Namespace:calico-system,Attempt:0,}"
May 14 18:06:11.406080 kubelet[2724]: E0514 18:06:11.406028 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.406424 kubelet[2724]: W0514 18:06:11.406069 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.406424 kubelet[2724]: E0514 18:06:11.406127 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.406698 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.412299 kubelet[2724]: W0514 18:06:11.406719 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.406770 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.408425 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.412299 kubelet[2724]: W0514 18:06:11.408473 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.408504 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.408957 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.412299 kubelet[2724]: W0514 18:06:11.409006 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.409029 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 18:06:11.412299 kubelet[2724]: E0514 18:06:11.409327 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 18:06:11.412596 kubelet[2724]: W0514 18:06:11.409342 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 18:06:11.412596 kubelet[2724]: E0514 18:06:11.409383 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 14 18:06:11.412596 kubelet[2724]: E0514 18:06:11.409656 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.412596 kubelet[2724]: W0514 18:06:11.409669 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.412596 kubelet[2724]: E0514 18:06:11.409697 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.412596 kubelet[2724]: E0514 18:06:11.409911 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.412596 kubelet[2724]: W0514 18:06:11.409921 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.412596 kubelet[2724]: E0514 18:06:11.409933 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.412596 kubelet[2724]: E0514 18:06:11.410493 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.412596 kubelet[2724]: W0514 18:06:11.410506 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.412886 kubelet[2724]: E0514 18:06:11.410521 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.412886 kubelet[2724]: E0514 18:06:11.412359 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.412886 kubelet[2724]: W0514 18:06:11.412396 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.412886 kubelet[2724]: E0514 18:06:11.412417 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.412886 kubelet[2724]: E0514 18:06:11.412661 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.412886 kubelet[2724]: W0514 18:06:11.412673 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.412886 kubelet[2724]: E0514 18:06:11.412687 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.413164 kubelet[2724]: E0514 18:06:11.412898 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.413164 kubelet[2724]: W0514 18:06:11.412909 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.413164 kubelet[2724]: E0514 18:06:11.412922 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.413291 kubelet[2724]: E0514 18:06:11.413203 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.413291 kubelet[2724]: W0514 18:06:11.413216 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.413291 kubelet[2724]: E0514 18:06:11.413240 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.414478 kubelet[2724]: E0514 18:06:11.413542 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.414478 kubelet[2724]: W0514 18:06:11.413557 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.414478 kubelet[2724]: E0514 18:06:11.413570 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.414478 kubelet[2724]: E0514 18:06:11.413871 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.414478 kubelet[2724]: W0514 18:06:11.413883 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.414478 kubelet[2724]: E0514 18:06:11.413901 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.414478 kubelet[2724]: E0514 18:06:11.414194 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.414478 kubelet[2724]: W0514 18:06:11.414207 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.414478 kubelet[2724]: E0514 18:06:11.414244 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.414844 kubelet[2724]: E0514 18:06:11.414588 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.414844 kubelet[2724]: W0514 18:06:11.414601 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.414844 kubelet[2724]: E0514 18:06:11.414616 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.415727 kubelet[2724]: E0514 18:06:11.415367 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.415727 kubelet[2724]: W0514 18:06:11.415387 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.415727 kubelet[2724]: E0514 18:06:11.415402 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.416317 kubelet[2724]: E0514 18:06:11.416018 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.416317 kubelet[2724]: W0514 18:06:11.416033 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.416317 kubelet[2724]: E0514 18:06:11.416049 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.416817 kubelet[2724]: E0514 18:06:11.416638 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.417126 kubelet[2724]: W0514 18:06:11.416914 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.417126 kubelet[2724]: E0514 18:06:11.416946 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.417572 kubelet[2724]: E0514 18:06:11.417380 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.417572 kubelet[2724]: W0514 18:06:11.417398 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.417572 kubelet[2724]: E0514 18:06:11.417415 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.420099 kubelet[2724]: E0514 18:06:11.419933 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.420262 kubelet[2724]: W0514 18:06:11.420098 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.420262 kubelet[2724]: E0514 18:06:11.420138 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.420262 kubelet[2724]: I0514 18:06:11.420202 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhxs\" (UniqueName: \"kubernetes.io/projected/84f10dc4-cc8f-4f62-914c-3e3369d05915-kube-api-access-tdhxs\") pod \"csi-node-driver-kb4r2\" (UID: \"84f10dc4-cc8f-4f62-914c-3e3369d05915\") " pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:11.422190 kubelet[2724]: E0514 18:06:11.421371 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.422190 kubelet[2724]: W0514 18:06:11.421404 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.422190 kubelet[2724]: E0514 18:06:11.421443 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.422190 kubelet[2724]: E0514 18:06:11.421737 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.422190 kubelet[2724]: W0514 18:06:11.421760 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.422190 kubelet[2724]: E0514 18:06:11.421795 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.424106 kubelet[2724]: E0514 18:06:11.422534 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.424106 kubelet[2724]: W0514 18:06:11.422551 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.424106 kubelet[2724]: E0514 18:06:11.422570 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.424106 kubelet[2724]: I0514 18:06:11.422629 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84f10dc4-cc8f-4f62-914c-3e3369d05915-socket-dir\") pod \"csi-node-driver-kb4r2\" (UID: \"84f10dc4-cc8f-4f62-914c-3e3369d05915\") " pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:11.424106 kubelet[2724]: E0514 18:06:11.422893 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.424106 kubelet[2724]: W0514 18:06:11.422910 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.424106 kubelet[2724]: E0514 18:06:11.422932 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.424106 kubelet[2724]: E0514 18:06:11.423467 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.424106 kubelet[2724]: W0514 18:06:11.423483 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.424392 kubelet[2724]: E0514 18:06:11.423514 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.425090 kubelet[2724]: E0514 18:06:11.425063 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.425090 kubelet[2724]: W0514 18:06:11.425084 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.425282 kubelet[2724]: E0514 18:06:11.425104 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.425282 kubelet[2724]: I0514 18:06:11.425156 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84f10dc4-cc8f-4f62-914c-3e3369d05915-registration-dir\") pod \"csi-node-driver-kb4r2\" (UID: \"84f10dc4-cc8f-4f62-914c-3e3369d05915\") " pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:11.426690 kubelet[2724]: E0514 18:06:11.425490 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.426690 kubelet[2724]: W0514 18:06:11.425508 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.426690 kubelet[2724]: E0514 18:06:11.425528 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.426690 kubelet[2724]: E0514 18:06:11.425760 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.426690 kubelet[2724]: W0514 18:06:11.425772 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.426690 kubelet[2724]: E0514 18:06:11.425797 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.426690 kubelet[2724]: E0514 18:06:11.426064 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.426690 kubelet[2724]: W0514 18:06:11.426077 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.426690 kubelet[2724]: E0514 18:06:11.426092 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.427041 kubelet[2724]: I0514 18:06:11.426137 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/84f10dc4-cc8f-4f62-914c-3e3369d05915-varrun\") pod \"csi-node-driver-kb4r2\" (UID: \"84f10dc4-cc8f-4f62-914c-3e3369d05915\") " pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:11.427041 kubelet[2724]: E0514 18:06:11.426576 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.427041 kubelet[2724]: W0514 18:06:11.426592 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.427041 kubelet[2724]: E0514 18:06:11.426625 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.427041 kubelet[2724]: I0514 18:06:11.426654 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84f10dc4-cc8f-4f62-914c-3e3369d05915-kubelet-dir\") pod \"csi-node-driver-kb4r2\" (UID: \"84f10dc4-cc8f-4f62-914c-3e3369d05915\") " pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:11.428052 kubelet[2724]: E0514 18:06:11.427308 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.428052 kubelet[2724]: W0514 18:06:11.427328 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.428052 kubelet[2724]: E0514 18:06:11.427364 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.428164 kubelet[2724]: E0514 18:06:11.428136 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.428164 kubelet[2724]: W0514 18:06:11.428151 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.428267 kubelet[2724]: E0514 18:06:11.428179 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.428750 kubelet[2724]: E0514 18:06:11.428440 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.428750 kubelet[2724]: W0514 18:06:11.428469 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.428750 kubelet[2724]: E0514 18:06:11.428483 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.428750 kubelet[2724]: E0514 18:06:11.428691 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.428750 kubelet[2724]: W0514 18:06:11.428702 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.428750 kubelet[2724]: E0514 18:06:11.428716 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.464221 containerd[1545]: time="2025-05-14T18:06:11.464055271Z" level=info msg="connecting to shim b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704" address="unix:///run/containerd/s/d317632ae470590b543485cf0c0bed7f5804134a8f4cd9afd0cc5c4e031e0e03" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:11.526428 systemd[1]: Started cri-containerd-b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704.scope - libcontainer container b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704. 
May 14 18:06:11.528895 kubelet[2724]: E0514 18:06:11.528850 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.528895 kubelet[2724]: W0514 18:06:11.528881 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.529079 kubelet[2724]: E0514 18:06:11.528931 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.530024 kubelet[2724]: E0514 18:06:11.529735 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.530024 kubelet[2724]: W0514 18:06:11.529805 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.530024 kubelet[2724]: E0514 18:06:11.529831 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.530161 kubelet[2724]: E0514 18:06:11.530066 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.530161 kubelet[2724]: W0514 18:06:11.530075 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.530161 kubelet[2724]: E0514 18:06:11.530084 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.531645 kubelet[2724]: E0514 18:06:11.530357 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.531645 kubelet[2724]: W0514 18:06:11.530370 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.531645 kubelet[2724]: E0514 18:06:11.530391 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.531645 kubelet[2724]: E0514 18:06:11.530620 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.531645 kubelet[2724]: W0514 18:06:11.530638 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.531645 kubelet[2724]: E0514 18:06:11.530655 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.532066 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533476 kubelet[2724]: W0514 18:06:11.532083 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.532133 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.532403 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533476 kubelet[2724]: W0514 18:06:11.532447 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.532458 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.532785 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533476 kubelet[2724]: W0514 18:06:11.532793 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.532812 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.533476 kubelet[2724]: E0514 18:06:11.533011 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533868 kubelet[2724]: W0514 18:06:11.533017 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.533868 kubelet[2724]: E0514 18:06:11.533033 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.533868 kubelet[2724]: E0514 18:06:11.533234 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533868 kubelet[2724]: W0514 18:06:11.533247 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.533868 kubelet[2724]: E0514 18:06:11.533270 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.533868 kubelet[2724]: E0514 18:06:11.533498 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533868 kubelet[2724]: W0514 18:06:11.533507 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.533868 kubelet[2724]: E0514 18:06:11.533523 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.533868 kubelet[2724]: E0514 18:06:11.533747 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.533868 kubelet[2724]: W0514 18:06:11.533755 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.533890 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.534175 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.536481 kubelet[2724]: W0514 18:06:11.534184 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.534215 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.534724 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.536481 kubelet[2724]: W0514 18:06:11.534733 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.534796 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.535133 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.536481 kubelet[2724]: W0514 18:06:11.535178 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.536481 kubelet[2724]: E0514 18:06:11.535250 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.536741 kubelet[2724]: E0514 18:06:11.535708 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.536741 kubelet[2724]: W0514 18:06:11.535719 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.536741 kubelet[2724]: E0514 18:06:11.535732 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.536741 kubelet[2724]: E0514 18:06:11.536730 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.539889 kubelet[2724]: W0514 18:06:11.536744 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.539889 kubelet[2724]: E0514 18:06:11.537278 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.539889 kubelet[2724]: E0514 18:06:11.538041 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.539889 kubelet[2724]: W0514 18:06:11.538054 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.539889 kubelet[2724]: E0514 18:06:11.538580 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.539889 kubelet[2724]: E0514 18:06:11.539093 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.539889 kubelet[2724]: W0514 18:06:11.539107 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.539889 kubelet[2724]: E0514 18:06:11.539441 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.541507 kubelet[2724]: E0514 18:06:11.541465 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.541507 kubelet[2724]: W0514 18:06:11.541492 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.542962 kubelet[2724]: E0514 18:06:11.541896 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.542962 kubelet[2724]: E0514 18:06:11.542443 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.542962 kubelet[2724]: W0514 18:06:11.542460 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.542962 kubelet[2724]: E0514 18:06:11.543010 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.542962 kubelet[2724]: E0514 18:06:11.543159 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.542962 kubelet[2724]: W0514 18:06:11.543171 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.542962 kubelet[2724]: E0514 18:06:11.543192 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.544902 kubelet[2724]: E0514 18:06:11.544879 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.544902 kubelet[2724]: W0514 18:06:11.544899 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.545051 kubelet[2724]: E0514 18:06:11.545004 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.546613 kubelet[2724]: E0514 18:06:11.546584 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.546613 kubelet[2724]: W0514 18:06:11.546608 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.546771 kubelet[2724]: E0514 18:06:11.546634 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.546771 kubelet[2724]: E0514 18:06:11.547214 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.546771 kubelet[2724]: W0514 18:06:11.547230 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.546771 kubelet[2724]: E0514 18:06:11.547246 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:06:11.572001 kubelet[2724]: E0514 18:06:11.571945 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:06:11.572001 kubelet[2724]: W0514 18:06:11.571994 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:06:11.572001 kubelet[2724]: E0514 18:06:11.572025 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:06:11.656305 containerd[1545]: time="2025-05-14T18:06:11.656230946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t5qgr,Uid:217d77fa-0460-4fdd-b359-c36dd4ff1be7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\"" May 14 18:06:11.658359 kubelet[2724]: E0514 18:06:11.658308 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:11.660887 containerd[1545]: time="2025-05-14T18:06:11.660539286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:06:11.713043 containerd[1545]: time="2025-05-14T18:06:11.712424946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf9666477-r646v,Uid:d8317ebc-629d-4f14-93e0-a140724cee5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e0f75a1fd6c5877de98de3456e4215bad53fe7d81e537dc3222fb1648f6813d\"" May 14 18:06:11.715160 kubelet[2724]: E0514 18:06:11.715130 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:12.808666 kubelet[2724]: E0514 18:06:12.808592 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915" May 14 18:06:13.165515 containerd[1545]: time="2025-05-14T18:06:13.165135313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:13.167587 containerd[1545]: time="2025-05-14T18:06:13.167495439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 18:06:13.168167 containerd[1545]: time="2025-05-14T18:06:13.168119486Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:13.172216 containerd[1545]: time="2025-05-14T18:06:13.172148516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:13.174179 containerd[1545]: time="2025-05-14T18:06:13.173828684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.513245976s" May 14 18:06:13.174179 containerd[1545]: time="2025-05-14T18:06:13.173895294Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 18:06:13.177771 containerd[1545]: time="2025-05-14T18:06:13.177657200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 18:06:13.180327 containerd[1545]: time="2025-05-14T18:06:13.180269407Z" level=info msg="CreateContainer within sandbox \"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:06:13.196369 containerd[1545]: time="2025-05-14T18:06:13.196122038Z" level=info msg="Container a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:13.202593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559304304.mount: Deactivated successfully. May 14 18:06:13.218898 containerd[1545]: time="2025-05-14T18:06:13.218822188Z" level=info msg="CreateContainer within sandbox \"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\"" May 14 18:06:13.221127 containerd[1545]: time="2025-05-14T18:06:13.220994774Z" level=info msg="StartContainer for \"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\"" May 14 18:06:13.225425 containerd[1545]: time="2025-05-14T18:06:13.225292672Z" level=info msg="connecting to shim a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001" address="unix:///run/containerd/s/d317632ae470590b543485cf0c0bed7f5804134a8f4cd9afd0cc5c4e031e0e03" protocol=ttrpc version=3 May 14 18:06:13.273509 systemd[1]: Started cri-containerd-a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001.scope - libcontainer container a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001. 
May 14 18:06:13.373828 containerd[1545]: time="2025-05-14T18:06:13.373702630Z" level=info msg="StartContainer for \"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\" returns successfully" May 14 18:06:13.379897 systemd[1]: cri-containerd-a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001.scope: Deactivated successfully. May 14 18:06:13.393809 containerd[1545]: time="2025-05-14T18:06:13.393575670Z" level=info msg="received exit event container_id:\"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\" id:\"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\" pid:3311 exited_at:{seconds:1747245973 nanos:391752166}" May 14 18:06:13.394296 containerd[1545]: time="2025-05-14T18:06:13.394249572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\" id:\"a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001\" pid:3311 exited_at:{seconds:1747245973 nanos:391752166}" May 14 18:06:13.461920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a157338e67d826b0e65425dd214730970550218933e04040b823250b63c5f001-rootfs.mount: Deactivated successfully. 
May 14 18:06:13.935394 kubelet[2724]: E0514 18:06:13.935048 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:14.809528 kubelet[2724]: E0514 18:06:14.809449 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915" May 14 18:06:16.161562 containerd[1545]: time="2025-05-14T18:06:16.161492306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:16.164280 containerd[1545]: time="2025-05-14T18:06:16.164179418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 18:06:16.165539 containerd[1545]: time="2025-05-14T18:06:16.165456849Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:16.169713 containerd[1545]: time="2025-05-14T18:06:16.169643185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:16.171409 containerd[1545]: time="2025-05-14T18:06:16.171345859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size 
\"31919484\" in 2.993636763s" May 14 18:06:16.171409 containerd[1545]: time="2025-05-14T18:06:16.171405720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 18:06:16.176914 containerd[1545]: time="2025-05-14T18:06:16.176842412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:06:16.203010 containerd[1545]: time="2025-05-14T18:06:16.202675803Z" level=info msg="CreateContainer within sandbox \"0e0f75a1fd6c5877de98de3456e4215bad53fe7d81e537dc3222fb1648f6813d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:06:16.215530 containerd[1545]: time="2025-05-14T18:06:16.214401475Z" level=info msg="Container 7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:16.230479 containerd[1545]: time="2025-05-14T18:06:16.229843176Z" level=info msg="CreateContainer within sandbox \"0e0f75a1fd6c5877de98de3456e4215bad53fe7d81e537dc3222fb1648f6813d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f\"" May 14 18:06:16.233378 containerd[1545]: time="2025-05-14T18:06:16.233279737Z" level=info msg="StartContainer for \"7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f\"" May 14 18:06:16.235940 containerd[1545]: time="2025-05-14T18:06:16.235744510Z" level=info msg="connecting to shim 7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f" address="unix:///run/containerd/s/17f7f5d5b4f05f8b41ea82360ed690e179fccd4a3a40fd403bd3cd25fc943921" protocol=ttrpc version=3 May 14 18:06:16.278712 systemd[1]: Started cri-containerd-7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f.scope - libcontainer container 7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f. 
May 14 18:06:16.367147 containerd[1545]: time="2025-05-14T18:06:16.367043601Z" level=info msg="StartContainer for \"7151ceea55d825e9902008c5753afefe27b6f4f2ea9906d8d92eddfbd512dd5f\" returns successfully" May 14 18:06:16.808939 kubelet[2724]: E0514 18:06:16.808875 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915" May 14 18:06:16.946879 kubelet[2724]: E0514 18:06:16.946830 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:16.970101 kubelet[2724]: I0514 18:06:16.969533 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bf9666477-r646v" podStartSLOduration=2.512308257 podStartE2EDuration="6.969504065s" podCreationTimestamp="2025-05-14 18:06:10 +0000 UTC" firstStartedPulling="2025-05-14 18:06:11.715802648 +0000 UTC m=+12.158241585" lastFinishedPulling="2025-05-14 18:06:16.172998436 +0000 UTC m=+16.615437393" observedRunningTime="2025-05-14 18:06:16.969332546 +0000 UTC m=+17.411771500" watchObservedRunningTime="2025-05-14 18:06:16.969504065 +0000 UTC m=+17.411943024" May 14 18:06:17.949009 kubelet[2724]: I0514 18:06:17.948319 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:17.950303 kubelet[2724]: E0514 18:06:17.950256 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:18.458350 systemd[1]: Started sshd@11-165.232.128.115:22-185.233.247.245:41356.service - OpenSSH per-connection server daemon 
(185.233.247.245:41356). May 14 18:06:18.811483 kubelet[2724]: E0514 18:06:18.808851 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915" May 14 18:06:18.895928 sshd[3392]: Connection closed by 185.233.247.245 port 41356 [preauth] May 14 18:06:18.898797 systemd[1]: sshd@11-165.232.128.115:22-185.233.247.245:41356.service: Deactivated successfully. May 14 18:06:20.810348 kubelet[2724]: E0514 18:06:20.810273 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915" May 14 18:06:20.893057 containerd[1545]: time="2025-05-14T18:06:20.892532999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:20.893794 containerd[1545]: time="2025-05-14T18:06:20.893741470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 18:06:20.894853 containerd[1545]: time="2025-05-14T18:06:20.894796664Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:20.899471 containerd[1545]: time="2025-05-14T18:06:20.898633300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:20.899471 containerd[1545]: 
time="2025-05-14T18:06:20.899256629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.7223496s" May 14 18:06:20.899471 containerd[1545]: time="2025-05-14T18:06:20.899304455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 18:06:20.904543 containerd[1545]: time="2025-05-14T18:06:20.904458484Z" level=info msg="CreateContainer within sandbox \"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:06:20.919018 containerd[1545]: time="2025-05-14T18:06:20.915943880Z" level=info msg="Container 1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:20.948743 containerd[1545]: time="2025-05-14T18:06:20.948655237Z" level=info msg="CreateContainer within sandbox \"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\"" May 14 18:06:20.951244 containerd[1545]: time="2025-05-14T18:06:20.949960702Z" level=info msg="StartContainer for \"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\"" May 14 18:06:20.955893 containerd[1545]: time="2025-05-14T18:06:20.955796910Z" level=info msg="connecting to shim 1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d" address="unix:///run/containerd/s/d317632ae470590b543485cf0c0bed7f5804134a8f4cd9afd0cc5c4e031e0e03" protocol=ttrpc version=3 May 14 18:06:20.995477 systemd[1]: Started 
cri-containerd-1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d.scope - libcontainer container 1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d. May 14 18:06:21.062561 containerd[1545]: time="2025-05-14T18:06:21.061443259Z" level=info msg="StartContainer for \"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\" returns successfully" May 14 18:06:21.690741 systemd[1]: cri-containerd-1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d.scope: Deactivated successfully. May 14 18:06:21.691198 systemd[1]: cri-containerd-1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d.scope: Consumed 642ms CPU time, 147.5M memory peak, 1.3M read from disk, 154M written to disk. May 14 18:06:21.696604 containerd[1545]: time="2025-05-14T18:06:21.696469664Z" level=info msg="received exit event container_id:\"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\" id:\"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\" pid:3412 exited_at:{seconds:1747245981 nanos:696191600}" May 14 18:06:21.699294 containerd[1545]: time="2025-05-14T18:06:21.698225761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\" id:\"1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d\" pid:3412 exited_at:{seconds:1747245981 nanos:696191600}" May 14 18:06:21.774046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff40732821c26ee0af1a9ead2da4675f468229bb78405d7acb3d57c7e53c33d-rootfs.mount: Deactivated successfully. May 14 18:06:21.783012 kubelet[2724]: I0514 18:06:21.782909 2724 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:06:21.869248 systemd[1]: Created slice kubepods-burstable-pod9fab24c1_843c_40db_97fd_96c58a50a664.slice - libcontainer container kubepods-burstable-pod9fab24c1_843c_40db_97fd_96c58a50a664.slice. 
May 14 18:06:21.892770 systemd[1]: Created slice kubepods-burstable-podc2d5d800_40c9_4438_a2fe_246b06f08733.slice - libcontainer container kubepods-burstable-podc2d5d800_40c9_4438_a2fe_246b06f08733.slice. May 14 18:06:21.900063 systemd[1]: Created slice kubepods-besteffort-pod253eda59_298c_4556_85be_196af6b421d6.slice - libcontainer container kubepods-besteffort-pod253eda59_298c_4556_85be_196af6b421d6.slice. May 14 18:06:21.914044 systemd[1]: Created slice kubepods-besteffort-pod40210f65_4147_4ad9_b76d_96768f3310a8.slice - libcontainer container kubepods-besteffort-pod40210f65_4147_4ad9_b76d_96768f3310a8.slice. May 14 18:06:21.926316 systemd[1]: Created slice kubepods-besteffort-pod5d3f6e28_37fa_44c3_a678_e3e913c44052.slice - libcontainer container kubepods-besteffort-pod5d3f6e28_37fa_44c3_a678_e3e913c44052.slice. May 14 18:06:21.933169 kubelet[2724]: I0514 18:06:21.933105 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf9lm\" (UniqueName: \"kubernetes.io/projected/253eda59-298c-4556-85be-196af6b421d6-kube-api-access-jf9lm\") pod \"calico-apiserver-5c66df7d94-5lcgl\" (UID: \"253eda59-298c-4556-85be-196af6b421d6\") " pod="calico-apiserver/calico-apiserver-5c66df7d94-5lcgl" May 14 18:06:21.933825 kubelet[2724]: I0514 18:06:21.933799 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40210f65-4147-4ad9-b76d-96768f3310a8-calico-apiserver-certs\") pod \"calico-apiserver-5c66df7d94-gk6v7\" (UID: \"40210f65-4147-4ad9-b76d-96768f3310a8\") " pod="calico-apiserver/calico-apiserver-5c66df7d94-gk6v7" May 14 18:06:21.934423 kubelet[2724]: I0514 18:06:21.933919 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b866z\" (UniqueName: \"kubernetes.io/projected/5d3f6e28-37fa-44c3-a678-e3e913c44052-kube-api-access-b866z\") pod 
\"calico-kube-controllers-6bd7bcbdff-92nwd\" (UID: \"5d3f6e28-37fa-44c3-a678-e3e913c44052\") " pod="calico-system/calico-kube-controllers-6bd7bcbdff-92nwd" May 14 18:06:21.934423 kubelet[2724]: I0514 18:06:21.934356 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fab24c1-843c-40db-97fd-96c58a50a664-config-volume\") pod \"coredns-6f6b679f8f-khs2x\" (UID: \"9fab24c1-843c-40db-97fd-96c58a50a664\") " pod="kube-system/coredns-6f6b679f8f-khs2x" May 14 18:06:21.934696 kubelet[2724]: I0514 18:06:21.934606 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/253eda59-298c-4556-85be-196af6b421d6-calico-apiserver-certs\") pod \"calico-apiserver-5c66df7d94-5lcgl\" (UID: \"253eda59-298c-4556-85be-196af6b421d6\") " pod="calico-apiserver/calico-apiserver-5c66df7d94-5lcgl" May 14 18:06:21.934877 kubelet[2724]: I0514 18:06:21.934765 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphnq\" (UniqueName: \"kubernetes.io/projected/c2d5d800-40c9-4438-a2fe-246b06f08733-kube-api-access-kphnq\") pod \"coredns-6f6b679f8f-2npwk\" (UID: \"c2d5d800-40c9-4438-a2fe-246b06f08733\") " pod="kube-system/coredns-6f6b679f8f-2npwk" May 14 18:06:21.934877 kubelet[2724]: I0514 18:06:21.934793 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzzbb\" (UniqueName: \"kubernetes.io/projected/40210f65-4147-4ad9-b76d-96768f3310a8-kube-api-access-dzzbb\") pod \"calico-apiserver-5c66df7d94-gk6v7\" (UID: \"40210f65-4147-4ad9-b76d-96768f3310a8\") " pod="calico-apiserver/calico-apiserver-5c66df7d94-gk6v7" May 14 18:06:21.935060 kubelet[2724]: I0514 18:06:21.935042 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d3f6e28-37fa-44c3-a678-e3e913c44052-tigera-ca-bundle\") pod \"calico-kube-controllers-6bd7bcbdff-92nwd\" (UID: \"5d3f6e28-37fa-44c3-a678-e3e913c44052\") " pod="calico-system/calico-kube-controllers-6bd7bcbdff-92nwd" May 14 18:06:21.935859 kubelet[2724]: I0514 18:06:21.935788 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d5d800-40c9-4438-a2fe-246b06f08733-config-volume\") pod \"coredns-6f6b679f8f-2npwk\" (UID: \"c2d5d800-40c9-4438-a2fe-246b06f08733\") " pod="kube-system/coredns-6f6b679f8f-2npwk" May 14 18:06:21.935859 kubelet[2724]: I0514 18:06:21.935829 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cgl\" (UniqueName: \"kubernetes.io/projected/9fab24c1-843c-40db-97fd-96c58a50a664-kube-api-access-h5cgl\") pod \"coredns-6f6b679f8f-khs2x\" (UID: \"9fab24c1-843c-40db-97fd-96c58a50a664\") " pod="kube-system/coredns-6f6b679f8f-khs2x" May 14 18:06:21.989261 kubelet[2724]: E0514 18:06:21.988848 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:21.992200 containerd[1545]: time="2025-05-14T18:06:21.992162722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:06:22.185278 kubelet[2724]: E0514 18:06:22.185177 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:22.187079 containerd[1545]: time="2025-05-14T18:06:22.186590150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khs2x,Uid:9fab24c1-843c-40db-97fd-96c58a50a664,Namespace:kube-system,Attempt:0,}" May 14 18:06:22.207358 
kubelet[2724]: E0514 18:06:22.206327 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:22.209396 containerd[1545]: time="2025-05-14T18:06:22.208166476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2npwk,Uid:c2d5d800-40c9-4438-a2fe-246b06f08733,Namespace:kube-system,Attempt:0,}" May 14 18:06:22.225411 containerd[1545]: time="2025-05-14T18:06:22.223657204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-5lcgl,Uid:253eda59-298c-4556-85be-196af6b421d6,Namespace:calico-apiserver,Attempt:0,}" May 14 18:06:22.278059 containerd[1545]: time="2025-05-14T18:06:22.277855876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd7bcbdff-92nwd,Uid:5d3f6e28-37fa-44c3-a678-e3e913c44052,Namespace:calico-system,Attempt:0,}" May 14 18:06:22.290458 containerd[1545]: time="2025-05-14T18:06:22.288645998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-gk6v7,Uid:40210f65-4147-4ad9-b76d-96768f3310a8,Namespace:calico-apiserver,Attempt:0,}" May 14 18:06:22.698587 containerd[1545]: time="2025-05-14T18:06:22.698515743Z" level=error msg="Failed to destroy network for sandbox \"1b40fbd4d768c4d2bbced2822a0204af6fa224ea3128f29b2e36a1023bffa080\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.708303 containerd[1545]: time="2025-05-14T18:06:22.708224630Z" level=error msg="Failed to destroy network for sandbox \"2b9af23a59585813228565afa938f8c49cce5701c1ea9db68eaa596fd028c9fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 18:06:22.732098 containerd[1545]: time="2025-05-14T18:06:22.732013942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd7bcbdff-92nwd,Uid:5d3f6e28-37fa-44c3-a678-e3e913c44052,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9af23a59585813228565afa938f8c49cce5701c1ea9db68eaa596fd028c9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.736753 containerd[1545]: time="2025-05-14T18:06:22.736392571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-5lcgl,Uid:253eda59-298c-4556-85be-196af6b421d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b40fbd4d768c4d2bbced2822a0204af6fa224ea3128f29b2e36a1023bffa080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.736753 containerd[1545]: time="2025-05-14T18:06:22.736612098Z" level=error msg="Failed to destroy network for sandbox \"9b7778ee5a773d12d9bc8b2de0f46e6dd434c3ce145cbda07c78532eb29e6d8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.739661 containerd[1545]: time="2025-05-14T18:06:22.739124750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-gk6v7,Uid:40210f65-4147-4ad9-b76d-96768f3310a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9b7778ee5a773d12d9bc8b2de0f46e6dd434c3ce145cbda07c78532eb29e6d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.742660 kubelet[2724]: E0514 18:06:22.741574 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9af23a59585813228565afa938f8c49cce5701c1ea9db68eaa596fd028c9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.742660 kubelet[2724]: E0514 18:06:22.741715 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9af23a59585813228565afa938f8c49cce5701c1ea9db68eaa596fd028c9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bd7bcbdff-92nwd" May 14 18:06:22.742660 kubelet[2724]: E0514 18:06:22.741748 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9af23a59585813228565afa938f8c49cce5701c1ea9db68eaa596fd028c9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bd7bcbdff-92nwd" May 14 18:06:22.742945 kubelet[2724]: E0514 18:06:22.741823 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bd7bcbdff-92nwd_calico-system(5d3f6e28-37fa-44c3-a678-e3e913c44052)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-6bd7bcbdff-92nwd_calico-system(5d3f6e28-37fa-44c3-a678-e3e913c44052)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b9af23a59585813228565afa938f8c49cce5701c1ea9db68eaa596fd028c9fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bd7bcbdff-92nwd" podUID="5d3f6e28-37fa-44c3-a678-e3e913c44052" May 14 18:06:22.742945 kubelet[2724]: E0514 18:06:22.742116 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7778ee5a773d12d9bc8b2de0f46e6dd434c3ce145cbda07c78532eb29e6d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.742945 kubelet[2724]: E0514 18:06:22.742224 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7778ee5a773d12d9bc8b2de0f46e6dd434c3ce145cbda07c78532eb29e6d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c66df7d94-gk6v7" May 14 18:06:22.743178 kubelet[2724]: E0514 18:06:22.742268 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7778ee5a773d12d9bc8b2de0f46e6dd434c3ce145cbda07c78532eb29e6d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5c66df7d94-gk6v7" May 14 18:06:22.743178 kubelet[2724]: E0514 18:06:22.742337 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c66df7d94-gk6v7_calico-apiserver(40210f65-4147-4ad9-b76d-96768f3310a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c66df7d94-gk6v7_calico-apiserver(40210f65-4147-4ad9-b76d-96768f3310a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b7778ee5a773d12d9bc8b2de0f46e6dd434c3ce145cbda07c78532eb29e6d8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c66df7d94-gk6v7" podUID="40210f65-4147-4ad9-b76d-96768f3310a8" May 14 18:06:22.745568 kubelet[2724]: E0514 18:06:22.743387 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b40fbd4d768c4d2bbced2822a0204af6fa224ea3128f29b2e36a1023bffa080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.745568 kubelet[2724]: E0514 18:06:22.743469 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b40fbd4d768c4d2bbced2822a0204af6fa224ea3128f29b2e36a1023bffa080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c66df7d94-5lcgl" May 14 18:06:22.745568 kubelet[2724]: E0514 18:06:22.743507 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"1b40fbd4d768c4d2bbced2822a0204af6fa224ea3128f29b2e36a1023bffa080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c66df7d94-5lcgl" May 14 18:06:22.745863 kubelet[2724]: E0514 18:06:22.745138 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c66df7d94-5lcgl_calico-apiserver(253eda59-298c-4556-85be-196af6b421d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c66df7d94-5lcgl_calico-apiserver(253eda59-298c-4556-85be-196af6b421d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b40fbd4d768c4d2bbced2822a0204af6fa224ea3128f29b2e36a1023bffa080\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c66df7d94-5lcgl" podUID="253eda59-298c-4556-85be-196af6b421d6" May 14 18:06:22.758963 containerd[1545]: time="2025-05-14T18:06:22.758719529Z" level=error msg="Failed to destroy network for sandbox \"5dc551ca36adcc4e387226525e4b21410abfb43aae24e8b110e4a815794dd884\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.761716 containerd[1545]: time="2025-05-14T18:06:22.761211929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2npwk,Uid:c2d5d800-40c9-4438-a2fe-246b06f08733,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc551ca36adcc4e387226525e4b21410abfb43aae24e8b110e4a815794dd884\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.761934 kubelet[2724]: E0514 18:06:22.761556 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc551ca36adcc4e387226525e4b21410abfb43aae24e8b110e4a815794dd884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.763096 kubelet[2724]: E0514 18:06:22.761692 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc551ca36adcc4e387226525e4b21410abfb43aae24e8b110e4a815794dd884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2npwk" May 14 18:06:22.763096 kubelet[2724]: E0514 18:06:22.762276 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc551ca36adcc4e387226525e4b21410abfb43aae24e8b110e4a815794dd884\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2npwk" May 14 18:06:22.763096 kubelet[2724]: E0514 18:06:22.762402 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2npwk_kube-system(c2d5d800-40c9-4438-a2fe-246b06f08733)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2npwk_kube-system(c2d5d800-40c9-4438-a2fe-246b06f08733)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"5dc551ca36adcc4e387226525e4b21410abfb43aae24e8b110e4a815794dd884\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2npwk" podUID="c2d5d800-40c9-4438-a2fe-246b06f08733" May 14 18:06:22.775299 containerd[1545]: time="2025-05-14T18:06:22.775234481Z" level=error msg="Failed to destroy network for sandbox \"7011615ef5dd90815b5c6f360551a8c5c8840a4dff22c19082b4459a41c31593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.776761 containerd[1545]: time="2025-05-14T18:06:22.776697320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khs2x,Uid:9fab24c1-843c-40db-97fd-96c58a50a664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7011615ef5dd90815b5c6f360551a8c5c8840a4dff22c19082b4459a41c31593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.779594 kubelet[2724]: E0514 18:06:22.777103 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7011615ef5dd90815b5c6f360551a8c5c8840a4dff22c19082b4459a41c31593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.779594 kubelet[2724]: E0514 18:06:22.777196 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"7011615ef5dd90815b5c6f360551a8c5c8840a4dff22c19082b4459a41c31593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-khs2x" May 14 18:06:22.779594 kubelet[2724]: E0514 18:06:22.777225 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7011615ef5dd90815b5c6f360551a8c5c8840a4dff22c19082b4459a41c31593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-khs2x" May 14 18:06:22.779795 kubelet[2724]: E0514 18:06:22.777305 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-khs2x_kube-system(9fab24c1-843c-40db-97fd-96c58a50a664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-khs2x_kube-system(9fab24c1-843c-40db-97fd-96c58a50a664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7011615ef5dd90815b5c6f360551a8c5c8840a4dff22c19082b4459a41c31593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-khs2x" podUID="9fab24c1-843c-40db-97fd-96c58a50a664" May 14 18:06:22.820047 systemd[1]: Created slice kubepods-besteffort-pod84f10dc4_cc8f_4f62_914c_3e3369d05915.slice - libcontainer container kubepods-besteffort-pod84f10dc4_cc8f_4f62_914c_3e3369d05915.slice. 
May 14 18:06:22.826384 containerd[1545]: time="2025-05-14T18:06:22.824713282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kb4r2,Uid:84f10dc4-cc8f-4f62-914c-3e3369d05915,Namespace:calico-system,Attempt:0,}" May 14 18:06:22.922787 containerd[1545]: time="2025-05-14T18:06:22.922616503Z" level=error msg="Failed to destroy network for sandbox \"dbae0a32b985c68c84034c33a9f74fc878d59f569726531f60de85f877be2efa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.924091 containerd[1545]: time="2025-05-14T18:06:22.923958380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kb4r2,Uid:84f10dc4-cc8f-4f62-914c-3e3369d05915,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbae0a32b985c68c84034c33a9f74fc878d59f569726531f60de85f877be2efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.924535 kubelet[2724]: E0514 18:06:22.924478 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbae0a32b985c68c84034c33a9f74fc878d59f569726531f60de85f877be2efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:06:22.924840 kubelet[2724]: E0514 18:06:22.924738 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbae0a32b985c68c84034c33a9f74fc878d59f569726531f60de85f877be2efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:22.925026 kubelet[2724]: E0514 18:06:22.924804 2724 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbae0a32b985c68c84034c33a9f74fc878d59f569726531f60de85f877be2efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kb4r2" May 14 18:06:22.925212 kubelet[2724]: E0514 18:06:22.925113 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kb4r2_calico-system(84f10dc4-cc8f-4f62-914c-3e3369d05915)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kb4r2_calico-system(84f10dc4-cc8f-4f62-914c-3e3369d05915)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbae0a32b985c68c84034c33a9f74fc878d59f569726531f60de85f877be2efa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kb4r2" podUID="84f10dc4-cc8f-4f62-914c-3e3369d05915" May 14 18:06:24.106218 kubelet[2724]: I0514 18:06:24.105180 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:24.106218 kubelet[2724]: E0514 18:06:24.105607 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:24.996651 kubelet[2724]: E0514 18:06:24.996606 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:28.420518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638100350.mount: Deactivated successfully. May 14 18:06:28.502064 containerd[1545]: time="2025-05-14T18:06:28.501893502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:28.503758 containerd[1545]: time="2025-05-14T18:06:28.503701120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 18:06:28.504738 containerd[1545]: time="2025-05-14T18:06:28.504647876Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:28.509791 containerd[1545]: time="2025-05-14T18:06:28.509586086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:28.510950 containerd[1545]: time="2025-05-14T18:06:28.510883910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.518161662s" May 14 18:06:28.510950 containerd[1545]: time="2025-05-14T18:06:28.510942359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 18:06:28.547660 containerd[1545]: time="2025-05-14T18:06:28.547613740Z" level=info msg="CreateContainer within sandbox 
\"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:06:28.560337 containerd[1545]: time="2025-05-14T18:06:28.560279108Z" level=info msg="Container 7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:28.567925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049693051.mount: Deactivated successfully. May 14 18:06:28.663059 containerd[1545]: time="2025-05-14T18:06:28.662960712Z" level=info msg="CreateContainer within sandbox \"b2b2e54ddfd90b380d3cdd59ce91e4effbe6dad6897fb92de7cc1ddf22765704\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\"" May 14 18:06:28.664012 containerd[1545]: time="2025-05-14T18:06:28.663938811Z" level=info msg="StartContainer for \"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\"" May 14 18:06:28.666285 containerd[1545]: time="2025-05-14T18:06:28.666239500Z" level=info msg="connecting to shim 7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490" address="unix:///run/containerd/s/d317632ae470590b543485cf0c0bed7f5804134a8f4cd9afd0cc5c4e031e0e03" protocol=ttrpc version=3 May 14 18:06:28.716320 systemd[1]: Started cri-containerd-7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490.scope - libcontainer container 7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490. 
May 14 18:06:28.793397 containerd[1545]: time="2025-05-14T18:06:28.793246328Z" level=info msg="StartContainer for \"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\" returns successfully" May 14 18:06:29.019987 kubelet[2724]: E0514 18:06:29.018403 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:29.108562 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 18:06:29.108723 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 18:06:30.020027 kubelet[2724]: I0514 18:06:30.019711 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:30.021176 kubelet[2724]: E0514 18:06:30.021128 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:31.075061 kubelet[2724]: I0514 18:06:31.074997 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:31.088364 kubelet[2724]: E0514 18:06:31.088101 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:31.143601 containerd[1545]: time="2025-05-14T18:06:31.143498927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\" id:\"b82b8bc616df22d157f5c2da2ed87aa5acec44bc9a610b60828b44c9d5bfca11\" pid:3753 exit_status:1 exited_at:{seconds:1747245991 nanos:142497598}" May 14 18:06:31.551783 containerd[1545]: time="2025-05-14T18:06:31.551684624Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\" id:\"ffe1d8441f39e3e0d8c8b4b4ebad95661f14a427a2c1c19fa44bd87fc635fe70\" pid:3842 exit_status:1 exited_at:{seconds:1747245991 nanos:550654392}" May 14 18:06:31.803509 systemd-networkd[1440]: vxlan.calico: Link UP May 14 18:06:31.803521 systemd-networkd[1440]: vxlan.calico: Gained carrier May 14 18:06:33.634349 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL May 14 18:06:33.810730 kubelet[2724]: E0514 18:06:33.810647 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:33.812277 containerd[1545]: time="2025-05-14T18:06:33.812213883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-gk6v7,Uid:40210f65-4147-4ad9-b76d-96768f3310a8,Namespace:calico-apiserver,Attempt:0,}" May 14 18:06:33.813741 containerd[1545]: time="2025-05-14T18:06:33.813548448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2npwk,Uid:c2d5d800-40c9-4438-a2fe-246b06f08733,Namespace:kube-system,Attempt:0,}" May 14 18:06:34.225052 systemd-networkd[1440]: calidac75ac4e7c: Link UP May 14 18:06:34.232272 systemd-networkd[1440]: calidac75ac4e7c: Gained carrier May 14 18:06:34.268166 kubelet[2724]: I0514 18:06:34.267304 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t5qgr" podStartSLOduration=6.415295378 podStartE2EDuration="23.267256071s" podCreationTimestamp="2025-05-14 18:06:11 +0000 UTC" firstStartedPulling="2025-05-14 18:06:11.660057063 +0000 UTC m=+12.102496010" lastFinishedPulling="2025-05-14 18:06:28.512017767 +0000 UTC m=+28.954456703" observedRunningTime="2025-05-14 18:06:29.039462086 +0000 UTC m=+29.481901046" watchObservedRunningTime="2025-05-14 18:06:34.267256071 +0000 UTC m=+34.709695042" May 14 18:06:34.274145 containerd[1545]: 2025-05-14 
18:06:33.918 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0 calico-apiserver-5c66df7d94- calico-apiserver 40210f65-4147-4ad9-b76d-96768f3310a8 737 0 2025-05-14 18:06:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c66df7d94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334.0.0-a-4c74b6421c calico-apiserver-5c66df7d94-gk6v7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidac75ac4e7c [] []}} ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-" May 14 18:06:34.274145 containerd[1545]: 2025-05-14 18:06:33.919 [INFO][3943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.274145 containerd[1545]: 2025-05-14 18:06:34.118 [INFO][3969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" HandleID="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.143 [INFO][3969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" 
HandleID="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103d00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334.0.0-a-4c74b6421c", "pod":"calico-apiserver-5c66df7d94-gk6v7", "timestamp":"2025-05-14 18:06:34.118118281 +0000 UTC"}, Hostname:"ci-4334.0.0-a-4c74b6421c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.143 [INFO][3969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.144 [INFO][3969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.144 [INFO][3969] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-4c74b6421c' May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.151 [INFO][3969] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.163 [INFO][3969] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.173 [INFO][3969] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.176 [INFO][3969] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275061 containerd[1545]: 2025-05-14 18:06:34.182 [INFO][3969] ipam/ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.182 [INFO][3969] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.186 [INFO][3969] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.194 [INFO][3969] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.203 [INFO][3969] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.65/26] block=192.168.13.64/26 handle="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.203 [INFO][3969] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.65/26] handle="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.204 [INFO][3969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:06:34.275828 containerd[1545]: 2025-05-14 18:06:34.204 [INFO][3969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.65/26] IPv6=[] ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" HandleID="k8s-pod-network.614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.276910 containerd[1545]: 2025-05-14 18:06:34.211 [INFO][3943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0", GenerateName:"calico-apiserver-5c66df7d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"40210f65-4147-4ad9-b76d-96768f3310a8", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c66df7d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"", Pod:"calico-apiserver-5c66df7d94-gk6v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidac75ac4e7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:34.277185 containerd[1545]: 2025-05-14 18:06:34.212 [INFO][3943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.65/32] ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.277185 containerd[1545]: 2025-05-14 18:06:34.212 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidac75ac4e7c ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.277185 containerd[1545]: 2025-05-14 18:06:34.234 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.277749 containerd[1545]: 2025-05-14 18:06:34.237 [INFO][3943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0", GenerateName:"calico-apiserver-5c66df7d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"40210f65-4147-4ad9-b76d-96768f3310a8", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c66df7d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f", Pod:"calico-apiserver-5c66df7d94-gk6v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidac75ac4e7c", MAC:"8e:a7:6c:01:a3:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:34.278128 containerd[1545]: 2025-05-14 18:06:34.263 [INFO][3943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-gk6v7" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--gk6v7-eth0" May 14 18:06:34.385368 containerd[1545]: time="2025-05-14T18:06:34.385299275Z" level=info msg="connecting to shim 
614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f" address="unix:///run/containerd/s/b524fdf45e7153675f70608c3c7aaa74b311e41e5a0d14cac75eb207f39388eb" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:34.456301 systemd-networkd[1440]: caliad458f4e742: Link UP May 14 18:06:34.461026 systemd-networkd[1440]: caliad458f4e742: Gained carrier May 14 18:06:34.513100 containerd[1545]: 2025-05-14 18:06:33.917 [INFO][3949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0 coredns-6f6b679f8f- kube-system c2d5d800-40c9-4438-a2fe-246b06f08733 736 0 2025-05-14 18:06:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-4c74b6421c coredns-6f6b679f8f-2npwk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad458f4e742 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-" May 14 18:06:34.513100 containerd[1545]: 2025-05-14 18:06:33.918 [INFO][3949] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.513100 containerd[1545]: 2025-05-14 18:06:34.118 [INFO][3968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" HandleID="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" 
Workload="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.143 [INFO][3968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" HandleID="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Workload="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-4c74b6421c", "pod":"coredns-6f6b679f8f-2npwk", "timestamp":"2025-05-14 18:06:34.118076234 +0000 UTC"}, Hostname:"ci-4334.0.0-a-4c74b6421c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.143 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.204 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.204 [INFO][3968] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-4c74b6421c' May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.258 [INFO][3968] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.282 [INFO][3968] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.302 [INFO][3968] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.317 [INFO][3968] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513387 containerd[1545]: 2025-05-14 18:06:34.329 [INFO][3968] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.330 [INFO][3968] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.335 [INFO][3968] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95 May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.371 [INFO][3968] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.408 [INFO][3968] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.13.66/26] block=192.168.13.64/26 handle="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.408 [INFO][3968] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.66/26] handle="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.408 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:06:34.513740 containerd[1545]: 2025-05-14 18:06:34.408 [INFO][3968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.66/26] IPv6=[] ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" HandleID="k8s-pod-network.8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Workload="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.525633 containerd[1545]: 2025-05-14 18:06:34.434 [INFO][3949] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2d5d800-40c9-4438-a2fe-246b06f08733", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"", Pod:"coredns-6f6b679f8f-2npwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad458f4e742", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:34.525633 containerd[1545]: 2025-05-14 18:06:34.435 [INFO][3949] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.66/32] ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.525633 containerd[1545]: 2025-05-14 18:06:34.435 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad458f4e742 ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.525633 containerd[1545]: 2025-05-14 18:06:34.460 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.525633 containerd[1545]: 2025-05-14 18:06:34.465 [INFO][3949] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c2d5d800-40c9-4438-a2fe-246b06f08733", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95", Pod:"coredns-6f6b679f8f-2npwk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad458f4e742", MAC:"de:47:95:4d:32:6e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:34.525633 containerd[1545]: 2025-05-14 18:06:34.492 [INFO][3949] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" Namespace="kube-system" Pod="coredns-6f6b679f8f-2npwk" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--2npwk-eth0" May 14 18:06:34.527304 systemd[1]: Started cri-containerd-614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f.scope - libcontainer container 614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f. May 14 18:06:34.585321 containerd[1545]: time="2025-05-14T18:06:34.585244050Z" level=info msg="connecting to shim 8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95" address="unix:///run/containerd/s/518bcdc506debb4e6bb848050d83b5698ef486a377f226147457c15a372d4fa1" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:34.662347 systemd[1]: Started cri-containerd-8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95.scope - libcontainer container 8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95. 
May 14 18:06:34.768925 containerd[1545]: time="2025-05-14T18:06:34.768244756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2npwk,Uid:c2d5d800-40c9-4438-a2fe-246b06f08733,Namespace:kube-system,Attempt:0,} returns sandbox id \"8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95\"" May 14 18:06:34.769417 containerd[1545]: time="2025-05-14T18:06:34.768661300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-gk6v7,Uid:40210f65-4147-4ad9-b76d-96768f3310a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f\"" May 14 18:06:34.771964 kubelet[2724]: E0514 18:06:34.771917 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:34.774500 containerd[1545]: time="2025-05-14T18:06:34.774449447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:06:34.778369 containerd[1545]: time="2025-05-14T18:06:34.778304131Z" level=info msg="CreateContainer within sandbox \"8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:06:34.799300 containerd[1545]: time="2025-05-14T18:06:34.799229013Z" level=info msg="Container b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:34.806421 containerd[1545]: time="2025-05-14T18:06:34.806362039Z" level=info msg="CreateContainer within sandbox \"8317209b80a914a6ba8bff832f7068b0572463af00609a68ad5417d667a5bc95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63\"" May 14 18:06:34.807150 containerd[1545]: time="2025-05-14T18:06:34.807115326Z" level=info msg="StartContainer for 
\"b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63\"" May 14 18:06:34.809892 containerd[1545]: time="2025-05-14T18:06:34.809760860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-5lcgl,Uid:253eda59-298c-4556-85be-196af6b421d6,Namespace:calico-apiserver,Attempt:0,}" May 14 18:06:34.811312 containerd[1545]: time="2025-05-14T18:06:34.810684985Z" level=info msg="connecting to shim b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63" address="unix:///run/containerd/s/518bcdc506debb4e6bb848050d83b5698ef486a377f226147457c15a372d4fa1" protocol=ttrpc version=3 May 14 18:06:34.861842 systemd[1]: Started cri-containerd-b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63.scope - libcontainer container b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63. May 14 18:06:34.943319 containerd[1545]: time="2025-05-14T18:06:34.943243737Z" level=info msg="StartContainer for \"b050dd8605732306ac49225f23d9b9c1d7c94f9f89f002cc8bba57a4cf288f63\" returns successfully" May 14 18:06:35.073420 systemd-networkd[1440]: calic29638d17ad: Link UP May 14 18:06:35.077168 systemd-networkd[1440]: calic29638d17ad: Gained carrier May 14 18:06:35.087574 kubelet[2724]: E0514 18:06:35.087483 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:35.116799 kubelet[2724]: I0514 18:06:35.115803 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2npwk" podStartSLOduration=32.115773848 podStartE2EDuration="32.115773848s" podCreationTimestamp="2025-05-14 18:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:35.110884762 +0000 UTC m=+35.553323713" watchObservedRunningTime="2025-05-14 18:06:35.115773848 +0000 UTC 
m=+35.558212807" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.888 [INFO][4104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0 calico-apiserver-5c66df7d94- calico-apiserver 253eda59-298c-4556-85be-196af6b421d6 738 0 2025-05-14 18:06:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c66df7d94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334.0.0-a-4c74b6421c calico-apiserver-5c66df7d94-5lcgl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic29638d17ad [] []}} ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.888 [INFO][4104] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.971 [INFO][4136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" HandleID="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.991 [INFO][4136] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" HandleID="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fe620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334.0.0-a-4c74b6421c", "pod":"calico-apiserver-5c66df7d94-5lcgl", "timestamp":"2025-05-14 18:06:34.971087842 +0000 UTC"}, Hostname:"ci-4334.0.0-a-4c74b6421c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.991 [INFO][4136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.991 [INFO][4136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.991 [INFO][4136] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-4c74b6421c' May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:34.996 [INFO][4136] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.016 [INFO][4136] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.027 [INFO][4136] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.032 [INFO][4136] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.038 [INFO][4136] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.038 [INFO][4136] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.042 [INFO][4136] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.050 [INFO][4136] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.063 [INFO][4136] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.13.67/26] block=192.168.13.64/26 handle="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.063 [INFO][4136] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.67/26] handle="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.063 [INFO][4136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:06:35.122145 containerd[1545]: 2025-05-14 18:06:35.063 [INFO][4136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.67/26] IPv6=[] ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" HandleID="k8s-pod-network.46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.125498 containerd[1545]: 2025-05-14 18:06:35.068 [INFO][4104] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0", GenerateName:"calico-apiserver-5c66df7d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"253eda59-298c-4556-85be-196af6b421d6", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5c66df7d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"", Pod:"calico-apiserver-5c66df7d94-5lcgl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic29638d17ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:35.125498 containerd[1545]: 2025-05-14 18:06:35.068 [INFO][4104] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.67/32] ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.125498 containerd[1545]: 2025-05-14 18:06:35.068 [INFO][4104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic29638d17ad ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.125498 containerd[1545]: 2025-05-14 18:06:35.073 [INFO][4104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" 
WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.125498 containerd[1545]: 2025-05-14 18:06:35.075 [INFO][4104] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0", GenerateName:"calico-apiserver-5c66df7d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"253eda59-298c-4556-85be-196af6b421d6", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c66df7d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d", Pod:"calico-apiserver-5c66df7d94-5lcgl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic29638d17ad", MAC:"1e:48:73:d2:48:75", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:35.125498 containerd[1545]: 2025-05-14 18:06:35.111 [INFO][4104] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" Namespace="calico-apiserver" Pod="calico-apiserver-5c66df7d94-5lcgl" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--apiserver--5c66df7d94--5lcgl-eth0" May 14 18:06:35.191020 containerd[1545]: time="2025-05-14T18:06:35.189155017Z" level=info msg="connecting to shim 46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d" address="unix:///run/containerd/s/7f105a70d2e1a248b92ecc4fe4f60662af10aee521e5ffc36770dca5d4778992" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:35.258342 systemd[1]: Started cri-containerd-46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d.scope - libcontainer container 46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d. May 14 18:06:35.298244 systemd-networkd[1440]: calidac75ac4e7c: Gained IPv6LL May 14 18:06:35.373473 containerd[1545]: time="2025-05-14T18:06:35.372522810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c66df7d94-5lcgl,Uid:253eda59-298c-4556-85be-196af6b421d6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d\"" May 14 18:06:35.682183 systemd-networkd[1440]: caliad458f4e742: Gained IPv6LL May 14 18:06:36.095921 kubelet[2724]: E0514 18:06:36.094520 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:36.258386 systemd-networkd[1440]: calic29638d17ad: Gained IPv6LL May 14 18:06:36.811211 kubelet[2724]: E0514 18:06:36.811153 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:36.813134 containerd[1545]: time="2025-05-14T18:06:36.812948086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd7bcbdff-92nwd,Uid:5d3f6e28-37fa-44c3-a678-e3e913c44052,Namespace:calico-system,Attempt:0,}" May 14 18:06:36.816113 containerd[1545]: time="2025-05-14T18:06:36.816013129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khs2x,Uid:9fab24c1-843c-40db-97fd-96c58a50a664,Namespace:kube-system,Attempt:0,}" May 14 18:06:37.104095 kubelet[2724]: E0514 18:06:37.103297 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:37.188332 systemd-networkd[1440]: cali32023a19846: Link UP May 14 18:06:37.190002 systemd-networkd[1440]: cali32023a19846: Gained carrier May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:36.954 [INFO][4236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0 calico-kube-controllers-6bd7bcbdff- calico-system 5d3f6e28-37fa-44c3-a678-e3e913c44052 741 0 2025-05-14 18:06:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bd7bcbdff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334.0.0-a-4c74b6421c calico-kube-controllers-6bd7bcbdff-92nwd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali32023a19846 [] []}} ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" 
WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:36.955 [INFO][4236] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.040 [INFO][4261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" HandleID="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.075 [INFO][4261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" HandleID="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-4c74b6421c", "pod":"calico-kube-controllers-6bd7bcbdff-92nwd", "timestamp":"2025-05-14 18:06:37.040650211 +0000 UTC"}, Hostname:"ci-4334.0.0-a-4c74b6421c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.075 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.077 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.078 [INFO][4261] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-4c74b6421c' May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.089 [INFO][4261] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.107 [INFO][4261] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.123 [INFO][4261] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.130 [INFO][4261] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.137 [INFO][4261] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.138 [INFO][4261] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.143 [INFO][4261] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5 May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.157 [INFO][4261] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" 
host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.169 [INFO][4261] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.68/26] block=192.168.13.64/26 handle="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.170 [INFO][4261] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.68/26] handle="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.170 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:06:37.239105 containerd[1545]: 2025-05-14 18:06:37.170 [INFO][4261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.68/26] IPv6=[] ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" HandleID="k8s-pod-network.10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Workload="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.239903 containerd[1545]: 2025-05-14 18:06:37.177 [INFO][4236] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0", GenerateName:"calico-kube-controllers-6bd7bcbdff-", Namespace:"calico-system", SelfLink:"", UID:"5d3f6e28-37fa-44c3-a678-e3e913c44052", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 11, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bd7bcbdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"", Pod:"calico-kube-controllers-6bd7bcbdff-92nwd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32023a19846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:37.239903 containerd[1545]: 2025-05-14 18:06:37.177 [INFO][4236] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.68/32] ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.239903 containerd[1545]: 2025-05-14 18:06:37.177 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32023a19846 ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.239903 containerd[1545]: 2025-05-14 18:06:37.190 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.239903 containerd[1545]: 2025-05-14 18:06:37.193 [INFO][4236] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0", GenerateName:"calico-kube-controllers-6bd7bcbdff-", Namespace:"calico-system", SelfLink:"", UID:"5d3f6e28-37fa-44c3-a678-e3e913c44052", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bd7bcbdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5", Pod:"calico-kube-controllers-6bd7bcbdff-92nwd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali32023a19846", MAC:"ea:cf:70:76:df:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:37.239903 containerd[1545]: 2025-05-14 18:06:37.232 [INFO][4236] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" Namespace="calico-system" Pod="calico-kube-controllers-6bd7bcbdff-92nwd" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-calico--kube--controllers--6bd7bcbdff--92nwd-eth0" May 14 18:06:37.311481 containerd[1545]: time="2025-05-14T18:06:37.309683851Z" level=info msg="connecting to shim 10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5" address="unix:///run/containerd/s/959a4d0e8c9e0ba84e6a8d66d0f2f5d48dec7fa6060bd2801942874507be3548" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:37.352313 systemd-networkd[1440]: cali05ed5f20f68: Link UP May 14 18:06:37.356837 systemd-networkd[1440]: cali05ed5f20f68: Gained carrier May 14 18:06:37.411270 systemd[1]: Started cri-containerd-10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5.scope - libcontainer container 10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5. 
May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:36.954 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0 coredns-6f6b679f8f- kube-system 9fab24c1-843c-40db-97fd-96c58a50a664 732 0 2025-05-14 18:06:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334.0.0-a-4c74b6421c coredns-6f6b679f8f-khs2x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali05ed5f20f68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:36.954 [INFO][4233] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.066 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" HandleID="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Workload="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.094 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" HandleID="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" 
Workload="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a7d30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334.0.0-a-4c74b6421c", "pod":"coredns-6f6b679f8f-khs2x", "timestamp":"2025-05-14 18:06:37.066098137 +0000 UTC"}, Hostname:"ci-4334.0.0-a-4c74b6421c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.094 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.170 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.170 [INFO][4266] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-4c74b6421c' May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.194 [INFO][4266] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.225 [INFO][4266] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.247 [INFO][4266] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.252 [INFO][4266] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.263 [INFO][4266] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 
containerd[1545]: 2025-05-14 18:06:37.263 [INFO][4266] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.269 [INFO][4266] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9 May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.289 [INFO][4266] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.319 [INFO][4266] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.69/26] block=192.168.13.64/26 handle="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.319 [INFO][4266] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.69/26] handle="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.319 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:06:37.424462 containerd[1545]: 2025-05-14 18:06:37.319 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.69/26] IPv6=[] ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" HandleID="k8s-pod-network.df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Workload="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.425255 containerd[1545]: 2025-05-14 18:06:37.338 [INFO][4233] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9fab24c1-843c-40db-97fd-96c58a50a664", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"", Pod:"coredns-6f6b679f8f-khs2x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ed5f20f68", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:37.425255 containerd[1545]: 2025-05-14 18:06:37.341 [INFO][4233] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.69/32] ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.425255 containerd[1545]: 2025-05-14 18:06:37.341 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05ed5f20f68 ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.425255 containerd[1545]: 2025-05-14 18:06:37.358 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.425255 containerd[1545]: 2025-05-14 18:06:37.366 [INFO][4233] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9fab24c1-843c-40db-97fd-96c58a50a664", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9", Pod:"coredns-6f6b679f8f-khs2x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ed5f20f68", MAC:"3e:29:6a:83:f6:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:37.425255 containerd[1545]: 2025-05-14 18:06:37.407 [INFO][4233] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" Namespace="kube-system" Pod="coredns-6f6b679f8f-khs2x" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-coredns--6f6b679f8f--khs2x-eth0" May 14 18:06:37.512577 containerd[1545]: time="2025-05-14T18:06:37.512504812Z" level=info msg="connecting to shim df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9" address="unix:///run/containerd/s/473567ab6c8c2bf6d33c58585ece89a1cca528ed174ffe1dbe93dcd7458eabc5" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:37.563347 systemd[1]: Started cri-containerd-df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9.scope - libcontainer container df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9. May 14 18:06:37.568273 containerd[1545]: time="2025-05-14T18:06:37.568193753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd7bcbdff-92nwd,Uid:5d3f6e28-37fa-44c3-a678-e3e913c44052,Namespace:calico-system,Attempt:0,} returns sandbox id \"10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5\"" May 14 18:06:37.688090 containerd[1545]: time="2025-05-14T18:06:37.688018245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khs2x,Uid:9fab24c1-843c-40db-97fd-96c58a50a664,Namespace:kube-system,Attempt:0,} returns sandbox id \"df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9\"" May 14 18:06:37.689265 kubelet[2724]: E0514 18:06:37.689184 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:37.698537 containerd[1545]: time="2025-05-14T18:06:37.698475709Z" level=info msg="CreateContainer within sandbox \"df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:06:37.731599 containerd[1545]: 
time="2025-05-14T18:06:37.731537762Z" level=info msg="Container 58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:37.756920 containerd[1545]: time="2025-05-14T18:06:37.756736574Z" level=info msg="CreateContainer within sandbox \"df03940df7af48de8166ea16e240aec429648711cbe2e6a8f2aa19ce4ee9bae9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa\"" May 14 18:06:37.758146 containerd[1545]: time="2025-05-14T18:06:37.758087790Z" level=info msg="StartContainer for \"58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa\"" May 14 18:06:37.760334 containerd[1545]: time="2025-05-14T18:06:37.760011942Z" level=info msg="connecting to shim 58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa" address="unix:///run/containerd/s/473567ab6c8c2bf6d33c58585ece89a1cca528ed174ffe1dbe93dcd7458eabc5" protocol=ttrpc version=3 May 14 18:06:37.806820 systemd[1]: Started cri-containerd-58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa.scope - libcontainer container 58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa. 
May 14 18:06:37.818760 containerd[1545]: time="2025-05-14T18:06:37.817089533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kb4r2,Uid:84f10dc4-cc8f-4f62-914c-3e3369d05915,Namespace:calico-system,Attempt:0,}" May 14 18:06:37.927545 containerd[1545]: time="2025-05-14T18:06:37.927472819Z" level=info msg="StartContainer for \"58aeda9eabb5c3d5f5bb1ab45c110d25663ff70681a2861d38c3251cc1fd5ffa\" returns successfully" May 14 18:06:38.130191 kubelet[2724]: E0514 18:06:38.128789 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:38.153305 kubelet[2724]: E0514 18:06:38.153080 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:38.171396 kubelet[2724]: I0514 18:06:38.171269 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-khs2x" podStartSLOduration=35.171240656 podStartE2EDuration="35.171240656s" podCreationTimestamp="2025-05-14 18:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:06:38.170490597 +0000 UTC m=+38.612929606" watchObservedRunningTime="2025-05-14 18:06:38.171240656 +0000 UTC m=+38.613679617" May 14 18:06:38.252461 systemd-networkd[1440]: cali2e9eb38060f: Link UP May 14 18:06:38.254397 systemd-networkd[1440]: cali2e9eb38060f: Gained carrier May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.009 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0 csi-node-driver- calico-system 84f10dc4-cc8f-4f62-914c-3e3369d05915 648 0 2025-05-14 18:06:11 +0000 
UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334.0.0-a-4c74b6421c csi-node-driver-kb4r2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e9eb38060f [] []}} ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.009 [INFO][4418] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.080 [INFO][4447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" HandleID="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Workload="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.110 [INFO][4447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" HandleID="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Workload="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033cd50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334.0.0-a-4c74b6421c", "pod":"csi-node-driver-kb4r2", "timestamp":"2025-05-14 
18:06:38.080307068 +0000 UTC"}, Hostname:"ci-4334.0.0-a-4c74b6421c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.110 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.110 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.111 [INFO][4447] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334.0.0-a-4c74b6421c' May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.118 [INFO][4447] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.128 [INFO][4447] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.158 [INFO][4447] ipam/ipam.go 489: Trying affinity for 192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.165 [INFO][4447] ipam/ipam.go 155: Attempting to load block cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.174 [INFO][4447] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.175 [INFO][4447] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 
2025-05-14 18:06:38.188 [INFO][4447] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35 May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.202 [INFO][4447] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.226 [INFO][4447] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.13.70/26] block=192.168.13.64/26 handle="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.228 [INFO][4447] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.13.70/26] handle="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" host="ci-4334.0.0-a-4c74b6421c" May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.229 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:06:38.305275 containerd[1545]: 2025-05-14 18:06:38.229 [INFO][4447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.70/26] IPv6=[] ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" HandleID="k8s-pod-network.d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Workload="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.307749 containerd[1545]: 2025-05-14 18:06:38.242 [INFO][4418] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84f10dc4-cc8f-4f62-914c-3e3369d05915", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"", Pod:"csi-node-driver-kb4r2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.70/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e9eb38060f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:38.307749 containerd[1545]: 2025-05-14 18:06:38.242 [INFO][4418] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.13.70/32] ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.307749 containerd[1545]: 2025-05-14 18:06:38.242 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e9eb38060f ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.307749 containerd[1545]: 2025-05-14 18:06:38.254 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.307749 containerd[1545]: 2025-05-14 18:06:38.259 [INFO][4418] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"84f10dc4-cc8f-4f62-914c-3e3369d05915", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334.0.0-a-4c74b6421c", ContainerID:"d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35", Pod:"csi-node-driver-kb4r2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e9eb38060f", MAC:"52:99:e2:7e:d4:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:06:38.307749 containerd[1545]: 2025-05-14 18:06:38.289 [INFO][4418] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" Namespace="calico-system" Pod="csi-node-driver-kb4r2" WorkloadEndpoint="ci--4334.0.0--a--4c74b6421c-k8s-csi--node--driver--kb4r2-eth0" May 14 18:06:38.390127 containerd[1545]: time="2025-05-14T18:06:38.389118564Z" level=info msg="connecting to shim d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35" address="unix:///run/containerd/s/0c737662c2bf4f877f93a49e434eddf14ecd59fe48e197a8975a3246c3da0e4b" namespace=k8s.io protocol=ttrpc version=3 May 14 18:06:38.459108 containerd[1545]: 
time="2025-05-14T18:06:38.458802969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 18:06:38.469456 containerd[1545]: time="2025-05-14T18:06:38.469391289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:38.471774 systemd[1]: Started cri-containerd-d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35.scope - libcontainer container d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35. May 14 18:06:38.483208 containerd[1545]: time="2025-05-14T18:06:38.481327775Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:38.487029 containerd[1545]: time="2025-05-14T18:06:38.486123394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:38.487632 containerd[1545]: time="2025-05-14T18:06:38.487002169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.70547793s" May 14 18:06:38.487873 containerd[1545]: time="2025-05-14T18:06:38.487831564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:06:38.492858 containerd[1545]: time="2025-05-14T18:06:38.492614015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 
14 18:06:38.500717 containerd[1545]: time="2025-05-14T18:06:38.500217305Z" level=info msg="CreateContainer within sandbox \"614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:06:38.514016 containerd[1545]: time="2025-05-14T18:06:38.512184316Z" level=info msg="Container 1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:38.529664 containerd[1545]: time="2025-05-14T18:06:38.529603591Z" level=info msg="CreateContainer within sandbox \"614291c8bb51d4ee8c8ce0d138f2c68257da83fb64be8778380deba88dc85c1f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f\"" May 14 18:06:38.532607 containerd[1545]: time="2025-05-14T18:06:38.532515121Z" level=info msg="StartContainer for \"1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f\"" May 14 18:06:38.536235 containerd[1545]: time="2025-05-14T18:06:38.536101131Z" level=info msg="connecting to shim 1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f" address="unix:///run/containerd/s/b524fdf45e7153675f70608c3c7aaa74b311e41e5a0d14cac75eb207f39388eb" protocol=ttrpc version=3 May 14 18:06:38.561220 containerd[1545]: time="2025-05-14T18:06:38.560584003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kb4r2,Uid:84f10dc4-cc8f-4f62-914c-3e3369d05915,Namespace:calico-system,Attempt:0,} returns sandbox id \"d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35\"" May 14 18:06:38.601330 systemd[1]: Started cri-containerd-1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f.scope - libcontainer container 1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f. 
May 14 18:06:38.686988 containerd[1545]: time="2025-05-14T18:06:38.686922042Z" level=info msg="StartContainer for \"1ff6f49781e40e88dc4c81332308250d16b791ce2dcae238e11675e868ae3b6f\" returns successfully" May 14 18:06:38.975598 containerd[1545]: time="2025-05-14T18:06:38.975392975Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:38.977435 containerd[1545]: time="2025-05-14T18:06:38.977008981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 18:06:38.979938 containerd[1545]: time="2025-05-14T18:06:38.979887521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 486.622691ms" May 14 18:06:38.980152 containerd[1545]: time="2025-05-14T18:06:38.980133971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:06:38.984164 containerd[1545]: time="2025-05-14T18:06:38.984130042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:06:38.986250 containerd[1545]: time="2025-05-14T18:06:38.986188461Z" level=info msg="CreateContainer within sandbox \"46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:06:38.993340 containerd[1545]: time="2025-05-14T18:06:38.993131113Z" level=info msg="Container 9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:39.015594 
containerd[1545]: time="2025-05-14T18:06:39.015499555Z" level=info msg="CreateContainer within sandbox \"46f33d359bb1d433631957cbff0cfb547d76a9ea14d0b0d0989848b7fbe99b8d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d\"" May 14 18:06:39.016524 containerd[1545]: time="2025-05-14T18:06:39.016486154Z" level=info msg="StartContainer for \"9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d\"" May 14 18:06:39.018881 containerd[1545]: time="2025-05-14T18:06:39.018684820Z" level=info msg="connecting to shim 9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d" address="unix:///run/containerd/s/7f105a70d2e1a248b92ecc4fe4f60662af10aee521e5ffc36770dca5d4778992" protocol=ttrpc version=3 May 14 18:06:39.067234 systemd[1]: Started cri-containerd-9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d.scope - libcontainer container 9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d. 
May 14 18:06:39.154805 containerd[1545]: time="2025-05-14T18:06:39.154695098Z" level=info msg="StartContainer for \"9f20337a705d41a426ff44897f8d6962beead8b474546411af9f51e738a5153d\" returns successfully" May 14 18:06:39.170149 kubelet[2724]: E0514 18:06:39.169822 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:39.184319 kubelet[2724]: I0514 18:06:39.183704 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c66df7d94-gk6v7" podStartSLOduration=25.466143021 podStartE2EDuration="29.183681234s" podCreationTimestamp="2025-05-14 18:06:10 +0000 UTC" firstStartedPulling="2025-05-14 18:06:34.773943059 +0000 UTC m=+35.216382012" lastFinishedPulling="2025-05-14 18:06:38.491481275 +0000 UTC m=+38.933920225" observedRunningTime="2025-05-14 18:06:39.182708764 +0000 UTC m=+39.625147722" watchObservedRunningTime="2025-05-14 18:06:39.183681234 +0000 UTC m=+39.626120191" May 14 18:06:39.202223 systemd-networkd[1440]: cali32023a19846: Gained IPv6LL May 14 18:06:39.202569 systemd-networkd[1440]: cali05ed5f20f68: Gained IPv6LL May 14 18:06:39.205257 kubelet[2724]: I0514 18:06:39.204968 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c66df7d94-5lcgl" podStartSLOduration=25.598797701 podStartE2EDuration="29.20449155s" podCreationTimestamp="2025-05-14 18:06:10 +0000 UTC" firstStartedPulling="2025-05-14 18:06:35.376893492 +0000 UTC m=+35.819332431" lastFinishedPulling="2025-05-14 18:06:38.982587325 +0000 UTC m=+39.425026280" observedRunningTime="2025-05-14 18:06:39.201587779 +0000 UTC m=+39.644026736" watchObservedRunningTime="2025-05-14 18:06:39.20449155 +0000 UTC m=+39.646930487" May 14 18:06:40.177304 kubelet[2724]: I0514 18:06:40.176492 2724 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" May 14 18:06:40.177304 kubelet[2724]: E0514 18:06:40.176776 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 14 18:06:40.177304 kubelet[2724]: I0514 18:06:40.176866 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:06:40.290246 systemd-networkd[1440]: cali2e9eb38060f: Gained IPv6LL May 14 18:06:41.568408 containerd[1545]: time="2025-05-14T18:06:41.568333892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:41.570123 containerd[1545]: time="2025-05-14T18:06:41.570043637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 18:06:41.570837 containerd[1545]: time="2025-05-14T18:06:41.570758172Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:41.574480 containerd[1545]: time="2025-05-14T18:06:41.574399316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.590007688s" May 14 18:06:41.574818 containerd[1545]: time="2025-05-14T18:06:41.574449950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 18:06:41.575097 containerd[1545]: time="2025-05-14T18:06:41.574914949Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:41.578325 containerd[1545]: time="2025-05-14T18:06:41.578172658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 18:06:41.609314 containerd[1545]: time="2025-05-14T18:06:41.609277720Z" level=info msg="CreateContainer within sandbox \"10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 18:06:41.639007 containerd[1545]: time="2025-05-14T18:06:41.638472236Z" level=info msg="Container e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:41.645045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35721709.mount: Deactivated successfully. May 14 18:06:41.658498 containerd[1545]: time="2025-05-14T18:06:41.658453327Z" level=info msg="CreateContainer within sandbox \"10492f0b8f5d7c4fb20256e5ce3f516ea76f66bcb23cb35fbe411034228674f5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\"" May 14 18:06:41.659494 containerd[1545]: time="2025-05-14T18:06:41.659456120Z" level=info msg="StartContainer for \"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\"" May 14 18:06:41.661302 containerd[1545]: time="2025-05-14T18:06:41.661246757Z" level=info msg="connecting to shim e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b" address="unix:///run/containerd/s/959a4d0e8c9e0ba84e6a8d66d0f2f5d48dec7fa6060bd2801942874507be3548" protocol=ttrpc version=3 May 14 18:06:41.710326 systemd[1]: Started cri-containerd-e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b.scope - libcontainer container 
e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b. May 14 18:06:41.836469 containerd[1545]: time="2025-05-14T18:06:41.835953267Z" level=info msg="StartContainer for \"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\" returns successfully" May 14 18:06:42.266473 containerd[1545]: time="2025-05-14T18:06:42.266422274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\" id:\"716ccf5bf973edbcc0da97ffa54fe3d96b50458f723e070074683324f29e04ee\" pid:4641 exited_at:{seconds:1747246002 nanos:265966876}" May 14 18:06:42.286085 kubelet[2724]: I0514 18:06:42.286018 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bd7bcbdff-92nwd" podStartSLOduration=27.279652657 podStartE2EDuration="31.285959562s" podCreationTimestamp="2025-05-14 18:06:11 +0000 UTC" firstStartedPulling="2025-05-14 18:06:37.570681212 +0000 UTC m=+38.013120162" lastFinishedPulling="2025-05-14 18:06:41.576988129 +0000 UTC m=+42.019427067" observedRunningTime="2025-05-14 18:06:42.223252752 +0000 UTC m=+42.665691726" watchObservedRunningTime="2025-05-14 18:06:42.285959562 +0000 UTC m=+42.728398516" May 14 18:06:42.998110 containerd[1545]: time="2025-05-14T18:06:42.997836818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:42.999391 containerd[1545]: time="2025-05-14T18:06:42.999118668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 18:06:43.000553 containerd[1545]: time="2025-05-14T18:06:43.000438318Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:43.003256 containerd[1545]: time="2025-05-14T18:06:43.003201446Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:06:43.004258 containerd[1545]: time="2025-05-14T18:06:43.004207206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.425959525s" May 14 18:06:43.004440 containerd[1545]: time="2025-05-14T18:06:43.004421126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 18:06:43.008695 containerd[1545]: time="2025-05-14T18:06:43.008630188Z" level=info msg="CreateContainer within sandbox \"d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 18:06:43.040593 containerd[1545]: time="2025-05-14T18:06:43.038403728Z" level=info msg="Container e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8: CDI devices from CRI Config.CDIDevices: []" May 14 18:06:43.076356 containerd[1545]: time="2025-05-14T18:06:43.076258089Z" level=info msg="CreateContainer within sandbox \"d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8\"" May 14 18:06:43.077556 containerd[1545]: time="2025-05-14T18:06:43.077503331Z" level=info msg="StartContainer for \"e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8\"" May 14 18:06:43.081888 containerd[1545]: time="2025-05-14T18:06:43.081824710Z" level=info msg="connecting to shim 
e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8" address="unix:///run/containerd/s/0c737662c2bf4f877f93a49e434eddf14ecd59fe48e197a8975a3246c3da0e4b" protocol=ttrpc version=3 May 14 18:06:43.125311 systemd[1]: Started cri-containerd-e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8.scope - libcontainer container e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8. May 14 18:06:43.234584 containerd[1545]: time="2025-05-14T18:06:43.234421931Z" level=info msg="StartContainer for \"e0d29286d218dd87a6809087c99350cefcf1509f331834f69411c6aa8d3730d8\" returns successfully" May 14 18:06:43.238853 containerd[1545]: time="2025-05-14T18:06:43.238786756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 18:06:45.062104 systemd[1]: Started sshd@12-165.232.128.115:22-139.178.89.65:37500.service - OpenSSH per-connection server daemon (139.178.89.65:37500). May 14 18:06:45.258396 sshd[4697]: Accepted publickey for core from 139.178.89.65 port 37500 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw May 14 18:06:45.263332 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:45.286528 systemd-logind[1517]: New session 10 of user core. May 14 18:06:45.295397 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:06:46.097843 sshd[4700]: Connection closed by 139.178.89.65 port 37500 May 14 18:06:46.099061 sshd-session[4697]: pam_unix(sshd:session): session closed for user core May 14 18:06:46.110219 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. May 14 18:06:46.110383 systemd[1]: sshd@12-165.232.128.115:22-139.178.89.65:37500.service: Deactivated successfully. May 14 18:06:46.118932 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:06:46.130900 systemd-logind[1517]: Removed session 10. 
May 14 18:06:46.185846 containerd[1545]: time="2025-05-14T18:06:46.185768397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:46.187183 containerd[1545]: time="2025-05-14T18:06:46.186508916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 14 18:06:46.188269 containerd[1545]: time="2025-05-14T18:06:46.188228935Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:46.192579 containerd[1545]: time="2025-05-14T18:06:46.192502490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:06:46.193636 containerd[1545]: time="2025-05-14T18:06:46.193580774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.954727867s"
May 14 18:06:46.193636 containerd[1545]: time="2025-05-14T18:06:46.193635579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 14 18:06:46.199448 containerd[1545]: time="2025-05-14T18:06:46.199382096Z" level=info msg="CreateContainer within sandbox \"d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 14 18:06:46.219007 containerd[1545]: time="2025-05-14T18:06:46.218872494Z" level=info msg="Container 239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310: CDI devices from CRI Config.CDIDevices: []"
May 14 18:06:46.296241 containerd[1545]: time="2025-05-14T18:06:46.296173536Z" level=info msg="CreateContainer within sandbox \"d89bfc48f48c9103baab7588f1e5ff01f6dc68ce7163d5d0a385577c458c9f35\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310\""
May 14 18:06:46.303187 containerd[1545]: time="2025-05-14T18:06:46.303108263Z" level=info msg="StartContainer for \"239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310\""
May 14 18:06:46.305447 containerd[1545]: time="2025-05-14T18:06:46.305368730Z" level=info msg="connecting to shim 239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310" address="unix:///run/containerd/s/0c737662c2bf4f877f93a49e434eddf14ecd59fe48e197a8975a3246c3da0e4b" protocol=ttrpc version=3
May 14 18:06:46.356445 systemd[1]: Started cri-containerd-239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310.scope - libcontainer container 239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310.
May 14 18:06:46.482165 containerd[1545]: time="2025-05-14T18:06:46.481786340Z" level=info msg="StartContainer for \"239e967b457995526125ec6eac00a028102fb2f4f14ec7117ccb43d0e728c310\" returns successfully"
May 14 18:06:47.307648 kubelet[2724]: I0514 18:06:47.307473 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 18:06:47.312933 kubelet[2724]: I0514 18:06:47.312869 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 18:06:51.123056 systemd[1]: Started sshd@13-165.232.128.115:22-139.178.89.65:41490.service - OpenSSH per-connection server daemon (139.178.89.65:41490).
May 14 18:06:51.214193 sshd[4751]: Accepted publickey for core from 139.178.89.65 port 41490 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:06:51.216586 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:51.223516 systemd-logind[1517]: New session 11 of user core.
May 14 18:06:51.232369 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 18:06:51.478458 sshd[4753]: Connection closed by 139.178.89.65 port 41490
May 14 18:06:51.479648 sshd-session[4751]: pam_unix(sshd:session): session closed for user core
May 14 18:06:51.488939 systemd[1]: sshd@13-165.232.128.115:22-139.178.89.65:41490.service: Deactivated successfully.
May 14 18:06:51.495324 systemd[1]: session-11.scope: Deactivated successfully.
May 14 18:06:51.498256 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit.
May 14 18:06:51.501956 systemd-logind[1517]: Removed session 11.
May 14 18:06:52.282647 kubelet[2724]: I0514 18:06:52.282196 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:06:52.317417 kubelet[2724]: I0514 18:06:52.316198 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kb4r2" podStartSLOduration=33.728988439 podStartE2EDuration="41.316176148s" podCreationTimestamp="2025-05-14 18:06:11 +0000 UTC" firstStartedPulling="2025-05-14 18:06:38.609282249 +0000 UTC m=+39.051721185" lastFinishedPulling="2025-05-14 18:06:46.196469955 +0000 UTC m=+46.638908894" observedRunningTime="2025-05-14 18:06:47.24968131 +0000 UTC m=+47.692120261" watchObservedRunningTime="2025-05-14 18:06:52.316176148 +0000 UTC m=+52.758615106"
May 14 18:06:56.504374 systemd[1]: Started sshd@14-165.232.128.115:22-139.178.89.65:36268.service - OpenSSH per-connection server daemon (139.178.89.65:36268).
May 14 18:06:56.602157 sshd[4777]: Accepted publickey for core from 139.178.89.65 port 36268 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:06:56.604844 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:56.616328 systemd-logind[1517]: New session 12 of user core.
May 14 18:06:56.624435 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 18:06:56.814119 sshd[4779]: Connection closed by 139.178.89.65 port 36268
May 14 18:06:56.815344 sshd-session[4777]: pam_unix(sshd:session): session closed for user core
May 14 18:06:56.828404 systemd[1]: sshd@14-165.232.128.115:22-139.178.89.65:36268.service: Deactivated successfully.
May 14 18:06:56.832832 systemd[1]: session-12.scope: Deactivated successfully.
May 14 18:06:56.835117 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit.
May 14 18:06:56.840426 systemd[1]: Started sshd@15-165.232.128.115:22-139.178.89.65:36280.service - OpenSSH per-connection server daemon (139.178.89.65:36280).
May 14 18:06:56.843445 systemd-logind[1517]: Removed session 12.
May 14 18:06:56.908967 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 36280 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:06:56.911433 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:56.921531 systemd-logind[1517]: New session 13 of user core.
May 14 18:06:56.935297 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 18:06:57.174732 sshd[4793]: Connection closed by 139.178.89.65 port 36280
May 14 18:06:57.176561 sshd-session[4791]: pam_unix(sshd:session): session closed for user core
May 14 18:06:57.191932 systemd[1]: sshd@15-165.232.128.115:22-139.178.89.65:36280.service: Deactivated successfully.
May 14 18:06:57.196627 systemd[1]: session-13.scope: Deactivated successfully.
May 14 18:06:57.204068 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit.
May 14 18:06:57.209130 systemd[1]: Started sshd@16-165.232.128.115:22-139.178.89.65:36288.service - OpenSSH per-connection server daemon (139.178.89.65:36288).
May 14 18:06:57.214796 systemd-logind[1517]: Removed session 13.
May 14 18:06:57.331862 sshd[4803]: Accepted publickey for core from 139.178.89.65 port 36288 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:06:57.336896 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:06:57.347220 systemd-logind[1517]: New session 14 of user core.
May 14 18:06:57.354372 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 18:06:57.540849 sshd[4805]: Connection closed by 139.178.89.65 port 36288
May 14 18:06:57.543295 sshd-session[4803]: pam_unix(sshd:session): session closed for user core
May 14 18:06:57.549162 systemd[1]: sshd@16-165.232.128.115:22-139.178.89.65:36288.service: Deactivated successfully.
May 14 18:06:57.553259 systemd[1]: session-14.scope: Deactivated successfully.
May 14 18:06:57.556193 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit.
May 14 18:06:57.559221 systemd-logind[1517]: Removed session 14.
May 14 18:06:58.485946 containerd[1545]: time="2025-05-14T18:06:58.485821436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\" id:\"6416fc34149c8dc80203fc7bb331082e270a2e20d186a29c84f921e3af056210\" pid:4829 exited_at:{seconds:1747246018 nanos:485280256}"
May 14 18:07:00.794010 containerd[1545]: time="2025-05-14T18:07:00.793926860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\" id:\"12d0855afd5f6bacff55679f5530803a0e7d1172f66a12daf8a8424669460cd4\" pid:4860 exit_status:1 exited_at:{seconds:1747246020 nanos:793243921}"
May 14 18:07:02.559885 systemd[1]: Started sshd@17-165.232.128.115:22-139.178.89.65:36290.service - OpenSSH per-connection server daemon (139.178.89.65:36290).
May 14 18:07:02.702763 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 36290 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:02.704828 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:02.715539 systemd-logind[1517]: New session 15 of user core.
May 14 18:07:02.722355 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 18:07:02.995402 sshd[4876]: Connection closed by 139.178.89.65 port 36290
May 14 18:07:02.996644 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
May 14 18:07:03.009499 systemd[1]: sshd@17-165.232.128.115:22-139.178.89.65:36290.service: Deactivated successfully.
May 14 18:07:03.013130 systemd[1]: session-15.scope: Deactivated successfully.
May 14 18:07:03.015205 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit.
May 14 18:07:03.017306 systemd-logind[1517]: Removed session 15.
May 14 18:07:04.982056 kubelet[2724]: I0514 18:07:04.981849 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:07:06.379170 systemd[1]: Started sshd@18-165.232.128.115:22-185.233.247.245:60722.service - OpenSSH per-connection server daemon (185.233.247.245:60722).
May 14 18:07:06.812150 sshd[4894]: Connection closed by 185.233.247.245 port 60722 [preauth]
May 14 18:07:06.815155 systemd[1]: sshd@18-165.232.128.115:22-185.233.247.245:60722.service: Deactivated successfully.
May 14 18:07:08.029551 systemd[1]: Started sshd@19-165.232.128.115:22-139.178.89.65:60864.service - OpenSSH per-connection server daemon (139.178.89.65:60864).
May 14 18:07:08.136056 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 60864 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:08.140706 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:08.150960 systemd-logind[1517]: New session 16 of user core.
May 14 18:07:08.156294 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:07:08.360654 sshd[4901]: Connection closed by 139.178.89.65 port 60864
May 14 18:07:08.362114 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
May 14 18:07:08.369908 systemd[1]: sshd@19-165.232.128.115:22-139.178.89.65:60864.service: Deactivated successfully.
May 14 18:07:08.375885 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:07:08.378348 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit.
May 14 18:07:08.382075 systemd-logind[1517]: Removed session 16.
May 14 18:07:10.811205 kubelet[2724]: E0514 18:07:10.811027 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:07:12.523206 systemd[1]: Started sshd@20-165.232.128.115:22-45.249.8.86:40894.service - OpenSSH per-connection server daemon (45.249.8.86:40894).
May 14 18:07:13.198491 sshd[4920]: Connection closed by 45.249.8.86 port 40894 [preauth]
May 14 18:07:13.202084 systemd[1]: sshd@20-165.232.128.115:22-45.249.8.86:40894.service: Deactivated successfully.
May 14 18:07:13.386336 systemd[1]: Started sshd@21-165.232.128.115:22-139.178.89.65:60872.service - OpenSSH per-connection server daemon (139.178.89.65:60872).
May 14 18:07:13.515776 sshd[4925]: Accepted publickey for core from 139.178.89.65 port 60872 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:13.521495 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:13.532934 systemd-logind[1517]: New session 17 of user core.
May 14 18:07:13.539364 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:07:13.846682 sshd[4927]: Connection closed by 139.178.89.65 port 60872
May 14 18:07:13.847387 sshd-session[4925]: pam_unix(sshd:session): session closed for user core
May 14 18:07:13.858085 systemd[1]: sshd@21-165.232.128.115:22-139.178.89.65:60872.service: Deactivated successfully.
May 14 18:07:13.858603 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
May 14 18:07:13.865721 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:07:13.872767 systemd-logind[1517]: Removed session 17.
May 14 18:07:18.865189 systemd[1]: Started sshd@22-165.232.128.115:22-139.178.89.65:35882.service - OpenSSH per-connection server daemon (139.178.89.65:35882).
May 14 18:07:18.956028 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 35882 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:18.958617 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:18.965447 systemd-logind[1517]: New session 18 of user core.
May 14 18:07:18.975364 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:07:19.147471 sshd[4943]: Connection closed by 139.178.89.65 port 35882
May 14 18:07:19.148299 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
May 14 18:07:19.165775 systemd[1]: sshd@22-165.232.128.115:22-139.178.89.65:35882.service: Deactivated successfully.
May 14 18:07:19.168897 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:07:19.170566 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
May 14 18:07:19.176185 systemd[1]: Started sshd@23-165.232.128.115:22-139.178.89.65:35894.service - OpenSSH per-connection server daemon (139.178.89.65:35894).
May 14 18:07:19.178735 systemd-logind[1517]: Removed session 18.
May 14 18:07:19.262121 sshd[4955]: Accepted publickey for core from 139.178.89.65 port 35894 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:19.264347 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:19.271570 systemd-logind[1517]: New session 19 of user core.
May 14 18:07:19.279288 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:07:19.643818 sshd[4957]: Connection closed by 139.178.89.65 port 35894
May 14 18:07:19.649960 sshd-session[4955]: pam_unix(sshd:session): session closed for user core
May 14 18:07:19.665670 systemd[1]: Started sshd@24-165.232.128.115:22-139.178.89.65:35908.service - OpenSSH per-connection server daemon (139.178.89.65:35908).
May 14 18:07:19.683653 systemd[1]: sshd@23-165.232.128.115:22-139.178.89.65:35894.service: Deactivated successfully.
May 14 18:07:19.693858 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:07:19.696404 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
May 14 18:07:19.699838 systemd-logind[1517]: Removed session 19.
May 14 18:07:19.838080 sshd[4963]: Accepted publickey for core from 139.178.89.65 port 35908 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:19.840948 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:19.849694 systemd-logind[1517]: New session 20 of user core.
May 14 18:07:19.857358 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:07:21.603696 systemd[1]: Started sshd@25-165.232.128.115:22-193.32.162.38:42626.service - OpenSSH per-connection server daemon (193.32.162.38:42626).
May 14 18:07:22.482834 sshd[4977]: Connection closed by authenticating user root 193.32.162.38 port 42626 [preauth]
May 14 18:07:22.486596 systemd[1]: sshd@25-165.232.128.115:22-193.32.162.38:42626.service: Deactivated successfully.
May 14 18:07:22.498787 sshd[4968]: Connection closed by 139.178.89.65 port 35908
May 14 18:07:22.500300 sshd-session[4963]: pam_unix(sshd:session): session closed for user core
May 14 18:07:22.514333 systemd[1]: sshd@24-165.232.128.115:22-139.178.89.65:35908.service: Deactivated successfully.
May 14 18:07:22.519625 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:07:22.519942 systemd[1]: session-20.scope: Consumed 808ms CPU time, 68.7M memory peak.
May 14 18:07:22.523455 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit.
May 14 18:07:22.535286 systemd[1]: Started sshd@26-165.232.128.115:22-139.178.89.65:35920.service - OpenSSH per-connection server daemon (139.178.89.65:35920).
May 14 18:07:22.538327 systemd-logind[1517]: Removed session 20.
May 14 18:07:22.637740 sshd[4987]: Accepted publickey for core from 139.178.89.65 port 35920 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:22.640630 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:22.649331 systemd-logind[1517]: New session 21 of user core.
May 14 18:07:22.658366 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 18:07:23.478740 sshd[4991]: Connection closed by 139.178.89.65 port 35920
May 14 18:07:23.479652 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
May 14 18:07:23.501167 systemd[1]: sshd@26-165.232.128.115:22-139.178.89.65:35920.service: Deactivated successfully.
May 14 18:07:23.508460 systemd[1]: session-21.scope: Deactivated successfully.
May 14 18:07:23.512394 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit.
May 14 18:07:23.518927 systemd[1]: Started sshd@27-165.232.128.115:22-139.178.89.65:35934.service - OpenSSH per-connection server daemon (139.178.89.65:35934).
May 14 18:07:23.522102 systemd-logind[1517]: Removed session 21.
May 14 18:07:23.592489 sshd[5002]: Accepted publickey for core from 139.178.89.65 port 35934 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:23.595779 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:23.608531 systemd-logind[1517]: New session 22 of user core.
May 14 18:07:23.615611 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 18:07:23.804702 sshd[5004]: Connection closed by 139.178.89.65 port 35934
May 14 18:07:23.806128 sshd-session[5002]: pam_unix(sshd:session): session closed for user core
May 14 18:07:23.810331 kubelet[2724]: E0514 18:07:23.809924 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:07:23.819961 systemd[1]: sshd@27-165.232.128.115:22-139.178.89.65:35934.service: Deactivated successfully.
May 14 18:07:23.826247 systemd[1]: session-22.scope: Deactivated successfully.
May 14 18:07:23.827539 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit.
May 14 18:07:23.830853 systemd-logind[1517]: Removed session 22.
May 14 18:07:24.997850 containerd[1545]: time="2025-05-14T18:07:24.997777914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\" id:\"2affc83b6d757563a27964fd3f46629c8be777f2b0335034d95bc14582dba221\" pid:5027 exited_at:{seconds:1747246044 nanos:997185144}"
May 14 18:07:28.473438 containerd[1545]: time="2025-05-14T18:07:28.473372194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9d2bab972c2464b99d5486c81acdf12c89a734b8c393293889713c30d17340b\" id:\"2faedb872ce47a2da9b9d01479dd43891f1b0299aad891bfe8ffa6b47943cf1a\" pid:5051 exited_at:{seconds:1747246048 nanos:472922398}"
May 14 18:07:28.821793 systemd[1]: Started sshd@28-165.232.128.115:22-139.178.89.65:42360.service - OpenSSH per-connection server daemon (139.178.89.65:42360).
May 14 18:07:28.891702 sshd[5061]: Accepted publickey for core from 139.178.89.65 port 42360 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:28.894362 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:28.902121 systemd-logind[1517]: New session 23 of user core.
May 14 18:07:28.916364 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 18:07:29.109430 sshd[5063]: Connection closed by 139.178.89.65 port 42360
May 14 18:07:29.111907 sshd-session[5061]: pam_unix(sshd:session): session closed for user core
May 14 18:07:29.117836 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit.
May 14 18:07:29.120091 systemd[1]: sshd@28-165.232.128.115:22-139.178.89.65:42360.service: Deactivated successfully.
May 14 18:07:29.124908 systemd[1]: session-23.scope: Deactivated successfully.
May 14 18:07:29.129254 systemd-logind[1517]: Removed session 23.
May 14 18:07:30.776620 containerd[1545]: time="2025-05-14T18:07:30.776487651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e02bfca41a1468af4e9e3ee327f04fc937030d62d91ba8219c7341fe8a18490\" id:\"21eda3a9c5d2ec07ec495e7f19a67c31c7fd2247d75333df1469d734b900b99f\" pid:5086 exited_at:{seconds:1747246050 nanos:775900074}"
May 14 18:07:30.782049 kubelet[2724]: E0514 18:07:30.781505 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:07:31.810817 kubelet[2724]: E0514 18:07:31.810757 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:07:33.811118 kubelet[2724]: E0514 18:07:33.809905 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 14 18:07:34.126341 systemd[1]: Started sshd@29-165.232.128.115:22-139.178.89.65:42364.service - OpenSSH per-connection server daemon (139.178.89.65:42364).
May 14 18:07:34.220015 sshd[5099]: Accepted publickey for core from 139.178.89.65 port 42364 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:34.226746 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:34.240068 systemd-logind[1517]: New session 24 of user core.
May 14 18:07:34.250842 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 18:07:34.551625 sshd[5101]: Connection closed by 139.178.89.65 port 42364
May 14 18:07:34.556986 sshd-session[5099]: pam_unix(sshd:session): session closed for user core
May 14 18:07:34.570172 systemd[1]: sshd@29-165.232.128.115:22-139.178.89.65:42364.service: Deactivated successfully.
May 14 18:07:34.577254 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:07:34.579523 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit.
May 14 18:07:34.584407 systemd-logind[1517]: Removed session 24.
May 14 18:07:39.570923 systemd[1]: Started sshd@30-165.232.128.115:22-139.178.89.65:52870.service - OpenSSH per-connection server daemon (139.178.89.65:52870).
May 14 18:07:39.699352 sshd[5114]: Accepted publickey for core from 139.178.89.65 port 52870 ssh2: RSA SHA256:I6v7602y95t0HxsKZunlpQRdbWqTS6jK7hLc8ah5Xaw
May 14 18:07:39.703636 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:07:39.715314 systemd-logind[1517]: New session 25 of user core.
May 14 18:07:39.720247 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:07:39.871623 systemd[1]: Started sshd@31-165.232.128.115:22-185.233.247.245:42278.service - OpenSSH per-connection server daemon (185.233.247.245:42278).
May 14 18:07:40.022019 sshd[5116]: Connection closed by 139.178.89.65 port 52870
May 14 18:07:40.022916 sshd-session[5114]: pam_unix(sshd:session): session closed for user core
May 14 18:07:40.031082 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit.
May 14 18:07:40.031481 systemd[1]: sshd@30-165.232.128.115:22-139.178.89.65:52870.service: Deactivated successfully.
May 14 18:07:40.036090 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:07:40.038961 systemd-logind[1517]: Removed session 25.
May 14 18:07:40.282012 sshd[5124]: Connection closed by 185.233.247.245 port 42278 [preauth]
May 14 18:07:40.285992 systemd[1]: sshd@31-165.232.128.115:22-185.233.247.245:42278.service: Deactivated successfully.
May 14 18:07:41.554312 systemd[1]: Started sshd@32-165.232.128.115:22-45.79.181.223:19376.service - OpenSSH per-connection server daemon (45.79.181.223:19376).
May 14 18:07:42.603744 sshd[5133]: Connection closed by 45.79.181.223 port 19376 [preauth]
May 14 18:07:42.605285 systemd[1]: sshd@32-165.232.128.115:22-45.79.181.223:19376.service: Deactivated successfully.
May 14 18:07:42.713940 systemd[1]: Started sshd@33-165.232.128.115:22-45.79.181.223:19392.service - OpenSSH per-connection server daemon (45.79.181.223:19392).