May 13 23:52:38.848568 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:52:38.848591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:52:38.848603 kernel: BIOS-provided physical RAM map:
May 13 23:52:38.848610 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 23:52:38.848616 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 23:52:38.848622 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 23:52:38.848629 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 13 23:52:38.848636 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 13 23:52:38.848642 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:52:38.848649 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 23:52:38.848657 kernel: NX (Execute Disable) protection: active
May 13 23:52:38.848664 kernel: APIC: Static calls initialized
May 13 23:52:38.848674 kernel: SMBIOS 2.8 present.
May 13 23:52:38.848684 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 13 23:52:38.848698 kernel: Hypervisor detected: KVM
May 13 23:52:38.848708 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 23:52:38.848725 kernel: kvm-clock: using sched offset of 2592480520 cycles
May 13 23:52:38.848736 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 23:52:38.848747 kernel: tsc: Detected 2494.134 MHz processor
May 13 23:52:38.848757 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:52:38.848767 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:52:38.848778 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 13 23:52:38.848790 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 23:52:38.848801 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:52:38.848810 kernel: ACPI: Early table checksum verification disabled
May 13 23:52:38.848820 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 13 23:52:38.848828 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848836 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848843 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848850 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 13 23:52:38.848858 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848865 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848873 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848882 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:52:38.848889 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 13 23:52:38.848897 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 13 23:52:38.848904 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 13 23:52:38.848911 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 13 23:52:38.848919 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 13 23:52:38.848926 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 13 23:52:38.848938 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 13 23:52:38.848946 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 13 23:52:38.848953 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 13 23:52:38.848961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 13 23:52:38.848969 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 13 23:52:38.848980 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 13 23:52:38.848988 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 13 23:52:38.848996 kernel: Zone ranges:
May 13 23:52:38.849006 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:52:38.849013 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 13 23:52:38.849021 kernel: Normal empty
May 13 23:52:38.849028 kernel: Movable zone start for each node
May 13 23:52:38.849036 kernel: Early memory node ranges
May 13 23:52:38.849044 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 23:52:38.849051 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 13 23:52:38.849059 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 13 23:52:38.849067 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:52:38.849076 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 23:52:38.849086 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 13 23:52:38.849094 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 23:52:38.849102 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 23:52:38.849109 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 23:52:38.849117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 23:52:38.849125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 23:52:38.849132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:52:38.849140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 23:52:38.849150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 23:52:38.849158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:52:38.849165 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:52:38.849173 kernel: TSC deadline timer available
May 13 23:52:38.849181 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 23:52:38.849189 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 23:52:38.849196 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 13 23:52:38.849206 kernel: Booting paravirtualized kernel on KVM
May 13 23:52:38.849227 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:52:38.849238 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 23:52:38.849246 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 13 23:52:38.849253 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 13 23:52:38.849261 kernel: pcpu-alloc: [0] 0 1
May 13 23:52:38.849268 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 23:52:38.849277 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:52:38.849286 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:52:38.849293 kernel: random: crng init done
May 13 23:52:38.849303 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:52:38.849311 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 13 23:52:38.849318 kernel: Fallback order for Node 0: 0
May 13 23:52:38.849326 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 13 23:52:38.849334 kernel: Policy zone: DMA32
May 13 23:52:38.849341 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:52:38.849349 kernel: Memory: 1967108K/2096612K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 129244K reserved, 0K cma-reserved)
May 13 23:52:38.849357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 23:52:38.849365 kernel: Kernel/User page tables isolation: enabled
May 13 23:52:38.849375 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:52:38.849382 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:52:38.849390 kernel: Dynamic Preempt: voluntary
May 13 23:52:38.849398 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:52:38.849407 kernel: rcu: RCU event tracing is enabled.
May 13 23:52:38.849414 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 23:52:38.849422 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:52:38.849430 kernel: Rude variant of Tasks RCU enabled.
May 13 23:52:38.849455 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:52:38.849463 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:52:38.849474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 23:52:38.849482 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 23:52:38.849490 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:52:38.849500 kernel: Console: colour VGA+ 80x25
May 13 23:52:38.849508 kernel: printk: console [tty0] enabled
May 13 23:52:38.849516 kernel: printk: console [ttyS0] enabled
May 13 23:52:38.849525 kernel: ACPI: Core revision 20230628
May 13 23:52:38.849533 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 23:52:38.849541 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:52:38.849552 kernel: x2apic enabled
May 13 23:52:38.849560 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 23:52:38.849568 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 23:52:38.849579 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
May 13 23:52:38.849592 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
May 13 23:52:38.849604 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 23:52:38.849617 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 23:52:38.849642 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:52:38.849655 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:52:38.849668 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:52:38.849681 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 13 23:52:38.849697 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 23:52:38.849722 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 23:52:38.849736 kernel: MDS: Mitigation: Clear CPU buffers
May 13 23:52:38.849755 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 13 23:52:38.849768 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:52:38.849780 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:52:38.849789 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:52:38.849798 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:52:38.849807 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 13 23:52:38.849815 kernel: Freeing SMP alternatives memory: 32K
May 13 23:52:38.849824 kernel: pid_max: default: 32768 minimum: 301
May 13 23:52:38.849833 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:52:38.849842 kernel: landlock: Up and running.
May 13 23:52:38.849850 kernel: SELinux: Initializing.
May 13 23:52:38.849861 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 13 23:52:38.849870 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 13 23:52:38.849879 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 13 23:52:38.849888 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:52:38.849896 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:52:38.849905 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:52:38.849914 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 13 23:52:38.849922 kernel: signal: max sigframe size: 1776
May 13 23:52:38.849933 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:52:38.849943 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:52:38.849952 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 13 23:52:38.849961 kernel: smp: Bringing up secondary CPUs ...
May 13 23:52:38.849969 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:52:38.849978 kernel: .... node #0, CPUs: #1
May 13 23:52:38.849986 kernel: smp: Brought up 1 node, 2 CPUs
May 13 23:52:38.849995 kernel: smpboot: Max logical packages: 1
May 13 23:52:38.850006 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
May 13 23:52:38.850015 kernel: devtmpfs: initialized
May 13 23:52:38.850026 kernel: x86/mm: Memory block size: 128MB
May 13 23:52:38.850035 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:52:38.850044 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 23:52:38.850052 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:52:38.850061 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:52:38.850070 kernel: audit: initializing netlink subsys (disabled)
May 13 23:52:38.850079 kernel: audit: type=2000 audit(1747180358.210:1): state=initialized audit_enabled=0 res=1
May 13 23:52:38.850087 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:52:38.850096 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:52:38.850106 kernel: cpuidle: using governor menu
May 13 23:52:38.850115 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:52:38.850124 kernel: dca service started, version 1.12.1
May 13 23:52:38.850132 kernel: PCI: Using configuration type 1 for base access
May 13 23:52:38.850141 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:52:38.850150 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:52:38.850158 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:52:38.850167 kernel: ACPI: Added _OSI(Module Device)
May 13 23:52:38.850176 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:52:38.850186 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:52:38.850195 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:52:38.850204 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:52:38.850212 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:52:38.850230 kernel: ACPI: Interpreter enabled
May 13 23:52:38.850239 kernel: ACPI: PM: (supports S0 S5)
May 13 23:52:38.850248 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:52:38.850256 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:52:38.850265 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:52:38.850277 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 23:52:38.851858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:52:38.852046 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:52:38.852187 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 13 23:52:38.852316 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 13 23:52:38.852333 kernel: acpiphp: Slot [3] registered
May 13 23:52:38.852346 kernel: acpiphp: Slot [4] registered
May 13 23:52:38.852365 kernel: acpiphp: Slot [5] registered
May 13 23:52:38.852377 kernel: acpiphp: Slot [6] registered
May 13 23:52:38.852391 kernel: acpiphp: Slot [7] registered
May 13 23:52:38.852401 kernel: acpiphp: Slot [8] registered
May 13 23:52:38.852410 kernel: acpiphp: Slot [9] registered
May 13 23:52:38.852418 kernel: acpiphp: Slot [10] registered
May 13 23:52:38.852427 kernel: acpiphp: Slot [11] registered
May 13 23:52:38.852436 kernel: acpiphp: Slot [12] registered
May 13 23:52:38.852445 kernel: acpiphp: Slot [13] registered
May 13 23:52:38.852456 kernel: acpiphp: Slot [14] registered
May 13 23:52:38.852464 kernel: acpiphp: Slot [15] registered
May 13 23:52:38.852473 kernel: acpiphp: Slot [16] registered
May 13 23:52:38.852482 kernel: acpiphp: Slot [17] registered
May 13 23:52:38.852490 kernel: acpiphp: Slot [18] registered
May 13 23:52:38.852499 kernel: acpiphp: Slot [19] registered
May 13 23:52:38.852508 kernel: acpiphp: Slot [20] registered
May 13 23:52:38.852516 kernel: acpiphp: Slot [21] registered
May 13 23:52:38.852525 kernel: acpiphp: Slot [22] registered
May 13 23:52:38.852533 kernel: acpiphp: Slot [23] registered
May 13 23:52:38.852544 kernel: acpiphp: Slot [24] registered
May 13 23:52:38.852553 kernel: acpiphp: Slot [25] registered
May 13 23:52:38.852561 kernel: acpiphp: Slot [26] registered
May 13 23:52:38.852570 kernel: acpiphp: Slot [27] registered
May 13 23:52:38.852578 kernel: acpiphp: Slot [28] registered
May 13 23:52:38.852587 kernel: acpiphp: Slot [29] registered
May 13 23:52:38.852595 kernel: acpiphp: Slot [30] registered
May 13 23:52:38.852604 kernel: acpiphp: Slot [31] registered
May 13 23:52:38.852612 kernel: PCI host bridge to bus 0000:00
May 13 23:52:38.852745 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:52:38.852843 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:52:38.852932 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:52:38.853032 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 13 23:52:38.853116 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 13 23:52:38.853199 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:52:38.853366 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 23:52:38.853479 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 23:52:38.853584 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 13 23:52:38.853677 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 13 23:52:38.853787 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 13 23:52:38.853881 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 13 23:52:38.853976 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 13 23:52:38.854073 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 13 23:52:38.854197 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 13 23:52:38.854346 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 13 23:52:38.854463 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 13 23:52:38.854558 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 23:52:38.854653 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 23:52:38.854755 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 13 23:52:38.855333 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 13 23:52:38.855436 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 13 23:52:38.855530 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 13 23:52:38.855618 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 13 23:52:38.855706 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 23:52:38.855812 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 13 23:52:38.855916 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 13 23:52:38.856008 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 13 23:52:38.856096 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 13 23:52:38.856195 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 23:52:38.858400 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 13 23:52:38.858609 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 13 23:52:38.858736 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 13 23:52:38.858856 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 13 23:52:38.858953 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 13 23:52:38.859057 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 13 23:52:38.859144 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 13 23:52:38.859248 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 13 23:52:38.859339 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 13 23:52:38.859427 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 13 23:52:38.859527 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 13 23:52:38.859644 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 13 23:52:38.859734 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 13 23:52:38.859823 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 13 23:52:38.859911 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 13 23:52:38.860012 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 13 23:52:38.860102 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 13 23:52:38.860194 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 13 23:52:38.860205 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 23:52:38.860248 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 23:52:38.860258 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 23:52:38.860266 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 23:52:38.860274 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 23:52:38.860283 kernel: iommu: Default domain type: Translated
May 13 23:52:38.860295 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:52:38.860303 kernel: PCI: Using ACPI for IRQ routing
May 13 23:52:38.860312 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 23:52:38.860320 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 23:52:38.860329 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 13 23:52:38.860419 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 23:52:38.860508 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 23:52:38.860634 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 23:52:38.860647 kernel: vgaarb: loaded
May 13 23:52:38.860659 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 23:52:38.860668 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 23:52:38.860676 kernel: clocksource: Switched to clocksource kvm-clock
May 13 23:52:38.860685 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:52:38.860693 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:52:38.860702 kernel: pnp: PnP ACPI init
May 13 23:52:38.860710 kernel: pnp: PnP ACPI: found 4 devices
May 13 23:52:38.860719 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:52:38.860727 kernel: NET: Registered PF_INET protocol family
May 13 23:52:38.860738 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:52:38.860746 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 13 23:52:38.860755 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:52:38.860763 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 13 23:52:38.860772 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 13 23:52:38.860780 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 13 23:52:38.860788 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 13 23:52:38.860797 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 13 23:52:38.860805 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:52:38.860816 kernel: NET: Registered PF_XDP protocol family
May 13 23:52:38.860902 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 23:52:38.860992 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 23:52:38.861071 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 23:52:38.861164 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 13 23:52:38.861290 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 13 23:52:38.861416 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 23:52:38.861510 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 23:52:38.861527 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 23:52:38.861615 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 26314 usecs
May 13 23:52:38.861626 kernel: PCI: CLS 0 bytes, default 64
May 13 23:52:38.861635 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 13 23:52:38.861644 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
May 13 23:52:38.861653 kernel: Initialise system trusted keyrings
May 13 23:52:38.861661 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 13 23:52:38.861669 kernel: Key type asymmetric registered
May 13 23:52:38.861681 kernel: Asymmetric key parser 'x509' registered
May 13 23:52:38.861689 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:52:38.861702 kernel: io scheduler mq-deadline registered
May 13 23:52:38.861723 kernel: io scheduler kyber registered
May 13 23:52:38.861734 kernel: io scheduler bfq registered
May 13 23:52:38.861759 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:52:38.861769 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 23:52:38.861778 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 23:52:38.861786 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 23:52:38.861795 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:52:38.861807 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:52:38.861816 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 23:52:38.861825 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 23:52:38.861839 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 23:52:38.861975 kernel: rtc_cmos 00:03: RTC can wake from S4
May 13 23:52:38.862065 kernel: rtc_cmos 00:03: registered as rtc0
May 13 23:52:38.862077 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:52:38.862165 kernel: rtc_cmos 00:03: setting system clock to 2025-05-13T23:52:38 UTC (1747180358)
May 13 23:52:38.863318 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 13 23:52:38.863336 kernel: intel_pstate: CPU model not supported
May 13 23:52:38.863345 kernel: NET: Registered PF_INET6 protocol family
May 13 23:52:38.863354 kernel: Segment Routing with IPv6
May 13 23:52:38.863362 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:52:38.863371 kernel: NET: Registered PF_PACKET protocol family
May 13 23:52:38.863380 kernel: Key type dns_resolver registered
May 13 23:52:38.863388 kernel: IPI shorthand broadcast: enabled
May 13 23:52:38.863402 kernel: sched_clock: Marking stable (701003086, 81177931)->(866760436, -84579419)
May 13 23:52:38.863411 kernel: registered taskstats version 1
May 13 23:52:38.863419 kernel: Loading compiled-in X.509 certificates
May 13 23:52:38.863428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 13 23:52:38.863436 kernel: Key type .fscrypt registered
May 13 23:52:38.863444 kernel: Key type fscrypt-provisioning registered
May 13 23:52:38.863453 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:52:38.863461 kernel: ima: Allocated hash algorithm: sha1
May 13 23:52:38.863469 kernel: ima: No architecture policies found
May 13 23:52:38.863480 kernel: clk: Disabling unused clocks
May 13 23:52:38.863489 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 13 23:52:38.863497 kernel: Write protecting the kernel read-only data: 40960k
May 13 23:52:38.863506 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 13 23:52:38.863529 kernel: Run /init as init process
May 13 23:52:38.863540 kernel: with arguments:
May 13 23:52:38.863549 kernel: /init
May 13 23:52:38.863557 kernel: with environment:
May 13 23:52:38.863566 kernel: HOME=/
May 13 23:52:38.863576 kernel: TERM=linux
May 13 23:52:38.863585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:52:38.863595 systemd[1]: Successfully made /usr/ read-only.
May 13 23:52:38.863607 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:52:38.863616 systemd[1]: Detected virtualization kvm.
May 13 23:52:38.863625 systemd[1]: Detected architecture x86-64.
May 13 23:52:38.863634 systemd[1]: Running in initrd.
May 13 23:52:38.863643 systemd[1]: No hostname configured, using default hostname.
May 13 23:52:38.863655 systemd[1]: Hostname set to .
May 13 23:52:38.863664 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:52:38.863673 systemd[1]: Queued start job for default target initrd.target.
May 13 23:52:38.863699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:52:38.863709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:52:38.863720 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:52:38.863729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:52:38.863739 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:52:38.863752 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:52:38.863763 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:52:38.863773 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:52:38.863783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:52:38.863792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:52:38.863802 systemd[1]: Reached target paths.target - Path Units.
May 13 23:52:38.863814 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:52:38.863824 systemd[1]: Reached target swap.target - Swaps.
May 13 23:52:38.863836 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:52:38.863846 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:52:38.863855 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:52:38.863865 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:52:38.863878 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:52:38.863887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:52:38.863897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:52:38.863907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:52:38.863917 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:52:38.863926 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:52:38.863936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:52:38.863951 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:52:38.863970 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:52:38.863984 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:52:38.863998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:52:38.864011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:52:38.864025 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:52:38.864038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:52:38.864056 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:52:38.864105 systemd-journald[182]: Collecting audit messages is disabled.
May 13 23:52:38.864137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:52:38.864156 systemd-journald[182]: Journal started
May 13 23:52:38.864187 systemd-journald[182]: Runtime Journal (/run/log/journal/dc1235a585804d5ea61b1c9a21ed704c) is 4.9M, max 39.3M, 34.3M free.
May 13 23:52:38.866593 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:52:38.858571 systemd-modules-load[183]: Inserted module 'overlay'
May 13 23:52:38.874421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:52:38.910438 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:52:38.910476 kernel: Bridge firewalling registered
May 13 23:52:38.892081 systemd-modules-load[183]: Inserted module 'br_netfilter'
May 13 23:52:38.910283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:52:38.915404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:52:38.915874 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:52:38.919757 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:52:38.921929 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:52:38.924875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:52:38.934631 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:52:38.943340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:52:38.947946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:52:38.951864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:52:38.954685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:52:38.964932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:52:38.989960 systemd-resolved[216]: Positive Trust Anchors:
May 13 23:52:38.989972 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:52:38.990010 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:52:38.993371 systemd-resolved[216]: Defaulting to hostname 'linux'.
May 13 23:52:38.994425 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:52:38.995474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:52:38.998306 dracut-cmdline[219]: dracut-dracut-053
May 13 23:52:39.000531 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:52:39.083257 kernel: SCSI subsystem initialized
May 13 23:52:39.092243 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:52:39.102248 kernel: iscsi: registered transport (tcp)
May 13 23:52:39.122261 kernel: iscsi: registered transport (qla4xxx)
May 13 23:52:39.122329 kernel: QLogic iSCSI HBA Driver
May 13 23:52:39.164749 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:52:39.167395 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:52:39.202235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:52:39.204207 kernel: device-mapper: uevent: version 1.0.3
May 13 23:52:39.204256 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:52:39.246238 kernel: raid6: avx2x4 gen() 16374 MB/s
May 13 23:52:39.263250 kernel: raid6: avx2x2 gen() 16823 MB/s
May 13 23:52:39.280425 kernel: raid6: avx2x1 gen() 12736 MB/s
May 13 23:52:39.280472 kernel: raid6: using algorithm avx2x2 gen() 16823 MB/s
May 13 23:52:39.298276 kernel: raid6: .... xor() 15189 MB/s, rmw enabled
May 13 23:52:39.298332 kernel: raid6: using avx2x2 recovery algorithm
May 13 23:52:39.323248 kernel: xor: automatically using best checksumming function avx
May 13 23:52:39.490251 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:52:39.501118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:52:39.503430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:52:39.527488 systemd-udevd[402]: Using default interface naming scheme 'v255'.
May 13 23:52:39.532823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:52:39.535964 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:52:39.556250 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
May 13 23:52:39.588559 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:52:39.592366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:52:39.670168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:52:39.674022 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:52:39.704965 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:52:39.707332 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:52:39.708132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:52:39.708888 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:52:39.711478 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:52:39.739014 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:52:39.759354 kernel: scsi host0: Virtio SCSI HBA
May 13 23:52:39.771525 kernel: ACPI: bus type USB registered
May 13 23:52:39.771588 kernel: usbcore: registered new interface driver usbfs
May 13 23:52:39.771601 kernel: usbcore: registered new interface driver hub
May 13 23:52:39.788242 kernel: cryptd: max_cpu_qlen set to 1000
May 13 23:52:39.792242 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 13 23:52:39.794241 kernel: usbcore: registered new device driver usb
May 13 23:52:39.798400 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 13 23:52:39.810313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:52:39.810378 kernel: GPT:9289727 != 125829119
May 13 23:52:39.810396 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:52:39.810414 kernel: GPT:9289727 != 125829119
May 13 23:52:39.810429 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:52:39.810445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:52:39.819660 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 13 23:52:39.819839 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 13 23:52:39.821771 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:52:39.822398 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:52:39.823336 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:52:39.824133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:52:39.824709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:52:39.825594 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:52:39.831789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:52:39.833740 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:52:39.842570 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 23:52:39.842607 kernel: AES CTR mode by8 optimization enabled
May 13 23:52:39.842621 kernel: libata version 3.00 loaded.
May 13 23:52:39.847276 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 23:52:39.850577 kernel: scsi host1: ata_piix
May 13 23:52:39.857335 kernel: scsi host2: ata_piix
May 13 23:52:39.864138 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 13 23:52:39.864173 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 13 23:52:39.901240 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (455)
May 13 23:52:39.906345 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (460)
May 13 23:52:39.909954 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:52:39.922760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:52:39.928042 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 13 23:52:39.928312 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 13 23:52:39.930469 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 13 23:52:39.930753 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 13 23:52:39.931839 kernel: hub 1-0:1.0: USB hub found
May 13 23:52:39.932665 kernel: hub 1-0:1.0: 2 ports detected
May 13 23:52:39.935478 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:52:39.943003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:52:39.943501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:52:39.952138 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:52:39.953358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:52:39.956401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:52:39.969168 disk-uuid[532]: Primary Header is updated.
May 13 23:52:39.969168 disk-uuid[532]: Secondary Entries is updated.
May 13 23:52:39.969168 disk-uuid[532]: Secondary Header is updated.
May 13 23:52:39.986238 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:52:39.984367 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:52:39.997237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:52:40.993377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:52:40.995250 disk-uuid[533]: The operation has completed successfully.
May 13 23:52:41.043848 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:52:41.044371 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:52:41.076916 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:52:41.089330 sh[562]: Success
May 13 23:52:41.101253 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 13 23:52:41.159053 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:52:41.163340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:52:41.170908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:52:41.182235 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8
May 13 23:52:41.182285 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 23:52:41.182298 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:52:41.182351 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:52:41.183591 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:52:41.191496 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:52:41.192390 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:52:41.193345 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:52:41.195372 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:52:41.223749 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:52:41.223809 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:52:41.223824 kernel: BTRFS info (device vda6): using free space tree
May 13 23:52:41.227257 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:52:41.233483 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:52:41.234500 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:52:41.236369 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:52:41.339356 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:52:41.343438 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:52:41.363434 ignition[655]: Ignition 2.20.0
May 13 23:52:41.363959 ignition[655]: Stage: fetch-offline
May 13 23:52:41.364300 ignition[655]: no configs at "/usr/lib/ignition/base.d"
May 13 23:52:41.364311 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 13 23:52:41.364773 ignition[655]: parsed url from cmdline: ""
May 13 23:52:41.364777 ignition[655]: no config URL provided
May 13 23:52:41.364783 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:52:41.364793 ignition[655]: no config at "/usr/lib/ignition/user.ign"
May 13 23:52:41.364800 ignition[655]: failed to fetch config: resource requires networking
May 13 23:52:41.364969 ignition[655]: Ignition finished successfully
May 13 23:52:41.367518 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:52:41.382889 systemd-networkd[744]: lo: Link UP
May 13 23:52:41.382900 systemd-networkd[744]: lo: Gained carrier
May 13 23:52:41.385516 systemd-networkd[744]: Enumeration completed
May 13 23:52:41.385900 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 13 23:52:41.385905 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 13 23:52:41.386794 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:52:41.386798 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:52:41.387439 systemd-networkd[744]: eth0: Link UP
May 13 23:52:41.387443 systemd-networkd[744]: eth0: Gained carrier
May 13 23:52:41.387451 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 13 23:52:41.388468 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:52:41.389807 systemd[1]: Reached target network.target - Network.
May 13 23:52:41.391158 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 23:52:41.393460 systemd-networkd[744]: eth1: Link UP
May 13 23:52:41.393464 systemd-networkd[744]: eth1: Gained carrier
May 13 23:52:41.393472 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:52:41.404277 systemd-networkd[744]: eth0: DHCPv4 address 137.184.15.248/20, gateway 137.184.0.1 acquired from 169.254.169.253
May 13 23:52:41.408289 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.24/20 acquired from 169.254.169.253
May 13 23:52:41.419601 ignition[752]: Ignition 2.20.0
May 13 23:52:41.419611 ignition[752]: Stage: fetch
May 13 23:52:41.419777 ignition[752]: no configs at "/usr/lib/ignition/base.d"
May 13 23:52:41.419788 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 13 23:52:41.419886 ignition[752]: parsed url from cmdline: ""
May 13 23:52:41.419890 ignition[752]: no config URL provided
May 13 23:52:41.419895 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:52:41.419903 ignition[752]: no config at "/usr/lib/ignition/user.ign"
May 13 23:52:41.419925 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 13 23:52:41.444303 ignition[752]: GET result: OK
May 13 23:52:41.444540 ignition[752]: parsing config with SHA512: 7e4da6b18963975784f6fab0431fb7fa11320b68e48acc064475ad6ff64721b8d78fbc07d745ae57b89a3766dc2869e74c3b7a7d8cabdb0b0817f516087da0e7
May 13 23:52:41.450375 unknown[752]: fetched base config from "system"
May 13 23:52:41.450385 unknown[752]: fetched base config from "system"
May 13 23:52:41.450392 unknown[752]: fetched user config from "digitalocean"
May 13 23:52:41.451140 ignition[752]: fetch: fetch complete
May 13 23:52:41.452768 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 23:52:41.451145 ignition[752]: fetch: fetch passed
May 13 23:52:41.451228 ignition[752]: Ignition finished successfully
May 13 23:52:41.454562 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:52:41.488304 ignition[760]: Ignition 2.20.0
May 13 23:52:41.488315 ignition[760]: Stage: kargs
May 13 23:52:41.488499 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 13 23:52:41.488509 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 13 23:52:41.489308 ignition[760]: kargs: kargs passed
May 13 23:52:41.489355 ignition[760]: Ignition finished successfully
May 13 23:52:41.490800 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:52:41.493364 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:52:41.521597 ignition[767]: Ignition 2.20.0
May 13 23:52:41.521609 ignition[767]: Stage: disks
May 13 23:52:41.521832 ignition[767]: no configs at "/usr/lib/ignition/base.d"
May 13 23:52:41.521847 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 13 23:52:41.522710 ignition[767]: disks: disks passed
May 13 23:52:41.524250 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:52:41.522757 ignition[767]: Ignition finished successfully
May 13 23:52:41.527751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:52:41.528113 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:52:41.528709 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:52:41.529285 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:52:41.529969 systemd[1]: Reached target basic.target - Basic System.
May 13 23:52:41.532360 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:52:41.556668 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:52:41.559957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:52:41.562277 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:52:41.669235 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none.
May 13 23:52:41.669815 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:52:41.671026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:52:41.673199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:52:41.677305 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:52:41.681305 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
May 13 23:52:41.685663 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 13 23:52:41.686833 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:52:41.696265 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (783)
May 13 23:52:41.696290 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc
May 13 23:52:41.696304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 23:52:41.696322 kernel: BTRFS info (device vda6): using free space tree
May 13 23:52:41.686867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:52:41.698253 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:52:41.700609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:52:41.701440 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:52:41.705367 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:52:41.770156 coreos-metadata[785]: May 13 23:52:41.769 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:52:41.775232 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:52:41.780670 coreos-metadata[786]: May 13 23:52:41.780 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:52:41.781760 coreos-metadata[785]: May 13 23:52:41.781 INFO Fetch successful May 13 23:52:41.783276 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory May 13 23:52:41.787835 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. May 13 23:52:41.788450 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. May 13 23:52:41.790496 coreos-metadata[786]: May 13 23:52:41.790 INFO Fetch successful May 13 23:52:41.791552 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:52:41.795686 coreos-metadata[786]: May 13 23:52:41.795 INFO wrote hostname ci-4284.0.0-n-c1d987daf9 to /sysroot/etc/hostname May 13 23:52:41.797099 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:52:41.798120 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:52:41.879138 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:52:41.880915 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:52:41.882341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:52:41.901245 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:52:41.918660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:52:41.933093 ignition[906]: INFO : Ignition 2.20.0 May 13 23:52:41.933093 ignition[906]: INFO : Stage: mount May 13 23:52:41.934022 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:52:41.934022 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:52:41.935278 ignition[906]: INFO : mount: mount passed May 13 23:52:41.935278 ignition[906]: INFO : Ignition finished successfully May 13 23:52:41.936089 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:52:41.937795 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:52:42.180676 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:52:42.182796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:52:42.202231 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (917) May 13 23:52:42.204798 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:52:42.204830 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:52:42.204842 kernel: BTRFS info (device vda6): using free space tree May 13 23:52:42.218961 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:52:42.218658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:52:42.245320 ignition[933]: INFO : Ignition 2.20.0 May 13 23:52:42.245320 ignition[933]: INFO : Stage: files May 13 23:52:42.245320 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:52:42.245320 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:52:42.247431 ignition[933]: DEBUG : files: compiled without relabeling support, skipping May 13 23:52:42.247431 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:52:42.247431 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:52:42.250012 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:52:42.250682 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:52:42.250682 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:52:42.250455 unknown[933]: wrote ssh authorized keys file for user: core May 13 23:52:42.252230 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 23:52:42.252230 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 23:52:42.328451 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:52:42.424336 systemd-networkd[744]: eth0: Gained IPv6LL May 13 23:52:42.708721 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 23:52:42.708721 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:52:42.710800 ignition[933]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:52:42.710800 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 23:52:42.744445 systemd-networkd[744]: eth1: Gained IPv6LL May 13 23:52:42.996696 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:52:43.356954 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:52:43.356954 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:52:43.359917 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:52:43.359917 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:52:43.359917 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:52:43.359917 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:52:43.359917 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:52:43.359917 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:52:43.359917 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:52:43.359917 ignition[933]: INFO : files: files passed May 13 23:52:43.359917 ignition[933]: INFO : Ignition finished successfully May 13 23:52:43.361575 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:52:43.363752 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:52:43.366327 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:52:43.378116 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:52:43.378205 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:52:43.385306 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:52:43.385306 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:52:43.387565 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:52:43.389194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:52:43.389788 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:52:43.391013 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:52:43.435427 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
May 13 23:52:43.435538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:52:43.436354 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:52:43.436772 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:52:43.437410 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:52:43.439332 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:52:43.468295 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:52:43.470765 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:52:43.490014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:52:43.490852 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:52:43.491707 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:52:43.492375 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:52:43.492504 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:52:43.493877 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:52:43.494621 systemd[1]: Stopped target basic.target - Basic System. May 13 23:52:43.495387 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:52:43.496177 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:52:43.497022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:52:43.497486 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:52:43.498171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:52:43.498827 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:52:43.499465 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:52:43.500045 systemd[1]: Stopped target swap.target - Swaps. May 13 23:52:43.500547 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:52:43.500670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:52:43.501355 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:52:43.501826 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:52:43.502410 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:52:43.502499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:52:43.503056 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:52:43.503191 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:52:43.503913 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:52:43.504013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:52:43.504719 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:52:43.504804 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:52:43.505302 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 13 23:52:43.505422 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 13 23:52:43.507414 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:52:43.507887 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:52:43.509326 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:52:43.512361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:52:43.512998 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:52:43.513112 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:52:43.514550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:52:43.514646 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:52:43.517951 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:52:43.520267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:52:43.531123 ignition[987]: INFO : Ignition 2.20.0 May 13 23:52:43.531123 ignition[987]: INFO : Stage: umount May 13 23:52:43.533426 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:52:43.534278 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:52:43.536082 ignition[987]: INFO : umount: umount passed May 13 23:52:43.536082 ignition[987]: INFO : Ignition finished successfully May 13 23:52:43.535950 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:52:43.536068 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:52:43.536872 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:52:43.536963 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:52:43.540319 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:52:43.540363 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:52:43.540888 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:52:43.540924 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:52:43.541451 systemd[1]: Stopped target network.target - Network. May 13 23:52:43.541995 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:52:43.542038 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:52:43.542605 systemd[1]: Stopped target paths.target - Path Units. May 13 23:52:43.543127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:52:43.547257 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:52:43.547598 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:52:43.548306 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:52:43.548862 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:52:43.548899 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:52:43.549391 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:52:43.549422 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:52:43.549937 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:52:43.549978 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:52:43.550502 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:52:43.550536 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
May 13 23:52:43.551151 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:52:43.551856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:52:43.553597 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:52:43.554172 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:52:43.554354 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:52:43.556429 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:52:43.556527 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:52:43.557266 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:52:43.557368 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:52:43.560429 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:52:43.560950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:52:43.561035 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:52:43.562477 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:52:43.564168 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:52:43.564282 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:52:43.565779 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:52:43.565970 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:52:43.566001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:52:43.568320 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:52:43.569007 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:52:43.569432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:52:43.570397 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:52:43.570756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:52:43.571641 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:52:43.571683 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:52:43.572317 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:52:43.575053 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:52:43.584624 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:52:43.584769 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:52:43.585490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:52:43.585536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:52:43.586135 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:52:43.586168 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:52:43.586780 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:52:43.586822 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:52:43.587689 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:52:43.587727 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
May 13 23:52:43.588305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:52:43.588342 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:52:43.590361 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:52:43.590707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:52:43.590756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:52:43.593314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:52:43.593359 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:52:43.595266 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:52:43.595350 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:52:43.607955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:52:43.608099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:52:43.609647 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:52:43.611517 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:52:43.629602 systemd[1]: Switching root. May 13 23:52:43.664812 systemd-journald[182]: Journal stopped May 13 23:52:44.767558 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). May 13 23:52:44.767651 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:52:44.767672 kernel: SELinux: policy capability open_perms=1 May 13 23:52:44.767693 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:52:44.767705 kernel: SELinux: policy capability always_check_network=0 May 13 23:52:44.767726 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:52:44.767744 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:52:44.767764 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:52:44.767776 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:52:44.767788 kernel: audit: type=1403 audit(1747180363.803:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:52:44.767801 systemd[1]: Successfully loaded SELinux policy in 35.699ms. May 13 23:52:44.767826 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.206ms. May 13 23:52:44.767840 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:52:44.767853 systemd[1]: Detected virtualization kvm. May 13 23:52:44.767866 systemd[1]: Detected architecture x86-64. May 13 23:52:44.767884 systemd[1]: Detected first boot. May 13 23:52:44.767896 systemd[1]: Hostname set to <ci-4284.0.0-n-c1d987daf9>. May 13 23:52:44.767909 systemd[1]: Initializing machine ID from VM UUID. May 13 23:52:44.767922 zram_generator::config[1031]: No configuration found. May 13 23:52:44.767936 kernel: Guest personality initialized and is inactive May 13 23:52:44.767953 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 23:52:44.767964 kernel: Initialized host personality May 13 23:52:44.767976 kernel: NET: Registered PF_VSOCK protocol family May 13 23:52:44.767988 systemd[1]: Populated /etc with preset unit settings.
May 13 23:52:44.768007 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:52:44.768021 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:52:44.768033 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:52:44.768046 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:52:44.768073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:52:44.768086 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:52:44.768099 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:52:44.768111 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:52:44.768124 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:52:44.768143 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:52:44.768156 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:52:44.768169 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:52:44.768182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:52:44.768195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:52:44.768207 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:52:44.776256 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:52:44.776304 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:52:44.776320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:52:44.776333 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:52:44.776347 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:52:44.776360 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:52:44.776385 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:52:44.776398 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:52:44.776417 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:52:44.776430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:52:44.776450 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:52:44.776462 systemd[1]: Reached target slices.target - Slice Units. May 13 23:52:44.776475 systemd[1]: Reached target swap.target - Swaps. May 13 23:52:44.776488 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:52:44.776500 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:52:44.776514 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:52:44.776527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:52:44.776539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:52:44.776558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 23:52:44.776571 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:52:44.776583 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:52:44.776596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:52:44.776609 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:52:44.776621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:44.776635 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:52:44.776646 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:52:44.776666 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:52:44.776679 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:52:44.776693 systemd[1]: Reached target machines.target - Containers. May 13 23:52:44.776705 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:52:44.776717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:52:44.776730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:52:44.776743 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:52:44.776755 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:52:44.776768 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:52:44.776786 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:52:44.776798 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:52:44.776810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:52:44.776823 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:52:44.776836 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:52:44.776848 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:52:44.776860 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:52:44.776872 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:52:44.776891 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:52:44.776905 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:52:44.776919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:52:44.776931 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:52:44.776945 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:52:44.776957 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:52:44.776970 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
May 13 23:52:44.776982 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:52:44.776995 systemd[1]: Stopped verity-setup.service. May 13 23:52:44.777014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:44.777027 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:52:44.777045 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:52:44.777058 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:52:44.777072 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:52:44.777085 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:52:44.777098 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:52:44.777111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:52:44.777123 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:52:44.777137 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:52:44.777155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:52:44.777168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:52:44.777180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:52:44.777193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:52:44.777206 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:52:44.777228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:52:44.777241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:52:44.777254 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:52:44.777272 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:52:44.777284 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:52:44.777297 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:52:44.777310 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:52:44.777323 kernel: loop: module loaded May 13 23:52:44.777337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:52:44.777349 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:52:44.777363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:52:44.777375 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:52:44.777394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:52:44.777412 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:52:44.777424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:52:44.777437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:52:44.777450 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 13 23:52:44.777462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:52:44.777475 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:52:44.777488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:52:44.777500 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:52:44.777519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:52:44.777565 systemd-journald[1108]: Collecting audit messages is disabled. May 13 23:52:44.777598 systemd-journald[1108]: Journal started May 13 23:52:44.777624 systemd-journald[1108]: Runtime Journal (/run/log/journal/dc1235a585804d5ea61b1c9a21ed704c) is 4.9M, max 39.3M, 34.3M free. May 13 23:52:44.800273 kernel: ACPI: bus type drm_connector registered May 13 23:52:44.800356 kernel: fuse: init (API version 7.39) May 13 23:52:44.403986 systemd[1]: Queued start job for default target multi-user.target. May 13 23:52:44.415860 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:52:44.416377 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:52:44.811985 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:52:44.812071 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:52:44.815486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:52:44.816154 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:52:44.816967 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:52:44.817942 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:52:44.818669 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:52:44.838656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:52:44.850563 kernel: loop0: detected capacity change from 0 to 151640 May 13 23:52:44.867072 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:52:44.885325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:52:44.889473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:52:44.893444 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:52:44.896531 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:52:44.897492 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:52:44.901452 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:52:44.919243 kernel: loop1: detected capacity change from 0 to 218376 May 13 23:52:44.944819 systemd-journald[1108]: Time spent on flushing to /var/log/journal/dc1235a585804d5ea61b1c9a21ed704c is 67.767ms for 1004 entries. May 13 23:52:44.944819 systemd-journald[1108]: System Journal (/var/log/journal/dc1235a585804d5ea61b1c9a21ed704c) is 8M, max 195.6M, 187.6M free. May 13 23:52:45.045355 systemd-journald[1108]: Received client request to flush runtime journal. May 13 23:52:45.045562 kernel: loop2: detected capacity change from 0 to 8 May 13 23:52:45.045583 kernel: loop3: detected capacity change from 0 to 109808 May 13 23:52:44.961495 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
May 13 23:52:44.966323 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:52:45.013563 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:52:45.018373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:52:45.048077 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:52:45.054427 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 13 23:52:45.054445 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 13 23:52:45.062303 kernel: loop4: detected capacity change from 0 to 151640 May 13 23:52:45.069349 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:52:45.080834 kernel: loop5: detected capacity change from 0 to 218376 May 13 23:52:45.094238 kernel: loop6: detected capacity change from 0 to 8 May 13 23:52:45.103257 kernel: loop7: detected capacity change from 0 to 109808 May 13 23:52:45.127167 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 13 23:52:45.127812 (sd-merge)[1179]: Merged extensions into '/usr'. May 13 23:52:45.140818 systemd[1]: Reload requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:52:45.142117 systemd[1]: Reloading... May 13 23:52:45.275367 zram_generator::config[1211]: No configuration found. May 13 23:52:45.445246 ldconfig[1126]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:52:45.488062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:52:45.556851 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:52:45.557235 systemd[1]: Reloading finished in 413 ms. May 13 23:52:45.573101 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:52:45.573995 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:52:45.581345 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:52:45.589518 systemd[1]: Starting ensure-sysext.service... May 13 23:52:45.594344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:52:45.609082 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:52:45.618306 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... May 13 23:52:45.618324 systemd[1]: Reloading... May 13 23:52:45.641677 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:52:45.641963 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:52:45.642864 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:52:45.643120 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 13 23:52:45.643180 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 13 23:52:45.648375 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. 
May 13 23:52:45.648574 systemd-tmpfiles[1253]: Skipping /boot May 13 23:52:45.665356 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:52:45.665518 systemd-tmpfiles[1253]: Skipping /boot May 13 23:52:45.745251 zram_generator::config[1283]: No configuration found. May 13 23:52:45.870901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:52:45.940757 systemd[1]: Reloading finished in 322 ms. May 13 23:52:45.953079 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:52:45.959404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:52:45.966795 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:52:45.970165 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:52:45.974389 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:52:45.992780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:52:45.998471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:52:46.002372 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:52:46.009906 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.010370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:52:46.014722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:52:46.030015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:52:46.042112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:52:46.042604 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:52:46.042729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:52:46.042834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.052824 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:52:46.054270 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:52:46.062541 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.062791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:52:46.062950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 13 23:52:46.063030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:52:46.065858 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:52:46.066537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.069609 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:52:46.076300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:52:46.076477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:52:46.077324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:52:46.077493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:52:46.081876 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:52:46.082988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:52:46.093880 systemd[1]: Finished ensure-sysext.service. May 13 23:52:46.099047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.100310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:52:46.107317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:52:46.113733 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 13 23:52:46.115803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:52:46.126353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:52:46.127466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:52:46.127517 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:52:46.136491 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:52:46.136997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.138282 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:52:46.139932 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:52:46.145193 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:52:46.147670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:52:46.147913 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:52:46.148794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:52:46.161166 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 13 23:52:46.161384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:52:46.162653 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:52:46.168486 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:52:46.169151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:52:46.169348 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:52:46.170527 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:52:46.178727 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:52:46.182449 augenrules[1386]: No rules May 13 23:52:46.184717 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:52:46.185319 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:52:46.283763 systemd-networkd[1375]: lo: Link UP May 13 23:52:46.283774 systemd-networkd[1375]: lo: Gained carrier May 13 23:52:46.284590 systemd-networkd[1375]: Enumeration completed May 13 23:52:46.284701 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:52:46.287450 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:52:46.290364 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:52:46.326377 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:52:46.326872 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:52:46.341568 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:52:46.358496 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:52:46.365432 systemd-resolved[1332]: Positive Trust Anchors: May 13 23:52:46.365447 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:52:46.365502 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:52:46.373909 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 13 23:52:46.374264 systemd-resolved[1332]: Using system hostname 'ci-4284.0.0-n-c1d987daf9'. May 13 23:52:46.376162 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 13 23:52:46.381360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.381506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:52:46.383309 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 13 23:52:46.387443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:52:46.389571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:52:46.390091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:52:46.390205 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:52:46.390266 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:52:46.390283 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:52:46.390625 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:52:46.392413 systemd[1]: Reached target network.target - Network. May 13 23:52:46.392781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:52:46.410481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:52:46.410960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:52:46.423324 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1395) May 13 23:52:46.423867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:52:46.426291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:52:46.430001 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:52:46.431289 kernel: ISO 9660 Extensions: RRIP_1991A May 13 23:52:46.434261 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 13 23:52:46.437149 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:52:46.437360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:52:46.438730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:52:46.484534 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-1a:36:13:0b:a6:0e.network. May 13 23:52:46.486436 systemd-networkd[1375]: eth0: Link UP May 13 23:52:46.486524 systemd-networkd[1375]: eth0: Gained carrier May 13 23:52:46.491933 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. May 13 23:52:46.497877 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-16:09:75:31:b8:86.network. May 13 23:52:46.498421 systemd-networkd[1375]: eth1: Link UP May 13 23:52:46.498426 systemd-networkd[1375]: eth1: Gained carrier May 13 23:52:46.511258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 23:52:46.511861 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:52:46.514623 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 13 23:52:46.520271 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 23:52:46.527341 kernel: ACPI: button: Power Button [PWRF] May 13 23:52:46.530272 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 23:52:46.547512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:52:46.577258 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 13 23:52:46.579559 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 13 23:52:46.582853 kernel: Console: switching to colour dummy device 80x25 May 13 23:52:46.582900 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 13 23:52:46.582915 kernel: [drm] features: -context_init May 13 23:52:46.584252 kernel: [drm] number of scanouts: 1 May 13 23:52:46.585278 kernel: [drm] number of cap sets: 0 May 13 23:52:46.587248 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 13 23:52:46.592805 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 13 23:52:46.592875 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:52:46.599239 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 13 23:52:46.613245 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:52:46.623268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:52:46.641149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:52:46.641481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:52:46.647358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:52:46.663736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:52:46.664082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:52:46.669494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:52:47.215408 systemd-resolved[1332]: Clock change detected. Flushing caches. May 13 23:52:47.215590 systemd-timesyncd[1365]: Contacted time server 208.113.130.146:123 (0.flatcar.pool.ntp.org). May 13 23:52:47.216151 systemd-timesyncd[1365]: Initial clock synchronization to Tue 2025-05-13 23:52:47.215307 UTC. May 13 23:52:47.257870 kernel: EDAC MC: Ver: 3.0.0 May 13 23:52:47.268506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:52:47.291186 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:52:47.295112 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:52:47.317591 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:52:47.349072 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:52:47.351539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:52:47.352020 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:52:47.352222 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:52:47.352323 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:52:47.352594 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 13 23:52:47.352877 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:52:47.353103 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:52:47.353196 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:52:47.353225 systemd[1]: Reached target paths.target - Path Units. May 13 23:52:47.353283 systemd[1]: Reached target timers.target - Timer Units. May 13 23:52:47.355035 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:52:47.358576 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:52:47.363503 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:52:47.364109 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:52:47.364517 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:52:47.371568 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:52:47.372544 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:52:47.375068 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:52:47.377780 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:52:47.378259 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:52:47.378638 systemd[1]: Reached target basic.target - Basic System. May 13 23:52:47.381966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:52:47.381999 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:52:47.391821 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:52:47.394709 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:52:47.394828 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:52:47.401887 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:52:47.405853 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:52:47.414693 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:52:47.415211 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:52:47.420887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:52:47.426869 jq[1456]: false May 13 23:52:47.428110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:52:47.434873 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:52:47.440867 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:52:47.450844 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:52:47.453066 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:52:47.453650 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 13 23:52:47.458878 dbus-daemon[1455]: [system] SELinux support is enabled May 13 23:52:47.459429 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:52:47.469898 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:52:47.472689 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:52:47.477806 coreos-metadata[1454]: May 13 23:52:47.476 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:52:47.478441 coreos-metadata[1454]: May 13 23:52:47.478 INFO Fetch successful May 13 23:52:47.482770 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:52:47.486318 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:52:47.486540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:52:47.488062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:52:47.488257 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:52:47.497984 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:52:47.498057 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:52:47.501915 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:52:47.502006 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 13 23:52:47.502029 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:52:47.522124 extend-filesystems[1458]: Found loop4 May 13 23:52:47.522124 extend-filesystems[1458]: Found loop5 May 13 23:52:47.522124 extend-filesystems[1458]: Found loop6 May 13 23:52:47.522124 extend-filesystems[1458]: Found loop7 May 13 23:52:47.522124 extend-filesystems[1458]: Found vda May 13 23:52:47.522124 extend-filesystems[1458]: Found vda1 May 13 23:52:47.522124 extend-filesystems[1458]: Found vda2 May 13 23:52:47.522124 extend-filesystems[1458]: Found vda3 May 13 23:52:47.522124 extend-filesystems[1458]: Found usr May 13 23:52:47.522124 extend-filesystems[1458]: Found vda4 May 13 23:52:47.522124 extend-filesystems[1458]: Found vda6 May 13 23:52:47.522124 extend-filesystems[1458]: Found vda7 May 13 23:52:47.522124 extend-filesystems[1458]: Found vda9 May 13 23:52:47.522124 extend-filesystems[1458]: Checking size of /dev/vda9 May 13 23:52:47.596441 extend-filesystems[1458]: Resized partition /dev/vda9 May 13 23:52:47.596949 update_engine[1465]: I20250513 23:52:47.562361 1465 main.cc:92] Flatcar Update Engine starting May 13 23:52:47.596949 update_engine[1465]: I20250513 23:52:47.595561 1465 update_check_scheduler.cc:74] Next update check in 8m40s May 13 23:52:47.541706 systemd[1]: motdgen.service: Deactivated successfully. 
May 13 23:52:47.597305 extend-filesystems[1496]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:52:47.625368 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 13 23:52:47.625955 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1396) May 13 23:52:47.626006 tar[1473]: linux-amd64/LICENSE May 13 23:52:47.626006 tar[1473]: linux-amd64/helm May 13 23:52:47.541976 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:52:47.659293 jq[1466]: true May 13 23:52:47.572199 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:52:47.598545 systemd[1]: Started update-engine.service - Update Engine. May 13 23:52:47.640877 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:52:47.660567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:52:47.677893 jq[1493]: true May 13 23:52:47.661336 systemd-logind[1464]: New seat seat0. May 13 23:52:47.667534 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button) May 13 23:52:47.667554 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:52:47.669819 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:52:47.698843 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 13 23:52:47.709772 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:52:47.709772 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 8 May 13 23:52:47.709772 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 13 23:52:47.725883 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:52:47.743587 extend-filesystems[1458]: Resized filesystem in /dev/vda9 May 13 23:52:47.743587 extend-filesystems[1458]: Found vdb May 13 23:52:47.739255 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:52:47.739536 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:52:47.822591 bash[1523]: Updated "/home/core/.ssh/authorized_keys" May 13 23:52:47.830785 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:52:47.837219 systemd[1]: Starting sshkeys.service... May 13 23:52:47.880483 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:52:47.898102 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 13 23:52:47.905139 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 13 23:52:47.945375 coreos-metadata[1531]: May 13 23:52:47.945 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:52:47.957739 coreos-metadata[1531]: May 13 23:52:47.954 INFO Fetch successful May 13 23:52:47.965784 unknown[1531]: wrote ssh authorized keys file for user: core May 13 23:52:48.004455 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:52:48.006344 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys" May 13 23:52:48.007089 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 23:52:48.014133 systemd[1]: Finished sshkeys.service. 
May 13 23:52:48.062065 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:52:48.064108 containerd[1481]: time="2025-05-13T23:52:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:52:48.066705 containerd[1481]: time="2025-05-13T23:52:48.064972833Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:52:48.074417 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:52:48.078934 containerd[1481]: time="2025-05-13T23:52:48.078887975Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.058µs" May 13 23:52:48.079021 containerd[1481]: time="2025-05-13T23:52:48.079006625Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:52:48.079073 containerd[1481]: time="2025-05-13T23:52:48.079063833Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:52:48.079298 containerd[1481]: time="2025-05-13T23:52:48.079281115Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:52:48.079380 containerd[1481]: time="2025-05-13T23:52:48.079367236Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:52:48.079445 containerd[1481]: time="2025-05-13T23:52:48.079435102Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:52:48.079567 containerd[1481]: time="2025-05-13T23:52:48.079549408Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:52:48.079624 containerd[1481]: time="2025-05-13T23:52:48.079614011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:52:48.080049 containerd[1481]: time="2025-05-13T23:52:48.080029576Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:52:48.080111 containerd[1481]: time="2025-05-13T23:52:48.080101535Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:52:48.080184 containerd[1481]: time="2025-05-13T23:52:48.080169428Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:52:48.080325 containerd[1481]: time="2025-05-13T23:52:48.080231963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:52:48.080489 containerd[1481]: time="2025-05-13T23:52:48.080459551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:52:48.080978 containerd[1481]: time="2025-05-13T23:52:48.080954255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:52:48.081107 containerd[1481]: time="2025-05-13T23:52:48.081087388Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:52:48.081172 containerd[1481]: time="2025-05-13T23:52:48.081158162Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:52:48.081253 containerd[1481]: time="2025-05-13T23:52:48.081240602Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:52:48.081523 containerd[1481]: time="2025-05-13T23:52:48.081506819Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:52:48.081701 containerd[1481]: time="2025-05-13T23:52:48.081686238Z" level=info msg="metadata content store policy set" policy=shared May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.086972632Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087036070Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087072401Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087087502Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087101276Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087113347Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087135503Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087149186Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087159778Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087170302Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087179148Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087190735Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087318679Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:52:48.088335 containerd[1481]: time="2025-05-13T23:52:48.087339462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087353256Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087364849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087385045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087407646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087421093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087431793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087443143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087454605Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087464308Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087527472Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087540685Z" level=info msg="Start snapshots syncer" May 13 23:52:48.088670 containerd[1481]: time="2025-05-13T23:52:48.087559656Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:52:48.088935 containerd[1481]: time="2025-05-13T23:52:48.087795322Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:52:48.088935 containerd[1481]: time="2025-05-13T23:52:48.087848389Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.087922000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088036557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088069254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088083463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088096411Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088109714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088124723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088136441Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088161023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: 
time="2025-05-13T23:52:48.088173724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:52:48.089527 containerd[1481]: time="2025-05-13T23:52:48.088184214Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091348319Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091380416Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091391070Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091402696Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091411048Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091420572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091432550Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091451226Z" level=info msg="runtime interface created" May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091456799Z" level=info msg="created NRI interface" May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091467942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091484896Z" level=info msg="Connect containerd service" May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.091533458Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:52:48.092598 containerd[1481]: time="2025-05-13T23:52:48.092246384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:52:48.109555 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:52:48.109841 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:52:48.115067 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:52:48.146581 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:52:48.152872 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:52:48.156694 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:52:48.158346 systemd[1]: Reached target getty.target - Login Prompts. 
May 13 23:52:48.257472 containerd[1481]: time="2025-05-13T23:52:48.257398596Z" level=info msg="Start subscribing containerd event" May 13 23:52:48.258780 containerd[1481]: time="2025-05-13T23:52:48.257739206Z" level=info msg="Start recovering state" May 13 23:52:48.258780 containerd[1481]: time="2025-05-13T23:52:48.258003480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:52:48.258780 containerd[1481]: time="2025-05-13T23:52:48.258053899Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:52:48.259686 containerd[1481]: time="2025-05-13T23:52:48.259659254Z" level=info msg="Start event monitor" May 13 23:52:48.259792 containerd[1481]: time="2025-05-13T23:52:48.259779788Z" level=info msg="Start cni network conf syncer for default" May 13 23:52:48.259854 containerd[1481]: time="2025-05-13T23:52:48.259845345Z" level=info msg="Start streaming server" May 13 23:52:48.259922 containerd[1481]: time="2025-05-13T23:52:48.259899404Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:52:48.259964 containerd[1481]: time="2025-05-13T23:52:48.259956308Z" level=info msg="runtime interface starting up..." May 13 23:52:48.260026 containerd[1481]: time="2025-05-13T23:52:48.260004872Z" level=info msg="starting plugins..." May 13 23:52:48.260104 containerd[1481]: time="2025-05-13T23:52:48.260069462Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:52:48.262027 containerd[1481]: time="2025-05-13T23:52:48.261999481Z" level=info msg="containerd successfully booted in 0.198325s" May 13 23:52:48.263295 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:52:48.432912 tar[1473]: linux-amd64/README.md May 13 23:52:48.452152 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:52:48.592100 systemd-networkd[1375]: eth0: Gained IPv6LL May 13 23:52:48.595014 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:52:48.598693 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:52:48.602549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:52:48.607982 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:52:48.632126 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:52:49.040417 systemd-networkd[1375]: eth1: Gained IPv6LL May 13 23:52:49.505006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:52:49.506431 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:52:49.508749 systemd[1]: Startup finished in 824ms (kernel) + 5.123s (initrd) + 5.268s (userspace) = 11.217s. May 13 23:52:49.526660 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:52:50.036046 kubelet[1588]: E0513 23:52:50.035977 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:52:50.039158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:52:50.039335 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 23:52:50.039951 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 251.6M memory peak. May 13 23:52:57.129979 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:52:57.132391 systemd[1]: Started sshd@0-137.184.15.248:22-147.75.109.163:42974.service - OpenSSH per-connection server daemon (147.75.109.163:42974). May 13 23:52:57.214805 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 42974 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:57.217029 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:57.226874 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:52:57.227901 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:52:57.230318 systemd-logind[1464]: New session 1 of user core. May 13 23:52:57.261784 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:52:57.264126 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:52:57.276203 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:52:57.279151 systemd-logind[1464]: New session c1 of user core. May 13 23:52:57.414825 systemd[1604]: Queued start job for default target default.target. May 13 23:52:57.425832 systemd[1604]: Created slice app.slice - User Application Slice. May 13 23:52:57.425862 systemd[1604]: Reached target paths.target - Paths. May 13 23:52:57.425905 systemd[1604]: Reached target timers.target - Timers. May 13 23:52:57.427288 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:52:57.438400 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:52:57.438508 systemd[1604]: Reached target sockets.target - Sockets. May 13 23:52:57.438552 systemd[1604]: Reached target basic.target - Basic System. May 13 23:52:57.438592 systemd[1604]: Reached target default.target - Main User Target. May 13 23:52:57.438624 systemd[1604]: Startup finished in 152ms. May 13 23:52:57.438930 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:52:57.440485 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:52:57.505950 systemd[1]: Started sshd@1-137.184.15.248:22-147.75.109.163:42976.service - OpenSSH per-connection server daemon (147.75.109.163:42976). May 13 23:52:57.560285 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 42976 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:57.561694 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:57.566425 systemd-logind[1464]: New session 2 of user core. May 13 23:52:57.581915 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:52:57.642016 sshd[1617]: Connection closed by 147.75.109.163 port 42976 May 13 23:52:57.642534 sshd-session[1615]: pam_unix(sshd:session): session closed for user core May 13 23:52:57.656124 systemd[1]: sshd@1-137.184.15.248:22-147.75.109.163:42976.service: Deactivated successfully. May 13 23:52:57.658161 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:52:57.659863 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. May 13 23:52:57.662021 systemd[1]: Started sshd@2-137.184.15.248:22-147.75.109.163:42992.service - OpenSSH per-connection server daemon (147.75.109.163:42992). 
May 13 23:52:57.663006 systemd-logind[1464]: Removed session 2. May 13 23:52:57.717930 sshd[1622]: Accepted publickey for core from 147.75.109.163 port 42992 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:57.719276 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:57.725462 systemd-logind[1464]: New session 3 of user core. May 13 23:52:57.730874 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:52:57.787139 sshd[1625]: Connection closed by 147.75.109.163 port 42992 May 13 23:52:57.787682 sshd-session[1622]: pam_unix(sshd:session): session closed for user core May 13 23:52:57.798252 systemd[1]: sshd@2-137.184.15.248:22-147.75.109.163:42992.service: Deactivated successfully. May 13 23:52:57.799946 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:52:57.800555 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. May 13 23:52:57.802562 systemd[1]: Started sshd@3-137.184.15.248:22-147.75.109.163:43008.service - OpenSSH per-connection server daemon (147.75.109.163:43008). May 13 23:52:57.805099 systemd-logind[1464]: Removed session 3. May 13 23:52:57.859787 sshd[1630]: Accepted publickey for core from 147.75.109.163 port 43008 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:57.861153 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:57.866218 systemd-logind[1464]: New session 4 of user core. May 13 23:52:57.878916 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:52:57.940736 sshd[1633]: Connection closed by 147.75.109.163 port 43008 May 13 23:52:57.939610 sshd-session[1630]: pam_unix(sshd:session): session closed for user core May 13 23:52:57.951169 systemd[1]: sshd@3-137.184.15.248:22-147.75.109.163:43008.service: Deactivated successfully. May 13 23:52:57.952811 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:52:57.954896 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. May 13 23:52:57.955954 systemd[1]: Started sshd@4-137.184.15.248:22-147.75.109.163:43012.service - OpenSSH per-connection server daemon (147.75.109.163:43012). May 13 23:52:57.957062 systemd-logind[1464]: Removed session 4. May 13 23:52:58.017309 sshd[1638]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:58.018641 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:58.023375 systemd-logind[1464]: New session 5 of user core. May 13 23:52:58.030882 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:52:58.095890 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:52:58.096172 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:52:58.111749 sudo[1642]: pam_unix(sudo:session): session closed for user root May 13 23:52:58.114850 sshd[1641]: Connection closed by 147.75.109.163 port 43012 May 13 23:52:58.115434 sshd-session[1638]: pam_unix(sshd:session): session closed for user core May 13 23:52:58.127264 systemd[1]: sshd@4-137.184.15.248:22-147.75.109.163:43012.service: Deactivated successfully. May 13 23:52:58.129361 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:52:58.130890 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. 
May 13 23:52:58.132373 systemd[1]: Started sshd@5-137.184.15.248:22-147.75.109.163:43438.service - OpenSSH per-connection server daemon (147.75.109.163:43438). May 13 23:52:58.133453 systemd-logind[1464]: Removed session 5. May 13 23:52:58.187549 sshd[1647]: Accepted publickey for core from 147.75.109.163 port 43438 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:58.188842 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:58.193376 systemd-logind[1464]: New session 6 of user core. May 13 23:52:58.203868 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:52:58.261647 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:52:58.261953 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:52:58.265263 sudo[1652]: pam_unix(sudo:session): session closed for user root May 13 23:52:58.270555 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:52:58.270847 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:52:58.281378 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:52:58.317672 augenrules[1674]: No rules May 13 23:52:58.319083 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:52:58.319331 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:52:58.320354 sudo[1651]: pam_unix(sudo:session): session closed for user root May 13 23:52:58.323321 sshd[1650]: Connection closed by 147.75.109.163 port 43438 May 13 23:52:58.323737 sshd-session[1647]: pam_unix(sshd:session): session closed for user core May 13 23:52:58.336010 systemd[1]: sshd@5-137.184.15.248:22-147.75.109.163:43438.service: Deactivated successfully. May 13 23:52:58.337482 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:52:58.338861 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. May 13 23:52:58.340408 systemd[1]: Started sshd@6-137.184.15.248:22-147.75.109.163:43452.service - OpenSSH per-connection server daemon (147.75.109.163:43452). May 13 23:52:58.341344 systemd-logind[1464]: Removed session 6. May 13 23:52:58.397520 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 43452 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:52:58.398760 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:52:58.403368 systemd-logind[1464]: New session 7 of user core. May 13 23:52:58.410852 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:52:58.468067 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:52:58.468699 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:52:58.872915 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 23:52:58.888282 (dockerd)[1704]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:52:59.244062 dockerd[1704]: time="2025-05-13T23:52:59.243938359Z" level=info msg="Starting up" May 13 23:52:59.246579 dockerd[1704]: time="2025-05-13T23:52:59.246150490Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:52:59.278758 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1594543446-merged.mount: Deactivated successfully. May 13 23:52:59.304013 dockerd[1704]: time="2025-05-13T23:52:59.303791708Z" level=info msg="Loading containers: start." May 13 23:52:59.450743 kernel: Initializing XFRM netlink socket May 13 23:52:59.514441 systemd-networkd[1375]: docker0: Link UP May 13 23:52:59.567337 dockerd[1704]: time="2025-05-13T23:52:59.567300786Z" level=info msg="Loading containers: done." May 13 23:52:59.583018 dockerd[1704]: time="2025-05-13T23:52:59.582984383Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:52:59.583262 dockerd[1704]: time="2025-05-13T23:52:59.583244751Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:52:59.583407 dockerd[1704]: time="2025-05-13T23:52:59.583393575Z" level=info msg="Daemon has completed initialization" May 13 23:52:59.611085 dockerd[1704]: time="2025-05-13T23:52:59.611028208Z" level=info msg="API listen on /run/docker.sock" May 13 23:52:59.611629 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:53:00.277270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:53:00.279534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:00.503381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:00.510252 containerd[1481]: time="2025-05-13T23:53:00.509576992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 23:53:00.511173 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:53:00.588882 kubelet[1917]: E0513 23:53:00.588689 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:53:00.593509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:53:00.593658 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:53:00.594017 systemd[1]: kubelet.service: Consumed 262ms CPU time, 102.1M memory peak. May 13 23:53:01.043218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267856361.mount: Deactivated successfully. 
May 13 23:53:02.005173 containerd[1481]: time="2025-05-13T23:53:02.004329741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:02.006243 containerd[1481]: time="2025-05-13T23:53:02.006159768Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 13 23:53:02.006988 containerd[1481]: time="2025-05-13T23:53:02.006962450Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:02.009513 containerd[1481]: time="2025-05-13T23:53:02.009484633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:02.010505 containerd[1481]: time="2025-05-13T23:53:02.010480390Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.50082397s" May 13 23:53:02.010850 containerd[1481]: time="2025-05-13T23:53:02.010830544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 23:53:02.011483 containerd[1481]: time="2025-05-13T23:53:02.011445000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 23:53:03.244691 containerd[1481]: time="2025-05-13T23:53:03.244556224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:03.246151 containerd[1481]: time="2025-05-13T23:53:03.246077991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 13 23:53:03.246870 containerd[1481]: time="2025-05-13T23:53:03.246809814Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:03.249520 containerd[1481]: time="2025-05-13T23:53:03.248490864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:03.249520 containerd[1481]: time="2025-05-13T23:53:03.249371360Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.237784178s" May 13 23:53:03.249520 containerd[1481]: time="2025-05-13T23:53:03.249397093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 23:53:03.250091 
containerd[1481]: time="2025-05-13T23:53:03.250027537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 23:53:04.286671 containerd[1481]: time="2025-05-13T23:53:04.286611226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:04.288423 containerd[1481]: time="2025-05-13T23:53:04.288359165Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 13 23:53:04.289747 containerd[1481]: time="2025-05-13T23:53:04.289219954Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:04.291292 containerd[1481]: time="2025-05-13T23:53:04.291249793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:04.292436 containerd[1481]: time="2025-05-13T23:53:04.292093443Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.04191245s" May 13 23:53:04.292436 containerd[1481]: time="2025-05-13T23:53:04.292124690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 23:53:04.292568 containerd[1481]: time="2025-05-13T23:53:04.292553104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 23:53:05.212977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536462121.mount: Deactivated successfully. 
May 13 23:53:05.622454 containerd[1481]: time="2025-05-13T23:53:05.622327375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:05.624041 containerd[1481]: time="2025-05-13T23:53:05.623807522Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 13 23:53:05.624769 containerd[1481]: time="2025-05-13T23:53:05.624451279Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:05.627028 containerd[1481]: time="2025-05-13T23:53:05.627006906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:05.628060 containerd[1481]: time="2025-05-13T23:53:05.628024200Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.335445279s" May 13 23:53:05.628155 containerd[1481]: time="2025-05-13T23:53:05.628138930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 23:53:05.628837 containerd[1481]: time="2025-05-13T23:53:05.628645231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 23:53:06.114569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438463427.mount: Deactivated successfully. 
May 13 23:53:06.801620 containerd[1481]: time="2025-05-13T23:53:06.800802175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:06.802626 containerd[1481]: time="2025-05-13T23:53:06.802572284Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 13 23:53:06.803568 containerd[1481]: time="2025-05-13T23:53:06.803539568Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:06.805839 containerd[1481]: time="2025-05-13T23:53:06.805805960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:06.806793 containerd[1481]: time="2025-05-13T23:53:06.806768159Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.178097454s" May 13 23:53:06.806980 containerd[1481]: time="2025-05-13T23:53:06.806890003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 23:53:06.807884 containerd[1481]: time="2025-05-13T23:53:06.807811531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:53:06.809193 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 13 23:53:07.231266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022822211.mount: Deactivated successfully. 
May 13 23:53:07.235819 containerd[1481]: time="2025-05-13T23:53:07.235774409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:53:07.236432 containerd[1481]: time="2025-05-13T23:53:07.236389069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:53:07.236893 containerd[1481]: time="2025-05-13T23:53:07.236872218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:53:07.238900 containerd[1481]: time="2025-05-13T23:53:07.238663941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:53:07.239253 containerd[1481]: time="2025-05-13T23:53:07.239232164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 431.25403ms" May 13 23:53:07.239316 containerd[1481]: time="2025-05-13T23:53:07.239258656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:53:07.240214 containerd[1481]: time="2025-05-13T23:53:07.239643830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 23:53:07.684982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80363510.mount: Deactivated successfully. 
May 13 23:53:09.183362 containerd[1481]: time="2025-05-13T23:53:09.183314514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:09.184782 containerd[1481]: time="2025-05-13T23:53:09.184561961Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 13 23:53:09.184782 containerd[1481]: time="2025-05-13T23:53:09.184739045Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:09.188196 containerd[1481]: time="2025-05-13T23:53:09.187135004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:09.188196 containerd[1481]: time="2025-05-13T23:53:09.188058051Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.948390742s" May 13 23:53:09.188196 containerd[1481]: time="2025-05-13T23:53:09.188087848Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 23:53:09.903914 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 13 23:53:10.844216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:53:10.848709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:10.981874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:10.990169 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:53:11.055757 kubelet[2133]: E0513 23:53:11.055155 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:53:11.058106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:53:11.058624 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:53:11.059128 systemd[1]: kubelet.service: Consumed 157ms CPU time, 104.3M memory peak. May 13 23:53:11.968546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:11.968712 systemd[1]: kubelet.service: Consumed 157ms CPU time, 104.3M memory peak. May 13 23:53:11.971028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:12.006067 systemd[1]: Reload requested from client PID 2148 ('systemctl') (unit session-7.scope)... May 13 23:53:12.006082 systemd[1]: Reloading... May 13 23:53:12.109751 zram_generator::config[2192]: No configuration found. 
May 13 23:53:12.235112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:53:12.352750 systemd[1]: Reloading finished in 346 ms. May 13 23:53:12.416170 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:53:12.416444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:12.416510 systemd[1]: kubelet.service: Consumed 101ms CPU time, 91.8M memory peak. May 13 23:53:12.419455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:12.553018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:12.566061 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:53:12.618261 kubelet[2247]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:53:12.618261 kubelet[2247]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:53:12.618261 kubelet[2247]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:53:12.618661 kubelet[2247]: I0513 23:53:12.618369 2247 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:53:13.052864 kubelet[2247]: I0513 23:53:13.052754 2247 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:53:13.052864 kubelet[2247]: I0513 23:53:13.052787 2247 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:53:13.053473 kubelet[2247]: I0513 23:53:13.053449 2247 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:53:13.075613 kubelet[2247]: I0513 23:53:13.075265 2247 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:53:13.076926 kubelet[2247]: E0513 23:53:13.076852 2247 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.15.248:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.15.248:6443: connect: connection refused" logger="UnhandledError" May 13 23:53:13.094993 kubelet[2247]: I0513 23:53:13.094961 2247 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:53:13.100130 kubelet[2247]: I0513 23:53:13.100111 2247 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:53:13.101649 kubelet[2247]: I0513 23:53:13.101590 2247 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:53:13.101972 kubelet[2247]: I0513 23:53:13.101655 2247 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-c1d987daf9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:53:13.102146 kubelet[2247]: I0513 23:53:13.101987 2247 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:53:13.102146 kubelet[2247]: I0513 23:53:13.102000 2247 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:53:13.102204 kubelet[2247]: I0513 23:53:13.102192 2247 state_mem.go:36] "Initialized new in-memory state store" May 13 23:53:13.106443 kubelet[2247]: I0513 23:53:13.106420 2247 kubelet.go:446] "Attempting to sync node with API server" May 13 23:53:13.106443 kubelet[2247]: I0513 23:53:13.106446 2247 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:53:13.106518 kubelet[2247]: I0513 23:53:13.106480 2247 kubelet.go:352] "Adding apiserver pod source" May 13 23:53:13.106518 kubelet[2247]: I0513 23:53:13.106502 2247 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:53:13.118831 kubelet[2247]: I0513 23:53:13.118614 2247 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:53:13.119742 kubelet[2247]: W0513 23:53:13.119308 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.15.248:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-c1d987daf9&limit=500&resourceVersion=0": dial tcp 137.184.15.248:6443: connect: connection refused May 13 23:53:13.119742 kubelet[2247]: E0513 23:53:13.119348 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://137.184.15.248:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-c1d987daf9&limit=500&resourceVersion=0\": dial tcp 137.184.15.248:6443: connect: connection refused" logger="UnhandledError" May 13 23:53:13.119742 kubelet[2247]: W0513 23:53:13.119415 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.15.248:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.15.248:6443: connect: connection refused May 13 23:53:13.119742 kubelet[2247]: E0513 23:53:13.119480 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.15.248:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.15.248:6443: connect: connection refused" logger="UnhandledError" May 13 23:53:13.121848 kubelet[2247]: I0513 23:53:13.121822 2247 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:53:13.122491 kubelet[2247]: W0513 23:53:13.122469 2247 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:53:13.123556 kubelet[2247]: I0513 23:53:13.123537 2247 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:53:13.123611 kubelet[2247]: I0513 23:53:13.123586 2247 server.go:1287] "Started kubelet" May 13 23:53:13.127223 kubelet[2247]: I0513 23:53:13.127203 2247 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:53:13.133692 kubelet[2247]: I0513 23:53:13.133634 2247 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:53:13.135051 kubelet[2247]: I0513 23:53:13.134793 2247 server.go:490] "Adding debug handlers to kubelet server" May 13 23:53:13.136293 kubelet[2247]: E0513 23:53:13.134907 2247 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.15.248:6443/api/v1/namespaces/default/events\": dial tcp 137.184.15.248:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-c1d987daf9.183f3b4f82a82197 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-c1d987daf9,UID:ci-4284.0.0-n-c1d987daf9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-c1d987daf9,},FirstTimestamp:2025-05-13 23:53:13.123553687 +0000 UTC m=+0.553572665,LastTimestamp:2025-05-13 23:53:13.123553687 +0000 UTC m=+0.553572665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-c1d987daf9,}" May 13 23:53:13.137753 kubelet[2247]: I0513 23:53:13.136510 2247 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:53:13.137753 kubelet[2247]: I0513 23:53:13.136960 2247 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:53:13.139703 kubelet[2247]: I0513 23:53:13.138169 2247 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:53:13.140706 kubelet[2247]: E0513 23:53:13.140064 2247 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" May 13 23:53:13.140706 kubelet[2247]: I0513 23:53:13.140113 2247 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:53:13.140706 kubelet[2247]: I0513 23:53:13.140426 2247 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:53:13.140706 kubelet[2247]: I0513 23:53:13.140504 2247 reconciler.go:26] "Reconciler: start to sync state" May 13 23:53:13.142657 kubelet[2247]: W0513 23:53:13.142616 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.15.248:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.15.248:6443: connect: connection refused May 13 23:53:13.142734 kubelet[2247]: E0513 23:53:13.142678 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.15.248:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.15.248:6443: connect: connection refused" logger="UnhandledError" May 13 23:53:13.143235 kubelet[2247]: I0513 23:53:13.143207 2247 factory.go:221] Registration of the systemd container factory successfully May 13 23:53:13.143302 kubelet[2247]: I0513 23:53:13.143283 2247 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:53:13.146735 kubelet[2247]: E0513 23:53:13.144776 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.15.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-c1d987daf9?timeout=10s\": dial tcp 137.184.15.248:6443: connect: connection refused" interval="200ms" May 13 23:53:13.146735 kubelet[2247]: I0513 23:53:13.145604 2247 factory.go:221] Registration of the containerd container factory successfully May 13 23:53:13.150438 kubelet[2247]: E0513 23:53:13.150412 2247 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:53:13.162104 kubelet[2247]: I0513 23:53:13.162064 2247 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:53:13.163515 kubelet[2247]: I0513 23:53:13.163498 2247 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:53:13.163601 kubelet[2247]: I0513 23:53:13.163593 2247 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:53:13.163678 kubelet[2247]: I0513 23:53:13.163669 2247 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:53:13.163736 kubelet[2247]: I0513 23:53:13.163729 2247 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:53:13.163841 kubelet[2247]: E0513 23:53:13.163821 2247 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:53:13.173008 kubelet[2247]: W0513 23:53:13.172985 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.15.248:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.15.248:6443: connect: connection refused May 13 23:53:13.173130 kubelet[2247]: E0513 23:53:13.173116 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.15.248:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.15.248:6443: connect: connection refused" logger="UnhandledError" May 13 23:53:13.178254 kubelet[2247]: I0513 23:53:13.178232 2247 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:53:13.178432 kubelet[2247]: I0513 23:53:13.178422 2247 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:53:13.178515 kubelet[2247]: I0513 23:53:13.178506 2247 state_mem.go:36] "Initialized new in-memory state store" May 13 23:53:13.182259 kubelet[2247]: I0513 23:53:13.182242 2247 policy_none.go:49] "None policy: Start" May 13 23:53:13.182414 kubelet[2247]: I0513 23:53:13.182404 2247 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:53:13.182476 kubelet[2247]: I0513 23:53:13.182469 2247 state_mem.go:35] "Initializing new in-memory state store" May 13 23:53:13.188066 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:53:13.200912 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:53:13.204573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:53:13.212743 kubelet[2247]: I0513 23:53:13.212447 2247 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:53:13.212978 kubelet[2247]: I0513 23:53:13.212917 2247 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:53:13.213028 kubelet[2247]: I0513 23:53:13.212950 2247 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:53:13.213240 kubelet[2247]: I0513 23:53:13.213226 2247 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:53:13.215607 kubelet[2247]: E0513 23:53:13.215542 2247 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:53:13.215607 kubelet[2247]: E0513 23:53:13.215586 2247 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-c1d987daf9\" not found" May 13 23:53:13.272213 systemd[1]: Created slice kubepods-burstable-pod6cecd52a8b1038a47f84cb8afaee8575.slice - libcontainer container kubepods-burstable-pod6cecd52a8b1038a47f84cb8afaee8575.slice. 
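With the "none" CPU and memory policies selected above, the container manager's remaining cgrouping work is the QoS hierarchy, and the slices systemd just created — kubepods.slice plus its burstable and besteffort children — are exactly that layout under the systemd driver on cgroup v2, both reported earlier in this log. A hypothetical inspection helper, assuming the standard /sys/fs/cgroup mount point:

    from pathlib import Path

    # Slice names as created above; layout assumes cgroup v2 mounted at
    # /sys/fs/cgroup with the systemd cgroup driver, both reported in this log.
    CGROUP_ROOT = Path("/sys/fs/cgroup")
    QOS_SLICES = [
        "kubepods.slice",  # Guaranteed pods sit directly under the root slice
        "kubepods.slice/kubepods-burstable.slice",
        "kubepods.slice/kubepods-besteffort.slice",
    ]

    for rel in QOS_SLICES:
        slice_dir = CGROUP_ROOT / rel
        if not slice_dir.is_dir():
            continue
        pods = [p.name for p in slice_dir.iterdir()
                if p.is_dir() and p.name.endswith(".slice")]
        print(f"{rel}: {len(pods)} pod slice(s)")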
May 13 23:53:13.280413 kubelet[2247]: E0513 23:53:13.280378 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.283345 systemd[1]: Created slice kubepods-burstable-podb41a9d0d6837414a8ba2c8032064ef47.slice - libcontainer container kubepods-burstable-podb41a9d0d6837414a8ba2c8032064ef47.slice. May 13 23:53:13.285430 kubelet[2247]: E0513 23:53:13.285405 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.287808 systemd[1]: Created slice kubepods-burstable-pod8e48d2a5a244fb5a83d4346ecb31e2e0.slice - libcontainer container kubepods-burstable-pod8e48d2a5a244fb5a83d4346ecb31e2e0.slice. May 13 23:53:13.289648 kubelet[2247]: E0513 23:53:13.289620 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.315060 kubelet[2247]: I0513 23:53:13.314939 2247 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.316961 kubelet[2247]: E0513 23:53:13.316934 2247 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://137.184.15.248:6443/api/v1/nodes\": dial tcp 137.184.15.248:6443: connect: connection refused" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.345560 kubelet[2247]: E0513 23:53:13.345510 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.15.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-c1d987daf9?timeout=10s\": dial tcp 137.184.15.248:6443: connect: connection refused" interval="400ms" May 13 23:53:13.442047 kubelet[2247]: I0513 23:53:13.441998 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cecd52a8b1038a47f84cb8afaee8575-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" (UID: \"6cecd52a8b1038a47f84cb8afaee8575\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442047 kubelet[2247]: I0513 23:53:13.442046 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442047 kubelet[2247]: I0513 23:53:13.442070 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442266 kubelet[2247]: I0513 23:53:13.442108 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " 
pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442266 kubelet[2247]: I0513 23:53:13.442132 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442266 kubelet[2247]: I0513 23:53:13.442147 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e48d2a5a244fb5a83d4346ecb31e2e0-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-c1d987daf9\" (UID: \"8e48d2a5a244fb5a83d4346ecb31e2e0\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442266 kubelet[2247]: I0513 23:53:13.442163 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cecd52a8b1038a47f84cb8afaee8575-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" (UID: \"6cecd52a8b1038a47f84cb8afaee8575\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442266 kubelet[2247]: I0513 23:53:13.442179 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cecd52a8b1038a47f84cb8afaee8575-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" (UID: \"6cecd52a8b1038a47f84cb8afaee8575\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.442388 kubelet[2247]: I0513 23:53:13.442198 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.518750 kubelet[2247]: I0513 23:53:13.518670 2247 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.519147 kubelet[2247]: E0513 23:53:13.519051 2247 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://137.184.15.248:6443/api/v1/nodes\": dial tcp 137.184.15.248:6443: connect: connection refused" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.581895 kubelet[2247]: E0513 23:53:13.581664 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:13.582686 containerd[1481]: time="2025-05-13T23:53:13.582652924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-c1d987daf9,Uid:6cecd52a8b1038a47f84cb8afaee8575,Namespace:kube-system,Attempt:0,}" May 13 23:53:13.586744 kubelet[2247]: E0513 23:53:13.586698 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:13.587732 containerd[1481]: time="2025-05-13T23:53:13.587603013Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-c1d987daf9,Uid:b41a9d0d6837414a8ba2c8032064ef47,Namespace:kube-system,Attempt:0,}" May 13 23:53:13.590991 kubelet[2247]: E0513 23:53:13.590792 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:13.591382 containerd[1481]: time="2025-05-13T23:53:13.591201712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-c1d987daf9,Uid:8e48d2a5a244fb5a83d4346ecb31e2e0,Namespace:kube-system,Attempt:0,}" May 13 23:53:13.670348 containerd[1481]: time="2025-05-13T23:53:13.670305179Z" level=info msg="connecting to shim 2701466780be28d52f678fde80b0f8e4099eb57787a48b65c5cf67ec62acb6f2" address="unix:///run/containerd/s/ec1c9871dc9588728da30914ab23a49703d0befef2fd32547d09afe609f0c852" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:13.674739 containerd[1481]: time="2025-05-13T23:53:13.674167093Z" level=info msg="connecting to shim 6adcbf8c2aec8ae8e2d57949a4edcd665314d3f7c099fe048193628254b35e41" address="unix:///run/containerd/s/4ea7d284fa73a98df4f920907a7c74dca5b0cf7e29384cd937c7fe924b8c8a6c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:13.676793 containerd[1481]: time="2025-05-13T23:53:13.676759326Z" level=info msg="connecting to shim a35010c0bab92a20bb58dbe58f1ad90ff25599a6f837713a472ecafe0305d64a" address="unix:///run/containerd/s/2c277e63b56ddad5c555411cef33737831891987005662a36e025ca79f3f6eff" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:13.746865 kubelet[2247]: E0513 23:53:13.746276 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.15.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-c1d987daf9?timeout=10s\": dial tcp 137.184.15.248:6443: connect: connection refused" interval="800ms" May 13 23:53:13.764943 systemd[1]: Started cri-containerd-6adcbf8c2aec8ae8e2d57949a4edcd665314d3f7c099fe048193628254b35e41.scope - libcontainer container 6adcbf8c2aec8ae8e2d57949a4edcd665314d3f7c099fe048193628254b35e41. May 13 23:53:13.771804 systemd[1]: Started cri-containerd-2701466780be28d52f678fde80b0f8e4099eb57787a48b65c5cf67ec62acb6f2.scope - libcontainer container 2701466780be28d52f678fde80b0f8e4099eb57787a48b65c5cf67ec62acb6f2. May 13 23:53:13.773846 systemd[1]: Started cri-containerd-a35010c0bab92a20bb58dbe58f1ad90ff25599a6f837713a472ecafe0305d64a.scope - libcontainer container a35010c0bab92a20bb58dbe58f1ad90ff25599a6f837713a472ecafe0305d64a. 
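The dns.go:153 warnings in this stretch come from the node's resolv.conf listing more nameservers than the three the glibc resolver honours, so kubelet applies only the first three — which here still include 67.207.67.2 twice — and warns about the rest. A rough stdlib check in the same spirit; the truncate-then-warn behaviour is paraphrased from the message above, not taken from kubelet's actual code path:

    MAX_NAMESERVERS = 3  # the glibc resolver limit behind kubelet's dns.go warning

    def applied_nameservers(path="/etc/resolv.conf"):
        """Recompute the 'applied nameserver line' the way the warning above
        implies: keep the first three entries, report the rest as omitted
        (an approximation of kubelet's behaviour, not its actual code)."""
        servers = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[0] == "nameserver":
                    servers.append(fields[1])
        applied = servers[:MAX_NAMESERVERS]
        if len(servers) > MAX_NAMESERVERS:
            print(f"nameserver limits exceeded: omitted {servers[MAX_NAMESERVERS:]}")
        if len(set(applied)) < len(applied):
            print(f"applied line still contains duplicates: {' '.join(applied)}")
        return applied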
May 13 23:53:13.840966 containerd[1481]: time="2025-05-13T23:53:13.840151106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-c1d987daf9,Uid:b41a9d0d6837414a8ba2c8032064ef47,Namespace:kube-system,Attempt:0,} returns sandbox id \"6adcbf8c2aec8ae8e2d57949a4edcd665314d3f7c099fe048193628254b35e41\"" May 13 23:53:13.843412 kubelet[2247]: E0513 23:53:13.843391 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:13.851559 containerd[1481]: time="2025-05-13T23:53:13.851524434Z" level=info msg="CreateContainer within sandbox \"6adcbf8c2aec8ae8e2d57949a4edcd665314d3f7c099fe048193628254b35e41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:53:13.862550 containerd[1481]: time="2025-05-13T23:53:13.862512469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-c1d987daf9,Uid:6cecd52a8b1038a47f84cb8afaee8575,Namespace:kube-system,Attempt:0,} returns sandbox id \"2701466780be28d52f678fde80b0f8e4099eb57787a48b65c5cf67ec62acb6f2\"" May 13 23:53:13.863319 kubelet[2247]: E0513 23:53:13.863290 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:13.865095 containerd[1481]: time="2025-05-13T23:53:13.865066219Z" level=info msg="CreateContainer within sandbox \"2701466780be28d52f678fde80b0f8e4099eb57787a48b65c5cf67ec62acb6f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:53:13.867224 containerd[1481]: time="2025-05-13T23:53:13.867198814Z" level=info msg="Container 8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:13.870823 containerd[1481]: time="2025-05-13T23:53:13.870790265Z" level=info msg="Container d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:13.887920 containerd[1481]: time="2025-05-13T23:53:13.887890922Z" level=info msg="CreateContainer within sandbox \"2701466780be28d52f678fde80b0f8e4099eb57787a48b65c5cf67ec62acb6f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e\"" May 13 23:53:13.888845 containerd[1481]: time="2025-05-13T23:53:13.888207331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-c1d987daf9,Uid:8e48d2a5a244fb5a83d4346ecb31e2e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a35010c0bab92a20bb58dbe58f1ad90ff25599a6f837713a472ecafe0305d64a\"" May 13 23:53:13.888989 containerd[1481]: time="2025-05-13T23:53:13.888610080Z" level=info msg="CreateContainer within sandbox \"6adcbf8c2aec8ae8e2d57949a4edcd665314d3f7c099fe048193628254b35e41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b\"" May 13 23:53:13.889810 containerd[1481]: time="2025-05-13T23:53:13.889784270Z" level=info msg="StartContainer for \"d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e\"" May 13 23:53:13.890273 containerd[1481]: time="2025-05-13T23:53:13.890239267Z" level=info msg="StartContainer for \"8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b\"" May 13 
23:53:13.890929 kubelet[2247]: E0513 23:53:13.890760 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:13.894060 containerd[1481]: time="2025-05-13T23:53:13.894033029Z" level=info msg="connecting to shim d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e" address="unix:///run/containerd/s/ec1c9871dc9588728da30914ab23a49703d0befef2fd32547d09afe609f0c852" protocol=ttrpc version=3 May 13 23:53:13.894484 containerd[1481]: time="2025-05-13T23:53:13.894459658Z" level=info msg="connecting to shim 8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b" address="unix:///run/containerd/s/4ea7d284fa73a98df4f920907a7c74dca5b0cf7e29384cd937c7fe924b8c8a6c" protocol=ttrpc version=3 May 13 23:53:13.901118 containerd[1481]: time="2025-05-13T23:53:13.901092026Z" level=info msg="CreateContainer within sandbox \"a35010c0bab92a20bb58dbe58f1ad90ff25599a6f837713a472ecafe0305d64a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:53:13.910654 containerd[1481]: time="2025-05-13T23:53:13.910448191Z" level=info msg="Container 7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:13.920871 systemd[1]: Started cri-containerd-8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b.scope - libcontainer container 8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b. May 13 23:53:13.922560 kubelet[2247]: I0513 23:53:13.921723 2247 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.922560 kubelet[2247]: E0513 23:53:13.922166 2247 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://137.184.15.248:6443/api/v1/nodes\": dial tcp 137.184.15.248:6443: connect: connection refused" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:13.931609 containerd[1481]: time="2025-05-13T23:53:13.931560171Z" level=info msg="CreateContainer within sandbox \"a35010c0bab92a20bb58dbe58f1ad90ff25599a6f837713a472ecafe0305d64a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159\"" May 13 23:53:13.932081 systemd[1]: Started cri-containerd-d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e.scope - libcontainer container d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e. May 13 23:53:13.935038 containerd[1481]: time="2025-05-13T23:53:13.935005008Z" level=info msg="StartContainer for \"7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159\"" May 13 23:53:13.937305 containerd[1481]: time="2025-05-13T23:53:13.937172729Z" level=info msg="connecting to shim 7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159" address="unix:///run/containerd/s/2c277e63b56ddad5c555411cef33737831891987005662a36e025ca79f3f6eff" protocol=ttrpc version=3 May 13 23:53:13.964858 systemd[1]: Started cri-containerd-7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159.scope - libcontainer container 7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159. 
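Each sandbox above got its own containerd shim, and the StartContainer calls reuse the sandbox's ttrpc socket: the kube-apiserver container d1a1399288b8… dials the same unix:///run/containerd/s/ec1c9871… address as its sandbox 2701466780be…. A sketch that recovers that id-to-socket mapping from journal lines; the message layout is inferred from the entries here:

    import re

    SHIM = re.compile(r'connecting to shim ([0-9a-f]{64})" address="(unix://[^"]+)"')

    def shim_sockets(journal_lines):
        """Build {sandbox-or-container id: shim socket address} from the
        'connecting to shim' messages (layout inferred from the entries here)."""
        out = {}
        for line in journal_lines:
            m = SHIM.search(line.replace('\\"', '"'))
            if m:
                out[m.group(1)] = m.group(2)
        return out

Run over this section it would map three sandbox ids and three container ids onto just three sockets, making the sandbox/container pairing visible at a glance.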
May 13 23:53:14.024183 containerd[1481]: time="2025-05-13T23:53:14.023602431Z" level=info msg="StartContainer for \"8ebdca0fb72847ecee43d6471542bdfedc3acee0f7f0776b1c9211a0bf87804b\" returns successfully" May 13 23:53:14.039024 containerd[1481]: time="2025-05-13T23:53:14.038988090Z" level=info msg="StartContainer for \"7fedfaf714a0934359a59e26d46c68d9daed7240a600362e0ec7bdf4181c6159\" returns successfully" May 13 23:53:14.046648 containerd[1481]: time="2025-05-13T23:53:14.046582403Z" level=info msg="StartContainer for \"d1a1399288b82203a772105ccec0faf1056dd003b35cff81320354ece0298e9e\" returns successfully" May 13 23:53:14.188832 kubelet[2247]: E0513 23:53:14.188418 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:14.188832 kubelet[2247]: E0513 23:53:14.188626 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:14.192289 kubelet[2247]: E0513 23:53:14.192072 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:14.192289 kubelet[2247]: E0513 23:53:14.192206 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:14.196407 kubelet[2247]: E0513 23:53:14.196181 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:14.196407 kubelet[2247]: E0513 23:53:14.196289 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:14.724223 kubelet[2247]: I0513 23:53:14.724052 2247 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:15.196650 kubelet[2247]: E0513 23:53:15.196298 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:15.196650 kubelet[2247]: E0513 23:53:15.196428 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:15.198785 kubelet[2247]: E0513 23:53:15.198561 2247 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:15.198785 kubelet[2247]: E0513 23:53:15.198689 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:16.139608 kubelet[2247]: E0513 23:53:16.139574 2247 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-c1d987daf9\" not found" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.221988 kubelet[2247]: I0513 23:53:16.221953 2247 
kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.245541 kubelet[2247]: I0513 23:53:16.245510 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.257499 kubelet[2247]: E0513 23:53:16.257047 2247 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.257499 kubelet[2247]: I0513 23:53:16.257079 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.259347 kubelet[2247]: E0513 23:53:16.259043 2247 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.259347 kubelet[2247]: I0513 23:53:16.259067 2247 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c1d987daf9" May 13 23:53:16.262840 kubelet[2247]: E0513 23:53:16.262786 2247 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-c1d987daf9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c1d987daf9" May 13 23:53:17.125669 kubelet[2247]: I0513 23:53:17.125626 2247 apiserver.go:52] "Watching apiserver" May 13 23:53:17.141539 kubelet[2247]: I0513 23:53:17.141491 2247 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:53:18.009021 systemd[1]: Reload requested from client PID 2516 ('systemctl') (unit session-7.scope)... May 13 23:53:18.009036 systemd[1]: Reloading... May 13 23:53:18.104764 zram_generator::config[2560]: No configuration found. May 13 23:53:18.220385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:53:18.341079 systemd[1]: Reloading finished in 331 ms. May 13 23:53:18.366691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:18.387084 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:53:18.387331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:18.387413 systemd[1]: kubelet.service: Consumed 890ms CPU time, 122.6M memory peak. May 13 23:53:18.389263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:18.532232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:18.541850 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:53:18.594861 kubelet[2611]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:53:18.594861 kubelet[2611]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
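The restarted kubelet (pid 2611) logs the same deprecation warnings as its predecessor: --container-runtime-endpoint and --volume-plugin-dir belong in the file passed via --config. A hedged sketch of the equivalent KubeletConfiguration stanza; the field names follow the v1beta1 schema as I understand it, the containerd socket path is an assumption (the log never prints the endpoint value), and the volume plugin dir is the Flexvolume directory kubelet recreated earlier in this log:

    # Field names follow the kubelet.config.k8s.io/v1beta1 schema as I understand
    # it; the containerd endpoint is an assumption (the log never prints it), and
    # the volume plugin dir is the one kubelet recreated earlier in this log.
    config_lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock",
        "volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    ]
    print("\n".join(config_lines))

Note that --pod-infra-container-image has no config-file equivalent; per the warning above it simply goes away in 1.35 once the image garbage collector gets the sandbox image from the CRI.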
May 13 23:53:18.594861 kubelet[2611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:53:18.595284 kubelet[2611]: I0513 23:53:18.595105 2611 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:53:18.606311 kubelet[2611]: I0513 23:53:18.606006 2611 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:53:18.606311 kubelet[2611]: I0513 23:53:18.606028 2611 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:53:18.606546 kubelet[2611]: I0513 23:53:18.606532 2611 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:53:18.608661 kubelet[2611]: I0513 23:53:18.608448 2611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:53:18.618473 kubelet[2611]: I0513 23:53:18.618414 2611 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:53:18.624512 kubelet[2611]: I0513 23:53:18.624492 2611 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:53:18.627749 kubelet[2611]: I0513 23:53:18.627731 2611 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:53:18.629818 kubelet[2611]: I0513 23:53:18.628019 2611 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:53:18.629818 kubelet[2611]: I0513 23:53:18.628040 2611 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-c1d987daf9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:53:18.629818 kubelet[2611]: I0513 23:53:18.628263 2611 topology_manager.go:138] "Creating topology manager with none policy" May 13 
23:53:18.629818 kubelet[2611]: I0513 23:53:18.628272 2611 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:53:18.630012 kubelet[2611]: I0513 23:53:18.628312 2611 state_mem.go:36] "Initialized new in-memory state store" May 13 23:53:18.630012 kubelet[2611]: I0513 23:53:18.628444 2611 kubelet.go:446] "Attempting to sync node with API server" May 13 23:53:18.630012 kubelet[2611]: I0513 23:53:18.628456 2611 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:53:18.630012 kubelet[2611]: I0513 23:53:18.628476 2611 kubelet.go:352] "Adding apiserver pod source" May 13 23:53:18.630012 kubelet[2611]: I0513 23:53:18.628486 2611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:53:18.639822 kubelet[2611]: I0513 23:53:18.639794 2611 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:53:18.640337 kubelet[2611]: I0513 23:53:18.640162 2611 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:53:18.641346 kubelet[2611]: I0513 23:53:18.640820 2611 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:53:18.641346 kubelet[2611]: I0513 23:53:18.640867 2611 server.go:1287] "Started kubelet" May 13 23:53:18.643529 kubelet[2611]: I0513 23:53:18.643497 2611 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:53:18.643905 kubelet[2611]: I0513 23:53:18.643893 2611 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:53:18.644594 kubelet[2611]: I0513 23:53:18.644563 2611 server.go:490] "Adding debug handlers to kubelet server" May 13 23:53:18.645503 kubelet[2611]: I0513 23:53:18.645443 2611 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:53:18.645648 kubelet[2611]: I0513 23:53:18.645633 2611 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:53:18.648311 kubelet[2611]: I0513 23:53:18.648182 2611 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:53:18.654176 kubelet[2611]: I0513 23:53:18.653167 2611 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:53:18.654176 kubelet[2611]: I0513 23:53:18.653254 2611 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:53:18.654176 kubelet[2611]: I0513 23:53:18.653352 2611 reconciler.go:26] "Reconciler: start to sync state" May 13 23:53:18.655303 kubelet[2611]: I0513 23:53:18.655284 2611 factory.go:221] Registration of the systemd container factory successfully May 13 23:53:18.655470 kubelet[2611]: I0513 23:53:18.655383 2611 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:53:18.658126 kubelet[2611]: I0513 23:53:18.658009 2611 factory.go:221] Registration of the containerd container factory successfully May 13 23:53:18.659349 kubelet[2611]: E0513 23:53:18.659333 2611 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:53:18.665549 kubelet[2611]: I0513 23:53:18.665481 2611 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 23:53:18.666914 kubelet[2611]: I0513 23:53:18.666889 2611 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:53:18.667007 kubelet[2611]: I0513 23:53:18.666940 2611 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:53:18.667007 kubelet[2611]: I0513 23:53:18.666958 2611 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 23:53:18.667007 kubelet[2611]: I0513 23:53:18.666965 2611 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:53:18.667083 kubelet[2611]: E0513 23:53:18.667028 2611 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:53:18.701590 kubelet[2611]: I0513 23:53:18.701556 2611 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:53:18.701590 kubelet[2611]: I0513 23:53:18.701575 2611 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:53:18.701590 kubelet[2611]: I0513 23:53:18.701601 2611 state_mem.go:36] "Initialized new in-memory state store" May 13 23:53:18.701788 kubelet[2611]: I0513 23:53:18.701773 2611 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:53:18.701817 kubelet[2611]: I0513 23:53:18.701784 2611 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:53:18.701817 kubelet[2611]: I0513 23:53:18.701804 2611 policy_none.go:49] "None policy: Start" May 13 23:53:18.701871 kubelet[2611]: I0513 23:53:18.701825 2611 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:53:18.701871 kubelet[2611]: I0513 23:53:18.701838 2611 state_mem.go:35] "Initializing new in-memory state store" May 13 23:53:18.701953 kubelet[2611]: I0513 23:53:18.701936 2611 state_mem.go:75] "Updated machine memory state" May 13 23:53:18.705567 kubelet[2611]: I0513 23:53:18.705458 2611 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:53:18.705660 kubelet[2611]: I0513 23:53:18.705614 2611 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:53:18.705660 kubelet[2611]: I0513 23:53:18.705627 2611 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:53:18.706462 kubelet[2611]: I0513 23:53:18.706039 2611 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:53:18.707814 kubelet[2611]: E0513 23:53:18.707795 2611 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:53:18.767848 kubelet[2611]: I0513 23:53:18.767790 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.769213 kubelet[2611]: I0513 23:53:18.768849 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.769213 kubelet[2611]: I0513 23:53:18.768992 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.774755 kubelet[2611]: W0513 23:53:18.774733 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:53:18.775729 kubelet[2611]: W0513 23:53:18.775669 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:53:18.776342 kubelet[2611]: W0513 23:53:18.776124 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:53:18.809774 kubelet[2611]: I0513 23:53:18.809432 2611 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.817185 kubelet[2611]: I0513 23:53:18.817138 2611 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.817320 kubelet[2611]: I0513 23:53:18.817222 2611 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.854912 kubelet[2611]: I0513 23:53:18.854765 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.854912 kubelet[2611]: I0513 23:53:18.854831 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.854912 kubelet[2611]: I0513 23:53:18.854867 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.854912 kubelet[2611]: I0513 23:53:18.854891 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.855118 kubelet[2611]: I0513 23:53:18.854926 
2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cecd52a8b1038a47f84cb8afaee8575-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" (UID: \"6cecd52a8b1038a47f84cb8afaee8575\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.855118 kubelet[2611]: I0513 23:53:18.854941 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b41a9d0d6837414a8ba2c8032064ef47-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-c1d987daf9\" (UID: \"b41a9d0d6837414a8ba2c8032064ef47\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.855118 kubelet[2611]: I0513 23:53:18.854962 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e48d2a5a244fb5a83d4346ecb31e2e0-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-c1d987daf9\" (UID: \"8e48d2a5a244fb5a83d4346ecb31e2e0\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.855118 kubelet[2611]: I0513 23:53:18.854977 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cecd52a8b1038a47f84cb8afaee8575-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" (UID: \"6cecd52a8b1038a47f84cb8afaee8575\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:18.855118 kubelet[2611]: I0513 23:53:18.854993 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cecd52a8b1038a47f84cb8afaee8575-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" (UID: \"6cecd52a8b1038a47f84cb8afaee8575\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:19.076568 kubelet[2611]: E0513 23:53:19.075490 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:19.076568 kubelet[2611]: E0513 23:53:19.075965 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:19.077111 kubelet[2611]: E0513 23:53:19.076912 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:19.630886 kubelet[2611]: I0513 23:53:19.630845 2611 apiserver.go:52] "Watching apiserver" May 13 23:53:19.653588 kubelet[2611]: I0513 23:53:19.653423 2611 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:53:19.687775 kubelet[2611]: E0513 23:53:19.687120 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:19.687775 kubelet[2611]: I0513 23:53:19.687362 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:19.688002 kubelet[2611]: E0513 23:53:19.687988 2611 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:19.734498 kubelet[2611]: W0513 23:53:19.734238 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:53:19.734498 kubelet[2611]: E0513 23:53:19.734305 2611 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-c1d987daf9\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" May 13 23:53:19.734498 kubelet[2611]: E0513 23:53:19.734487 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:19.794187 kubelet[2611]: I0513 23:53:19.794003 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-c1d987daf9" podStartSLOduration=1.793949004 podStartE2EDuration="1.793949004s" podCreationTimestamp="2025-05-13 23:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:53:19.79116279 +0000 UTC m=+1.244872453" watchObservedRunningTime="2025-05-13 23:53:19.793949004 +0000 UTC m=+1.247658670" May 13 23:53:19.814074 kubelet[2611]: I0513 23:53:19.813802 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-c1d987daf9" podStartSLOduration=1.813779892 podStartE2EDuration="1.813779892s" podCreationTimestamp="2025-05-13 23:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:53:19.813435782 +0000 UTC m=+1.267145460" watchObservedRunningTime="2025-05-13 23:53:19.813779892 +0000 UTC m=+1.267489564" May 13 23:53:19.861775 kubelet[2611]: I0513 23:53:19.861243 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-c1d987daf9" podStartSLOduration=1.861225001 podStartE2EDuration="1.861225001s" podCreationTimestamp="2025-05-13 23:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:53:19.848206439 +0000 UTC m=+1.301916114" watchObservedRunningTime="2025-05-13 23:53:19.861225001 +0000 UTC m=+1.314934851" May 13 23:53:20.691184 kubelet[2611]: E0513 23:53:20.690117 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:20.691184 kubelet[2611]: E0513 23:53:20.690151 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:20.693071 kubelet[2611]: E0513 23:53:20.693000 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:21.691560 kubelet[2611]: E0513 23:53:21.691487 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:23.807527 systemd[1]: Created slice kubepods-besteffort-pod386cdb10_d6e2_4851_9d6a_056d43be79f4.slice - libcontainer container kubepods-besteffort-pod386cdb10_d6e2_4851_9d6a_056d43be79f4.slice. May 13 23:53:23.823026 sudo[1686]: pam_unix(sudo:session): session closed for user root May 13 23:53:23.827590 sshd[1685]: Connection closed by 147.75.109.163 port 43452 May 13 23:53:23.827576 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 13 23:53:23.831466 systemd[1]: sshd@6-137.184.15.248:22-147.75.109.163:43452.service: Deactivated successfully. May 13 23:53:23.834799 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:53:23.836044 systemd[1]: session-7.scope: Consumed 4.534s CPU time, 161.6M memory peak. May 13 23:53:23.838629 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. May 13 23:53:23.841383 systemd-logind[1464]: Removed session 7. May 13 23:53:23.868086 kubelet[2611]: I0513 23:53:23.868052 2611 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:53:23.868767 containerd[1481]: time="2025-05-13T23:53:23.868657863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:53:23.871014 kubelet[2611]: I0513 23:53:23.869048 2611 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:53:23.893882 kubelet[2611]: I0513 23:53:23.893837 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386cdb10-d6e2-4851-9d6a-056d43be79f4-xtables-lock\") pod \"kube-proxy-njcv7\" (UID: \"386cdb10-d6e2-4851-9d6a-056d43be79f4\") " pod="kube-system/kube-proxy-njcv7" May 13 23:53:23.893882 kubelet[2611]: I0513 23:53:23.893881 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rt6x\" (UniqueName: \"kubernetes.io/projected/386cdb10-d6e2-4851-9d6a-056d43be79f4-kube-api-access-6rt6x\") pod \"kube-proxy-njcv7\" (UID: \"386cdb10-d6e2-4851-9d6a-056d43be79f4\") " pod="kube-system/kube-proxy-njcv7" May 13 23:53:23.894052 kubelet[2611]: I0513 23:53:23.893914 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/386cdb10-d6e2-4851-9d6a-056d43be79f4-kube-proxy\") pod \"kube-proxy-njcv7\" (UID: \"386cdb10-d6e2-4851-9d6a-056d43be79f4\") " pod="kube-system/kube-proxy-njcv7" May 13 23:53:23.894052 kubelet[2611]: I0513 23:53:23.893932 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386cdb10-d6e2-4851-9d6a-056d43be79f4-lib-modules\") pod \"kube-proxy-njcv7\" (UID: \"386cdb10-d6e2-4851-9d6a-056d43be79f4\") " pod="kube-system/kube-proxy-njcv7" May 13 23:53:24.004055 kubelet[2611]: E0513 23:53:24.003934 2611 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:53:24.004055 kubelet[2611]: E0513 23:53:24.003976 2611 projected.go:194] Error preparing data for projected volume kube-api-access-6rt6x for pod kube-system/kube-proxy-njcv7: configmap "kube-root-ca.crt" not found May 13 23:53:24.004055 kubelet[2611]: E0513 23:53:24.004049 2611 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/386cdb10-d6e2-4851-9d6a-056d43be79f4-kube-api-access-6rt6x podName:386cdb10-d6e2-4851-9d6a-056d43be79f4 nodeName:}" failed. No retries permitted until 2025-05-13 23:53:24.504027748 +0000 UTC m=+5.957737417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6rt6x" (UniqueName: "kubernetes.io/projected/386cdb10-d6e2-4851-9d6a-056d43be79f4-kube-api-access-6rt6x") pod "kube-proxy-njcv7" (UID: "386cdb10-d6e2-4851-9d6a-056d43be79f4") : configmap "kube-root-ca.crt" not found May 13 23:53:24.713905 kubelet[2611]: E0513 23:53:24.713869 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:24.716175 containerd[1481]: time="2025-05-13T23:53:24.715760024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njcv7,Uid:386cdb10-d6e2-4851-9d6a-056d43be79f4,Namespace:kube-system,Attempt:0,}" May 13 23:53:24.732466 containerd[1481]: time="2025-05-13T23:53:24.732423298Z" level=info msg="connecting to shim 3e70b10bdec9db12f2b30325153ebc6794c5e59f19ebf45d1f0373cbbe629bcf" address="unix:///run/containerd/s/ecf266fb67be5cf1bfcf003a0662d06a409dff8ed12128d5a74ec0744b1d44d4" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:24.764897 systemd[1]: Started cri-containerd-3e70b10bdec9db12f2b30325153ebc6794c5e59f19ebf45d1f0373cbbe629bcf.scope - libcontainer container 3e70b10bdec9db12f2b30325153ebc6794c5e59f19ebf45d1f0373cbbe629bcf. May 13 23:53:24.793089 containerd[1481]: time="2025-05-13T23:53:24.792930989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njcv7,Uid:386cdb10-d6e2-4851-9d6a-056d43be79f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e70b10bdec9db12f2b30325153ebc6794c5e59f19ebf45d1f0373cbbe629bcf\"" May 13 23:53:24.794304 kubelet[2611]: E0513 23:53:24.793835 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:24.798279 containerd[1481]: time="2025-05-13T23:53:24.798098780Z" level=info msg="CreateContainer within sandbox \"3e70b10bdec9db12f2b30325153ebc6794c5e59f19ebf45d1f0373cbbe629bcf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:53:24.812080 containerd[1481]: time="2025-05-13T23:53:24.809796978Z" level=info msg="Container 781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:24.821691 containerd[1481]: time="2025-05-13T23:53:24.821648815Z" level=info msg="CreateContainer within sandbox \"3e70b10bdec9db12f2b30325153ebc6794c5e59f19ebf45d1f0373cbbe629bcf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c\"" May 13 23:53:24.822277 containerd[1481]: time="2025-05-13T23:53:24.822225226Z" level=info msg="StartContainer for \"781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c\"" May 13 23:53:24.824696 containerd[1481]: time="2025-05-13T23:53:24.824666604Z" level=info msg="connecting to shim 781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c" address="unix:///run/containerd/s/ecf266fb67be5cf1bfcf003a0662d06a409dff8ed12128d5a74ec0744b1d44d4" protocol=ttrpc version=3 May 13 23:53:24.844941 systemd[1]: Started 
cri-containerd-781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c.scope - libcontainer container 781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c. May 13 23:53:24.878743 systemd[1]: Created slice kubepods-besteffort-pod51e1d202_bdb5_4787_bbbb_6f4ed88e9748.slice - libcontainer container kubepods-besteffort-pod51e1d202_bdb5_4787_bbbb_6f4ed88e9748.slice. May 13 23:53:24.901616 kubelet[2611]: I0513 23:53:24.901571 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51e1d202-bdb5-4787-bbbb-6f4ed88e9748-var-lib-calico\") pod \"tigera-operator-789496d6f5-rgx6f\" (UID: \"51e1d202-bdb5-4787-bbbb-6f4ed88e9748\") " pod="tigera-operator/tigera-operator-789496d6f5-rgx6f" May 13 23:53:24.902485 kubelet[2611]: I0513 23:53:24.902289 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scjq5\" (UniqueName: \"kubernetes.io/projected/51e1d202-bdb5-4787-bbbb-6f4ed88e9748-kube-api-access-scjq5\") pod \"tigera-operator-789496d6f5-rgx6f\" (UID: \"51e1d202-bdb5-4787-bbbb-6f4ed88e9748\") " pod="tigera-operator/tigera-operator-789496d6f5-rgx6f" May 13 23:53:24.939744 containerd[1481]: time="2025-05-13T23:53:24.939338169Z" level=info msg="StartContainer for \"781a59844fad372cbc5e0184689a0a3d11d0376df21e77f744dee59a200ba35c\" returns successfully" May 13 23:53:25.070614 kubelet[2611]: E0513 23:53:25.069961 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:25.183767 containerd[1481]: time="2025-05-13T23:53:25.183550645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-rgx6f,Uid:51e1d202-bdb5-4787-bbbb-6f4ed88e9748,Namespace:tigera-operator,Attempt:0,}" May 13 23:53:25.203267 containerd[1481]: time="2025-05-13T23:53:25.203032897Z" level=info msg="connecting to shim 27215b3c4c662f270351d2fd7e0b28aad398c8a2a8c34b16e007b650f50e6d68" address="unix:///run/containerd/s/5e71e7278f9b0d373a5f5eca57cbadb6473a3ee2982207cb8c0fad459e72def7" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:25.213640 systemd[1]: Started sshd@7-137.184.15.248:22-218.92.0.154:45431.service - OpenSSH per-connection server daemon (218.92.0.154:45431). May 13 23:53:25.229883 systemd[1]: Started cri-containerd-27215b3c4c662f270351d2fd7e0b28aad398c8a2a8c34b16e007b650f50e6d68.scope - libcontainer container 27215b3c4c662f270351d2fd7e0b28aad398c8a2a8c34b16e007b650f50e6d68. May 13 23:53:25.299441 containerd[1481]: time="2025-05-13T23:53:25.299274279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-rgx6f,Uid:51e1d202-bdb5-4787-bbbb-6f4ed88e9748,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27215b3c4c662f270351d2fd7e0b28aad398c8a2a8c34b16e007b650f50e6d68\"" May 13 23:53:25.303857 containerd[1481]: time="2025-05-13T23:53:25.303830951Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:53:25.306587 systemd-resolved[1332]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
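The recurring dns.go "Nameserver limits exceeded" errors in this stretch of the log come from the kubelet noticing that the host's resolv.conf lists more nameservers than the resolver can use; only the first three entries are applied (here "67.207.67.2 67.207.67.3 67.207.67.2", including a duplicate), and the rest are dropped. A minimal sketch of that truncation, assuming the classic glibc-style cap of three nameservers; the limit constant, function name, and the fourth example entry are illustrative, not the kubelet's actual code or this droplet's real resolv.conf:

MAX_NAMESERVERS = 3  # assumed limit, matching the classic glibc resolver cap

def applied_nameservers(resolv_conf_text):
    # Collect nameserver entries in file order, as a resolv.conf parser would.
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    # Entries beyond the cap are silently dropped, which is what the
    # kubelet's "Nameserver limits exceeded" warning is about.
    return servers[:MAX_NAMESERVERS]

example = (
    "nameserver 67.207.67.2\n"
    "nameserver 67.207.67.3\n"
    "nameserver 67.207.67.2\n"
    "nameserver 1.1.1.1\n"   # hypothetical extra entry that would be omitted
)
print(applied_nameservers(example))  # ['67.207.67.2', '67.207.67.3', '67.207.67.2']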
May 13 23:53:25.702288 kubelet[2611]: E0513 23:53:25.702255 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:25.703689 kubelet[2611]: E0513 23:53:25.702973 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:25.723100 kubelet[2611]: I0513 23:53:25.723051 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-njcv7" podStartSLOduration=2.72303206 podStartE2EDuration="2.72303206s" podCreationTimestamp="2025-05-13 23:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:53:25.722928097 +0000 UTC m=+7.176637771" watchObservedRunningTime="2025-05-13 23:53:25.72303206 +0000 UTC m=+7.176741729" May 13 23:53:26.270384 sshd-session[2949]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.154 user=root May 13 23:53:26.694950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576257643.mount: Deactivated successfully. May 13 23:53:26.706111 kubelet[2611]: E0513 23:53:26.705211 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:27.140376 containerd[1481]: time="2025-05-13T23:53:27.139800110Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:27.141251 containerd[1481]: time="2025-05-13T23:53:27.141051590Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 23:53:27.141741 containerd[1481]: time="2025-05-13T23:53:27.141621442Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:27.146853 containerd[1481]: time="2025-05-13T23:53:27.146825062Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:27.149738 containerd[1481]: time="2025-05-13T23:53:27.149449301Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.845536265s" May 13 23:53:27.149738 containerd[1481]: time="2025-05-13T23:53:27.149484701Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 23:53:27.153803 containerd[1481]: time="2025-05-13T23:53:27.153777078Z" level=info msg="CreateContainer within sandbox \"27215b3c4c662f270351d2fd7e0b28aad398c8a2a8c34b16e007b650f50e6d68\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:53:27.160795 containerd[1481]: time="2025-05-13T23:53:27.157596933Z" level=info msg="Container 
d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:27.165248 containerd[1481]: time="2025-05-13T23:53:27.165219042Z" level=info msg="CreateContainer within sandbox \"27215b3c4c662f270351d2fd7e0b28aad398c8a2a8c34b16e007b650f50e6d68\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2\"" May 13 23:53:27.166193 containerd[1481]: time="2025-05-13T23:53:27.166174180Z" level=info msg="StartContainer for \"d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2\"" May 13 23:53:27.168188 containerd[1481]: time="2025-05-13T23:53:27.168130003Z" level=info msg="connecting to shim d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2" address="unix:///run/containerd/s/5e71e7278f9b0d373a5f5eca57cbadb6473a3ee2982207cb8c0fad459e72def7" protocol=ttrpc version=3 May 13 23:53:27.191870 systemd[1]: Started cri-containerd-d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2.scope - libcontainer container d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2. May 13 23:53:27.220771 containerd[1481]: time="2025-05-13T23:53:27.219941241Z" level=info msg="StartContainer for \"d8a42c04f9adf3a8f29167caff808f117f3062fa3c8b3dc52dc4c7bf16d66af2\" returns successfully" May 13 23:53:27.948201 sshd[2831]: PAM: Permission denied for root from 218.92.0.154 May 13 23:53:28.105995 kubelet[2611]: E0513 23:53:28.104508 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:28.120321 kubelet[2611]: I0513 23:53:28.120155 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-rgx6f" podStartSLOduration=2.2708645179999998 podStartE2EDuration="4.120138892s" podCreationTimestamp="2025-05-13 23:53:24 +0000 UTC" firstStartedPulling="2025-05-13 23:53:25.301350026 +0000 UTC m=+6.755059693" lastFinishedPulling="2025-05-13 23:53:27.150624413 +0000 UTC m=+8.604334067" observedRunningTime="2025-05-13 23:53:27.720262217 +0000 UTC m=+9.173971891" watchObservedRunningTime="2025-05-13 23:53:28.120138892 +0000 UTC m=+9.573848567" May 13 23:53:28.224952 sshd-session[2993]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.154 user=root May 13 23:53:28.712285 kubelet[2611]: E0513 23:53:28.711863 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:29.714676 kubelet[2611]: E0513 23:53:29.713473 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:29.841846 sshd[2831]: PAM: Permission denied for root from 218.92.0.154 May 13 23:53:30.117843 sshd-session[2995]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.154 user=root May 13 23:53:30.275465 kubelet[2611]: W0513 23:53:30.275430 2611 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284.0.0-n-c1d987daf9" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no 
relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object May 13 23:53:30.275599 kubelet[2611]: E0513 23:53:30.275475 2611 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284.0.0-n-c1d987daf9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object" logger="UnhandledError" May 13 23:53:30.275599 kubelet[2611]: I0513 23:53:30.275517 2611 status_manager.go:890] "Failed to get status for pod" podUID="af72bf5b-9236-44fb-bf62-a6b5eabc2092" pod="calico-system/calico-typha-656bd56cbc-4zp45" err="pods \"calico-typha-656bd56cbc-4zp45\" is forbidden: User \"system:node:ci-4284.0.0-n-c1d987daf9\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object" May 13 23:53:30.275599 kubelet[2611]: W0513 23:53:30.275560 2611 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4284.0.0-n-c1d987daf9" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object May 13 23:53:30.275599 kubelet[2611]: E0513 23:53:30.275572 2611 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4284.0.0-n-c1d987daf9\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object" logger="UnhandledError" May 13 23:53:30.275738 kubelet[2611]: W0513 23:53:30.275606 2611 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4284.0.0-n-c1d987daf9" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object May 13 23:53:30.275738 kubelet[2611]: E0513 23:53:30.275615 2611 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4284.0.0-n-c1d987daf9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object" logger="UnhandledError" May 13 23:53:30.281709 systemd[1]: Created slice kubepods-besteffort-podaf72bf5b_9236_44fb_bf62_a6b5eabc2092.slice - libcontainer container kubepods-besteffort-podaf72bf5b_9236_44fb_bf62_a6b5eabc2092.slice. 
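The reflector and status_manager failures just above ("no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object") are NodeAuthorizer denials: a kubelet may only read secrets and configmaps referenced by pods already bound to its node, so listing calico-system objects fails until the calico-typha and calico-node bindings become visible, after which the same requests typically go through. A hedged way to reproduce such a denial from the node's own credentials with the Python Kubernetes client; the kubeconfig path is a placeholder for wherever the system:node identity's credentials live on this host:

from kubernetes import client, config
from kubernetes.client.exceptions import ApiException

# Placeholder path; use whatever kubeconfig carries the
# system:node:ci-4284.0.0-n-c1d987daf9 identity on this machine.
config.load_kube_config(config_file="/etc/kubernetes/kubelet.conf")

try:
    client.CoreV1Api().list_namespaced_config_map("calico-system")
except ApiException as exc:
    # Expect 403 Forbidden until a pod referencing these objects is bound
    # to this node, mirroring the reflector errors in the log above.
    print(exc.status, exc.reason)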
May 13 23:53:30.289573 kubelet[2611]: E0513 23:53:30.289542 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:30.333906 kubelet[2611]: I0513 23:53:30.333861 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/af72bf5b-9236-44fb-bf62-a6b5eabc2092-typha-certs\") pod \"calico-typha-656bd56cbc-4zp45\" (UID: \"af72bf5b-9236-44fb-bf62-a6b5eabc2092\") " pod="calico-system/calico-typha-656bd56cbc-4zp45" May 13 23:53:30.333906 kubelet[2611]: I0513 23:53:30.333905 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af72bf5b-9236-44fb-bf62-a6b5eabc2092-tigera-ca-bundle\") pod \"calico-typha-656bd56cbc-4zp45\" (UID: \"af72bf5b-9236-44fb-bf62-a6b5eabc2092\") " pod="calico-system/calico-typha-656bd56cbc-4zp45" May 13 23:53:30.334144 kubelet[2611]: I0513 23:53:30.333928 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6db\" (UniqueName: \"kubernetes.io/projected/af72bf5b-9236-44fb-bf62-a6b5eabc2092-kube-api-access-lc6db\") pod \"calico-typha-656bd56cbc-4zp45\" (UID: \"af72bf5b-9236-44fb-bf62-a6b5eabc2092\") " pod="calico-system/calico-typha-656bd56cbc-4zp45" May 13 23:53:30.386741 kubelet[2611]: I0513 23:53:30.386608 2611 status_manager.go:890] "Failed to get status for pod" podUID="4a2b4761-602f-414d-86b7-5417cd60ec35" pod="calico-system/calico-node-rjl5s" err="pods \"calico-node-rjl5s\" is forbidden: User \"system:node:ci-4284.0.0-n-c1d987daf9\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4284.0.0-n-c1d987daf9' and this object" May 13 23:53:30.396715 systemd[1]: Created slice kubepods-besteffort-pod4a2b4761_602f_414d_86b7_5417cd60ec35.slice - libcontainer container kubepods-besteffort-pod4a2b4761_602f_414d_86b7_5417cd60ec35.slice. 
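The long run of driver-call failures that follows comes from the kubelet's FlexVolume plugin probing: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument and expects a JSON status object on stdout, but the binary is not installed yet (it is presumably dropped in later via the calico-node flexvol-driver-host mount), so the output is empty and unmarshaling fails with "unexpected end of JSON input". A rough sketch of the response a FlexVolume driver is expected to emit for init; this is illustrative only, not the real nodeagent~uds driver:

import json
import sys

def handle(args):
    # FlexVolume drivers answer "init" with a JSON status on stdout; an empty
    # response is exactly what produces "unexpected end of JSON input" here.
    if args and args[0] == "init":
        return {"status": "Success", "capabilities": {"attach": False}}
    return {"status": "Not supported"}

if __name__ == "__main__":
    print(json.dumps(handle(sys.argv[1:])))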
May 13 23:53:30.434467 kubelet[2611]: I0513 23:53:30.434422 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-var-lib-calico\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434467 kubelet[2611]: I0513 23:53:30.434477 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-var-run-calico\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434642 kubelet[2611]: I0513 23:53:30.434493 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-cni-net-dir\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434642 kubelet[2611]: I0513 23:53:30.434512 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tvrs\" (UniqueName: \"kubernetes.io/projected/4a2b4761-602f-414d-86b7-5417cd60ec35-kube-api-access-6tvrs\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434642 kubelet[2611]: I0513 23:53:30.434529 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-policysync\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434642 kubelet[2611]: I0513 23:53:30.434560 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-xtables-lock\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434642 kubelet[2611]: I0513 23:53:30.434598 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4a2b4761-602f-414d-86b7-5417cd60ec35-node-certs\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434787 kubelet[2611]: I0513 23:53:30.434614 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-cni-log-dir\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434787 kubelet[2611]: I0513 23:53:30.434631 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-flexvol-driver-host\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434787 kubelet[2611]: I0513 23:53:30.434656 2611 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a2b4761-602f-414d-86b7-5417cd60ec35-tigera-ca-bundle\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434787 kubelet[2611]: I0513 23:53:30.434670 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-cni-bin-dir\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.434787 kubelet[2611]: I0513 23:53:30.434696 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a2b4761-602f-414d-86b7-5417cd60ec35-lib-modules\") pod \"calico-node-rjl5s\" (UID: \"4a2b4761-602f-414d-86b7-5417cd60ec35\") " pod="calico-system/calico-node-rjl5s" May 13 23:53:30.509259 kubelet[2611]: E0513 23:53:30.509210 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txj64" podUID="eeda7aec-b8fa-4b85-baa0-a818865b60ee" May 13 23:53:30.535437 kubelet[2611]: I0513 23:53:30.535396 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeda7aec-b8fa-4b85-baa0-a818865b60ee-registration-dir\") pod \"csi-node-driver-txj64\" (UID: \"eeda7aec-b8fa-4b85-baa0-a818865b60ee\") " pod="calico-system/csi-node-driver-txj64" May 13 23:53:30.535927 kubelet[2611]: I0513 23:53:30.535688 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeda7aec-b8fa-4b85-baa0-a818865b60ee-kubelet-dir\") pod \"csi-node-driver-txj64\" (UID: \"eeda7aec-b8fa-4b85-baa0-a818865b60ee\") " pod="calico-system/csi-node-driver-txj64" May 13 23:53:30.536006 kubelet[2611]: I0513 23:53:30.535939 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeda7aec-b8fa-4b85-baa0-a818865b60ee-socket-dir\") pod \"csi-node-driver-txj64\" (UID: \"eeda7aec-b8fa-4b85-baa0-a818865b60ee\") " pod="calico-system/csi-node-driver-txj64" May 13 23:53:30.536152 kubelet[2611]: I0513 23:53:30.535966 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eeda7aec-b8fa-4b85-baa0-a818865b60ee-varrun\") pod \"csi-node-driver-txj64\" (UID: \"eeda7aec-b8fa-4b85-baa0-a818865b60ee\") " pod="calico-system/csi-node-driver-txj64" May 13 23:53:30.536208 kubelet[2611]: I0513 23:53:30.536156 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrckq\" (UniqueName: \"kubernetes.io/projected/eeda7aec-b8fa-4b85-baa0-a818865b60ee-kube-api-access-xrckq\") pod \"csi-node-driver-txj64\" (UID: \"eeda7aec-b8fa-4b85-baa0-a818865b60ee\") " pod="calico-system/csi-node-driver-txj64" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.538624 2611 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.540744 kubelet[2611]: W0513 23:53:30.538652 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.538678 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.539155 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.540744 kubelet[2611]: W0513 23:53:30.539166 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.539178 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.539638 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.540744 kubelet[2611]: W0513 23:53:30.539649 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.539660 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.540744 kubelet[2611]: E0513 23:53:30.539994 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.541070 kubelet[2611]: W0513 23:53:30.540003 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.541070 kubelet[2611]: E0513 23:53:30.540014 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.541070 kubelet[2611]: E0513 23:53:30.540434 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.541070 kubelet[2611]: W0513 23:53:30.540443 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.541070 kubelet[2611]: E0513 23:53:30.540454 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:30.541070 kubelet[2611]: E0513 23:53:30.540617 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.541070 kubelet[2611]: W0513 23:53:30.540625 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.541070 kubelet[2611]: E0513 23:53:30.540633 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.541070 kubelet[2611]: E0513 23:53:30.540801 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.541070 kubelet[2611]: W0513 23:53:30.540809 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.541414 kubelet[2611]: E0513 23:53:30.540817 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.541414 kubelet[2611]: E0513 23:53:30.541026 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.541414 kubelet[2611]: W0513 23:53:30.541033 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.541414 kubelet[2611]: E0513 23:53:30.541041 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.546976 kubelet[2611]: E0513 23:53:30.545568 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.546976 kubelet[2611]: W0513 23:53:30.545585 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.546976 kubelet[2611]: E0513 23:53:30.545597 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.638920 kubelet[2611]: E0513 23:53:30.637997 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.638920 kubelet[2611]: W0513 23:53:30.638020 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.638920 kubelet[2611]: E0513 23:53:30.638045 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:30.639068 kubelet[2611]: E0513 23:53:30.638954 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.639068 kubelet[2611]: W0513 23:53:30.638967 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.639068 kubelet[2611]: E0513 23:53:30.638996 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.639521 kubelet[2611]: E0513 23:53:30.639467 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.639602 kubelet[2611]: W0513 23:53:30.639586 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.639633 kubelet[2611]: E0513 23:53:30.639614 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.639974 kubelet[2611]: E0513 23:53:30.639960 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.639974 kubelet[2611]: W0513 23:53:30.639973 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.640048 kubelet[2611]: E0513 23:53:30.640006 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.640369 kubelet[2611]: E0513 23:53:30.640342 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.640460 kubelet[2611]: W0513 23:53:30.640445 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.640490 kubelet[2611]: E0513 23:53:30.640471 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.640929 kubelet[2611]: E0513 23:53:30.640915 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.640964 kubelet[2611]: W0513 23:53:30.640928 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.640964 kubelet[2611]: E0513 23:53:30.640955 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:30.641366 kubelet[2611]: E0513 23:53:30.641349 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.641403 kubelet[2611]: W0513 23:53:30.641365 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.641536 kubelet[2611]: E0513 23:53:30.641522 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.642046 kubelet[2611]: E0513 23:53:30.642028 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.642046 kubelet[2611]: W0513 23:53:30.642044 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.642126 kubelet[2611]: E0513 23:53:30.642062 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.642415 kubelet[2611]: E0513 23:53:30.642396 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.642415 kubelet[2611]: W0513 23:53:30.642410 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.642492 kubelet[2611]: E0513 23:53:30.642427 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.642683 kubelet[2611]: E0513 23:53:30.642606 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.642683 kubelet[2611]: W0513 23:53:30.642617 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.642683 kubelet[2611]: E0513 23:53:30.642632 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.642955 kubelet[2611]: E0513 23:53:30.642868 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.642955 kubelet[2611]: W0513 23:53:30.642879 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.642955 kubelet[2611]: E0513 23:53:30.642953 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:30.643190 kubelet[2611]: E0513 23:53:30.643069 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.643190 kubelet[2611]: W0513 23:53:30.643079 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.643190 kubelet[2611]: E0513 23:53:30.643149 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.643351 kubelet[2611]: E0513 23:53:30.643272 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.643351 kubelet[2611]: W0513 23:53:30.643282 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.643351 kubelet[2611]: E0513 23:53:30.643301 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.643588 kubelet[2611]: E0513 23:53:30.643491 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.643588 kubelet[2611]: W0513 23:53:30.643503 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.643588 kubelet[2611]: E0513 23:53:30.643515 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.643822 kubelet[2611]: E0513 23:53:30.643714 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.643822 kubelet[2611]: W0513 23:53:30.643749 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.643822 kubelet[2611]: E0513 23:53:30.643768 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.644084 kubelet[2611]: E0513 23:53:30.644054 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.644084 kubelet[2611]: W0513 23:53:30.644066 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.644084 kubelet[2611]: E0513 23:53:30.644079 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:30.644581 kubelet[2611]: E0513 23:53:30.644469 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.644581 kubelet[2611]: W0513 23:53:30.644483 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.644581 kubelet[2611]: E0513 23:53:30.644571 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.644910 kubelet[2611]: E0513 23:53:30.644816 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.644910 kubelet[2611]: W0513 23:53:30.644828 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.644910 kubelet[2611]: E0513 23:53:30.644905 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.645167 kubelet[2611]: E0513 23:53:30.645076 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.645167 kubelet[2611]: W0513 23:53:30.645088 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.645167 kubelet[2611]: E0513 23:53:30.645099 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.645335 kubelet[2611]: E0513 23:53:30.645293 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.645335 kubelet[2611]: W0513 23:53:30.645306 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.645335 kubelet[2611]: E0513 23:53:30.645323 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.645656 kubelet[2611]: E0513 23:53:30.645528 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.645656 kubelet[2611]: W0513 23:53:30.645541 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.645656 kubelet[2611]: E0513 23:53:30.645558 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:30.645848 kubelet[2611]: E0513 23:53:30.645787 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.645848 kubelet[2611]: W0513 23:53:30.645798 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.645848 kubelet[2611]: E0513 23:53:30.645814 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.647135 kubelet[2611]: E0513 23:53:30.647043 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.647135 kubelet[2611]: W0513 23:53:30.647057 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.647135 kubelet[2611]: E0513 23:53:30.647069 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.647332 kubelet[2611]: E0513 23:53:30.647260 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.647332 kubelet[2611]: W0513 23:53:30.647272 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.647332 kubelet[2611]: E0513 23:53:30.647280 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:30.647645 kubelet[2611]: E0513 23:53:30.647461 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:30.647645 kubelet[2611]: W0513 23:53:30.647472 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:30.647645 kubelet[2611]: E0513 23:53:30.647483 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.421742 kubelet[2611]: E0513 23:53:31.419403 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.421742 kubelet[2611]: W0513 23:53:31.419423 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.421742 kubelet[2611]: E0513 23:53:31.419446 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:31.422422 kubelet[2611]: E0513 23:53:31.422398 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.422526 kubelet[2611]: W0513 23:53:31.422513 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.422593 kubelet[2611]: E0513 23:53:31.422583 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.423938 kubelet[2611]: E0513 23:53:31.423922 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.424048 kubelet[2611]: W0513 23:53:31.424008 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.424048 kubelet[2611]: E0513 23:53:31.424024 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.435900 kubelet[2611]: E0513 23:53:31.435842 2611 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 13 23:53:31.436091 kubelet[2611]: E0513 23:53:31.435934 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af72bf5b-9236-44fb-bf62-a6b5eabc2092-typha-certs podName:af72bf5b-9236-44fb-bf62-a6b5eabc2092 nodeName:}" failed. No retries permitted until 2025-05-13 23:53:31.935914032 +0000 UTC m=+13.389623700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/af72bf5b-9236-44fb-bf62-a6b5eabc2092-typha-certs") pod "calico-typha-656bd56cbc-4zp45" (UID: "af72bf5b-9236-44fb-bf62-a6b5eabc2092") : failed to sync secret cache: timed out waiting for the condition May 13 23:53:31.436091 kubelet[2611]: E0513 23:53:31.435853 2611 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 13 23:53:31.437162 kubelet[2611]: E0513 23:53:31.437092 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af72bf5b-9236-44fb-bf62-a6b5eabc2092-tigera-ca-bundle podName:af72bf5b-9236-44fb-bf62-a6b5eabc2092 nodeName:}" failed. No retries permitted until 2025-05-13 23:53:31.936006878 +0000 UTC m=+13.389716535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/af72bf5b-9236-44fb-bf62-a6b5eabc2092-tigera-ca-bundle") pod "calico-typha-656bd56cbc-4zp45" (UID: "af72bf5b-9236-44fb-bf62-a6b5eabc2092") : failed to sync configmap cache: timed out waiting for the condition May 13 23:53:31.450395 kubelet[2611]: E0513 23:53:31.450364 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.450395 kubelet[2611]: W0513 23:53:31.450388 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.450540 kubelet[2611]: E0513 23:53:31.450410 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.450841 kubelet[2611]: E0513 23:53:31.450638 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.450841 kubelet[2611]: W0513 23:53:31.450649 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.450841 kubelet[2611]: E0513 23:53:31.450660 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.537497 kubelet[2611]: E0513 23:53:31.537092 2611 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 13 23:53:31.537934 kubelet[2611]: E0513 23:53:31.537773 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a2b4761-602f-414d-86b7-5417cd60ec35-tigera-ca-bundle podName:4a2b4761-602f-414d-86b7-5417cd60ec35 nodeName:}" failed. No retries permitted until 2025-05-13 23:53:32.037744661 +0000 UTC m=+13.491454327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/4a2b4761-602f-414d-86b7-5417cd60ec35-tigera-ca-bundle") pod "calico-node-rjl5s" (UID: "4a2b4761-602f-414d-86b7-5417cd60ec35") : failed to sync configmap cache: timed out waiting for the condition May 13 23:53:31.552259 kubelet[2611]: E0513 23:53:31.552106 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.552259 kubelet[2611]: W0513 23:53:31.552127 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.552259 kubelet[2611]: E0513 23:53:31.552149 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
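The repeated triplet comes from kubelet's FlexVolume probe: on every scan of the plugin directory, kubelet execs the driver binary (here the vendor~driver path nodeagent~uds/uds) with the argument "init" and expects a JSON status object on stdout. The binary is not installed yet, so the exec fails, stdout stays empty, and json.Unmarshal on empty input reports exactly "unexpected end of JSON input". A minimal Go sketch of that call pattern, with a simplified status struct rather than kubelet's real driver-call.go types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a simplified stand-in for the JSON object FlexVolume
    // drivers must print to stdout, at minimum {"status": "Success"}.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // callDriver execs "<driver> init" and decodes the JSON reply. A missing
    // binary leaves stdout empty, and json.Unmarshal on empty input fails
    // with the "unexpected end of JSON input" seen throughout the log.
    func callDriver(driverPath string) (*driverStatus, error) {
        out, err := exec.Command(driverPath, "init").CombinedOutput()
        if err != nil {
            // kubelet's exec wrapper reports this as "driver call failed:
            // ... executable file not found in $PATH".
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }
        var st driverStatus
        if jerr := json.Unmarshal(out, &st); jerr != nil {
            return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, jerr)
        }
        return &st, nil
    }

    func main() {
        // The vendor~driver path from the log; no such binary exists here.
        if _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
            fmt.Println(err)
        }
    }

Calico's flexvol-driver init container, started further down in this log, appears to exist precisely to copy that uds binary into the plugin directory.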
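The nestedpendingoperations messages show kubelet's per-volume retry backoff: after a failed MountVolume.SetUp, no retry is permitted until lastError plus durationBeforeRetry, which starts at 500ms and doubles on each consecutive failure up to a cap (kubelet's exponentialbackoff package caps at roughly 2m2s; that value is an assumption here, not taken from this log). A minimal sketch of the policy:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff mirrors the policy visible in the log: first retry 500ms after
    // a failure, doubling per further failure, capped (assumed ~2m2s).
    type backoff struct {
        duration  time.Duration
        lastError time.Time
    }

    func (b *backoff) update(now time.Time) {
        const initial = 500 * time.Millisecond
        const maxWait = 2*time.Minute + 2*time.Second // assumed kubelet default
        if b.duration == 0 {
            b.duration = initial
        } else {
            b.duration *= 2
            if b.duration > maxWait {
                b.duration = maxWait
            }
        }
        b.lastError = now
    }

    // safeToRetry is the gate whose refusal is logged as
    // "No retries permitted until <lastError + duration>".
    func (b *backoff) safeToRetry(now time.Time) bool {
        return now.After(b.lastError.Add(b.duration))
    }

    func main() {
        var b backoff
        now := time.Now()
        for i := 1; i <= 4; i++ {
            b.update(now) // pretend MountVolume.SetUp failed again
            fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
                i, now.Add(b.duration).Format(time.RFC3339Nano), b.duration)
        }
    }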
Error: unexpected end of JSON input" May 13 23:53:31.552665 kubelet[2611]: E0513 23:53:31.552497 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.552665 kubelet[2611]: W0513 23:53:31.552519 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.552665 kubelet[2611]: E0513 23:53:31.552532 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.552842 kubelet[2611]: E0513 23:53:31.552831 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.552894 kubelet[2611]: W0513 23:53:31.552885 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.552949 kubelet[2611]: E0513 23:53:31.552940 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.653760 kubelet[2611]: E0513 23:53:31.653709 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.653760 kubelet[2611]: W0513 23:53:31.653749 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.653940 kubelet[2611]: E0513 23:53:31.653772 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.654019 kubelet[2611]: E0513 23:53:31.654006 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.654019 kubelet[2611]: W0513 23:53:31.654017 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.654137 kubelet[2611]: E0513 23:53:31.654030 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.654231 kubelet[2611]: E0513 23:53:31.654215 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.654231 kubelet[2611]: W0513 23:53:31.654226 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.654293 kubelet[2611]: E0513 23:53:31.654235 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:31.755768 kubelet[2611]: E0513 23:53:31.755653 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.756122 kubelet[2611]: W0513 23:53:31.755897 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.756122 kubelet[2611]: E0513 23:53:31.755926 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.756376 kubelet[2611]: E0513 23:53:31.756245 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.756376 kubelet[2611]: W0513 23:53:31.756262 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.756376 kubelet[2611]: E0513 23:53:31.756273 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.756668 kubelet[2611]: E0513 23:53:31.756589 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.756668 kubelet[2611]: W0513 23:53:31.756601 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.756668 kubelet[2611]: E0513 23:53:31.756615 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.857846 kubelet[2611]: E0513 23:53:31.857635 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.857846 kubelet[2611]: W0513 23:53:31.857657 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.857846 kubelet[2611]: E0513 23:53:31.857677 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.858102 kubelet[2611]: E0513 23:53:31.858088 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.858217 kubelet[2611]: W0513 23:53:31.858150 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.858217 kubelet[2611]: E0513 23:53:31.858165 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:31.858566 kubelet[2611]: E0513 23:53:31.858507 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.858566 kubelet[2611]: W0513 23:53:31.858519 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.858566 kubelet[2611]: E0513 23:53:31.858531 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.959043 kubelet[2611]: E0513 23:53:31.959020 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.959336 kubelet[2611]: W0513 23:53:31.959184 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.959336 kubelet[2611]: E0513 23:53:31.959209 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.959514 kubelet[2611]: E0513 23:53:31.959504 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.959640 kubelet[2611]: W0513 23:53:31.959557 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.959640 kubelet[2611]: E0513 23:53:31.959571 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.959899 kubelet[2611]: E0513 23:53:31.959888 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.960066 kubelet[2611]: W0513 23:53:31.959969 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.960066 kubelet[2611]: E0513 23:53:31.959991 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.960248 kubelet[2611]: E0513 23:53:31.960221 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.960248 kubelet[2611]: W0513 23:53:31.960243 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.960325 kubelet[2611]: E0513 23:53:31.960264 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:31.960441 kubelet[2611]: E0513 23:53:31.960431 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.960475 kubelet[2611]: W0513 23:53:31.960443 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.960475 kubelet[2611]: E0513 23:53:31.960461 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.960619 kubelet[2611]: E0513 23:53:31.960609 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.960619 kubelet[2611]: W0513 23:53:31.960618 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.960682 kubelet[2611]: E0513 23:53:31.960630 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.960831 kubelet[2611]: E0513 23:53:31.960821 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.960831 kubelet[2611]: W0513 23:53:31.960831 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.960902 kubelet[2611]: E0513 23:53:31.960845 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.961247 kubelet[2611]: E0513 23:53:31.961134 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.961247 kubelet[2611]: W0513 23:53:31.961163 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.961247 kubelet[2611]: E0513 23:53:31.961176 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.961460 kubelet[2611]: E0513 23:53:31.961450 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.961587 kubelet[2611]: W0513 23:53:31.961509 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.961587 kubelet[2611]: E0513 23:53:31.961536 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:31.961969 kubelet[2611]: E0513 23:53:31.961837 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.961969 kubelet[2611]: W0513 23:53:31.961848 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.961969 kubelet[2611]: E0513 23:53:31.961863 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.962161 kubelet[2611]: E0513 23:53:31.962150 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.962337 kubelet[2611]: W0513 23:53:31.962205 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.962337 kubelet[2611]: E0513 23:53:31.962227 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.962511 kubelet[2611]: E0513 23:53:31.962493 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.962511 kubelet[2611]: W0513 23:53:31.962508 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.962573 kubelet[2611]: E0513 23:53:31.962520 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:31.967913 kubelet[2611]: E0513 23:53:31.967892 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:31.967913 kubelet[2611]: W0513 23:53:31.967908 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:31.968017 kubelet[2611]: E0513 23:53:31.967925 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:32.061028 kubelet[2611]: E0513 23:53:32.060997 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:32.061028 kubelet[2611]: W0513 23:53:32.061016 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:32.061028 kubelet[2611]: E0513 23:53:32.061036 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:32.061369 kubelet[2611]: E0513 23:53:32.061356 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:32.061404 kubelet[2611]: W0513 23:53:32.061369 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:32.061404 kubelet[2611]: E0513 23:53:32.061382 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:32.061558 kubelet[2611]: E0513 23:53:32.061548 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:32.061558 kubelet[2611]: W0513 23:53:32.061557 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:32.061619 kubelet[2611]: E0513 23:53:32.061566 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:32.061727 kubelet[2611]: E0513 23:53:32.061709 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:32.061770 kubelet[2611]: W0513 23:53:32.061739 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:32.061770 kubelet[2611]: E0513 23:53:32.061749 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:32.061940 kubelet[2611]: E0513 23:53:32.061930 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:32.061940 kubelet[2611]: W0513 23:53:32.061939 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:32.061995 kubelet[2611]: E0513 23:53:32.061948 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:32.062773 kubelet[2611]: E0513 23:53:32.062747 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:32.062773 kubelet[2611]: W0513 23:53:32.062761 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:32.062885 kubelet[2611]: E0513 23:53:32.062774 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:32.086784 kubelet[2611]: E0513 23:53:32.085894 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:32.087083 containerd[1481]: time="2025-05-13T23:53:32.087035392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656bd56cbc-4zp45,Uid:af72bf5b-9236-44fb-bf62-a6b5eabc2092,Namespace:calico-system,Attempt:0,}" May 13 23:53:32.105144 containerd[1481]: time="2025-05-13T23:53:32.104868243Z" level=info msg="connecting to shim 5d2d42b3cce24251c35e0e36b162eca305a354f27643fb1591bbd383951be05a" address="unix:///run/containerd/s/d4ec9161a55d1b1c69a0667bd331bddf1710ebdc22685d0a2b6a438ae1d753c4" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:32.133831 systemd[1]: Started cri-containerd-5d2d42b3cce24251c35e0e36b162eca305a354f27643fb1591bbd383951be05a.scope - libcontainer container 5d2d42b3cce24251c35e0e36b162eca305a354f27643fb1591bbd383951be05a. May 13 23:53:32.188401 containerd[1481]: time="2025-05-13T23:53:32.188362831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656bd56cbc-4zp45,Uid:af72bf5b-9236-44fb-bf62-a6b5eabc2092,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d2d42b3cce24251c35e0e36b162eca305a354f27643fb1591bbd383951be05a\"" May 13 23:53:32.189837 kubelet[2611]: E0513 23:53:32.189672 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:32.192477 containerd[1481]: time="2025-05-13T23:53:32.192398474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:53:32.200915 kubelet[2611]: E0513 23:53:32.200730 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:32.201561 containerd[1481]: time="2025-05-13T23:53:32.201266616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rjl5s,Uid:4a2b4761-602f-414d-86b7-5417cd60ec35,Namespace:calico-system,Attempt:0,}" May 13 23:53:32.207067 sshd[2831]: PAM: Permission denied for root from 218.92.0.154 May 13 23:53:32.224235 containerd[1481]: time="2025-05-13T23:53:32.223900257Z" level=info msg="connecting to shim b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5" address="unix:///run/containerd/s/17fb698955cba68fecb2498067b077871e33457de703876483d94d413f76300f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:32.250160 systemd[1]: Started cri-containerd-b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5.scope - libcontainer container b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5. 
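The containerd lines above trace the CRI call path for pod startup: kubelet sends RunPodSandbox over gRPC to containerd, containerd spawns a shim reachable at the unix ttrpc address shown ("connecting to shim ..."), systemd tracks the container in a transient cri-containerd-<id>.scope unit, and the returned sandbox id is what the later CreateContainer/StartContainer calls reference. A minimal client-side sketch against the published CRI API (k8s.io/cri-api); the default containerd socket path is an assumption, the metadata mirrors the typha pod above, and error handling is trimmed:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI endpoint (default path assumed); the per-shim
        // sockets under /run/containerd/s/... in the log are internal to
        // containerd and are not dialed by CRI clients.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                // Matches the PodSandboxMetadata printed for the typha pod.
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-typha-656bd56cbc-4zp45",
                    Uid:       "af72bf5b-9236-44fb-bf62-a6b5eabc2092",
                    Namespace: "calico-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        // Later CreateContainer/StartContainer calls carry this id, e.g. the
        // 5d2d42b3... sandbox above.
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }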
May 13 23:53:32.287999 containerd[1481]: time="2025-05-13T23:53:32.287397008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rjl5s,Uid:4a2b4761-602f-414d-86b7-5417cd60ec35,Namespace:calico-system,Attempt:0,} returns sandbox id \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\"" May 13 23:53:32.288905 kubelet[2611]: E0513 23:53:32.288864 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:32.344378 sshd[2831]: Received disconnect from 218.92.0.154 port 45431:11: [preauth] May 13 23:53:32.344378 sshd[2831]: Disconnected from authenticating user root 218.92.0.154 port 45431 [preauth] May 13 23:53:32.347371 systemd[1]: sshd@7-137.184.15.248:22-218.92.0.154:45431.service: Deactivated successfully. May 13 23:53:32.668140 kubelet[2611]: E0513 23:53:32.667996 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txj64" podUID="eeda7aec-b8fa-4b85-baa0-a818865b60ee" May 13 23:53:33.293660 update_engine[1465]: I20250513 23:53:33.292791 1465 update_attempter.cc:509] Updating boot flags... May 13 23:53:33.365832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3184) May 13 23:53:33.471899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3183) May 13 23:53:34.070780 containerd[1481]: time="2025-05-13T23:53:34.070444488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:34.071572 containerd[1481]: time="2025-05-13T23:53:34.071513095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 23:53:34.071914 containerd[1481]: time="2025-05-13T23:53:34.071878618Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:34.074396 containerd[1481]: time="2025-05-13T23:53:34.074351710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:34.074961 containerd[1481]: time="2025-05-13T23:53:34.074929584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.882499291s" May 13 23:53:34.074961 containerd[1481]: time="2025-05-13T23:53:34.074957578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 23:53:34.076389 containerd[1481]: time="2025-05-13T23:53:34.076358136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:53:34.094748 containerd[1481]: time="2025-05-13T23:53:34.094523666Z" level=info 
msg="CreateContainer within sandbox \"5d2d42b3cce24251c35e0e36b162eca305a354f27643fb1591bbd383951be05a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:53:34.099286 containerd[1481]: time="2025-05-13T23:53:34.098935345Z" level=info msg="Container ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:34.116671 containerd[1481]: time="2025-05-13T23:53:34.116614147Z" level=info msg="CreateContainer within sandbox \"5d2d42b3cce24251c35e0e36b162eca305a354f27643fb1591bbd383951be05a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2\"" May 13 23:53:34.117763 containerd[1481]: time="2025-05-13T23:53:34.117470226Z" level=info msg="StartContainer for \"ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2\"" May 13 23:53:34.118960 containerd[1481]: time="2025-05-13T23:53:34.118900386Z" level=info msg="connecting to shim ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2" address="unix:///run/containerd/s/d4ec9161a55d1b1c69a0667bd331bddf1710ebdc22685d0a2b6a438ae1d753c4" protocol=ttrpc version=3 May 13 23:53:34.149002 systemd[1]: Started cri-containerd-ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2.scope - libcontainer container ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2. May 13 23:53:34.217453 containerd[1481]: time="2025-05-13T23:53:34.217331086Z" level=info msg="StartContainer for \"ae10987cdca7cbf4a148e8ce1105153dfa0eabbbb689e0d055659e247251e4b2\" returns successfully" May 13 23:53:34.668491 kubelet[2611]: E0513 23:53:34.668143 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txj64" podUID="eeda7aec-b8fa-4b85-baa0-a818865b60ee" May 13 23:53:34.725730 kubelet[2611]: E0513 23:53:34.725685 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:34.739908 kubelet[2611]: I0513 23:53:34.739683 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-656bd56cbc-4zp45" podStartSLOduration=2.855499373 podStartE2EDuration="4.739639942s" podCreationTimestamp="2025-05-13 23:53:30 +0000 UTC" firstStartedPulling="2025-05-13 23:53:32.191801826 +0000 UTC m=+13.645511492" lastFinishedPulling="2025-05-13 23:53:34.075942407 +0000 UTC m=+15.529652061" observedRunningTime="2025-05-13 23:53:34.73843723 +0000 UTC m=+16.192146905" watchObservedRunningTime="2025-05-13 23:53:34.739639942 +0000 UTC m=+16.193349617" May 13 23:53:34.742238 kubelet[2611]: E0513 23:53:34.742207 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:34.742238 kubelet[2611]: W0513 23:53:34.742229 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:34.742368 kubelet[2611]: E0513 23:53:34.742260 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 23:53:34.742687 kubelet[2611]: [the FlexVolume error triplet repeated verbatim 29 more times through 23:53:34.783]
May 13 23:53:34.784495 kubelet[2611]: E0513 23:53:34.784273 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 23:53:34.784495 kubelet[2611]: W0513 23:53:34.784286 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 23:53:34.784495 kubelet[2611]: E0513 23:53:34.784302 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:53:34.785311 kubelet[2611]: E0513 23:53:34.785147 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:34.785311 kubelet[2611]: W0513 23:53:34.785211 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:34.785311 kubelet[2611]: E0513 23:53:34.785230 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:34.785490 kubelet[2611]: E0513 23:53:34.785477 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:53:34.785490 kubelet[2611]: W0513 23:53:34.785489 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:53:34.785553 kubelet[2611]: E0513 23:53:34.785519 2611 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:53:35.387784 containerd[1481]: time="2025-05-13T23:53:35.387745444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:35.389802 containerd[1481]: time="2025-05-13T23:53:35.389744228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 23:53:35.390377 containerd[1481]: time="2025-05-13T23:53:35.390351457Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:35.392554 containerd[1481]: time="2025-05-13T23:53:35.392098967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:35.394744 containerd[1481]: time="2025-05-13T23:53:35.393940642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.317532506s" May 13 23:53:35.394744 containerd[1481]: time="2025-05-13T23:53:35.393985952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 23:53:35.401933 containerd[1481]: time="2025-05-13T23:53:35.401896006Z" level=info msg="CreateContainer within sandbox \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:53:35.409756 containerd[1481]: time="2025-05-13T23:53:35.408036898Z" level=info msg="Container dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42: 
CDI devices from CRI Config.CDIDevices: []" May 13 23:53:35.416029 containerd[1481]: time="2025-05-13T23:53:35.415939854Z" level=info msg="CreateContainer within sandbox \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\"" May 13 23:53:35.416736 containerd[1481]: time="2025-05-13T23:53:35.416688803Z" level=info msg="StartContainer for \"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\"" May 13 23:53:35.417991 containerd[1481]: time="2025-05-13T23:53:35.417966137Z" level=info msg="connecting to shim dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42" address="unix:///run/containerd/s/17fb698955cba68fecb2498067b077871e33457de703876483d94d413f76300f" protocol=ttrpc version=3 May 13 23:53:35.446968 systemd[1]: Started cri-containerd-dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42.scope - libcontainer container dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42. May 13 23:53:35.491272 containerd[1481]: time="2025-05-13T23:53:35.491198863Z" level=info msg="StartContainer for \"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\" returns successfully" May 13 23:53:35.500679 systemd[1]: cri-containerd-dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42.scope: Deactivated successfully. May 13 23:53:35.504743 containerd[1481]: time="2025-05-13T23:53:35.504492805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\" id:\"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\" pid:3283 exited_at:{seconds:1747180415 nanos:503238998}" May 13 23:53:35.506639 containerd[1481]: time="2025-05-13T23:53:35.506603918Z" level=info msg="received exit event container_id:\"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\" id:\"dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42\" pid:3283 exited_at:{seconds:1747180415 nanos:503238998}" May 13 23:53:35.530879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbccf9663a34804b8076381226ebd49514575ed688b3f29ae0513e37b390ba42-rootfs.mount: Deactivated successfully. 
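The kubelet errors above come from its FlexVolume prober: it executes `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with the `init` argument, captures the output, and tries to decode it as JSON. Because the binary does not exist, the captured output is empty, and decoding an empty string is exactly what produces "unexpected end of JSON input". A minimal Go sketch of that call pattern (the `driverStatus` struct is a hypothetical subset of kubelet's real reply type, not its actual definition):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the shape of a FlexVolume driver reply
// (hypothetical subset; kubelet's real type carries more fields).
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver runs `<driver> init` and decodes whatever it printed,
// even if the exec itself failed -- mirroring how the log shows both
// a driver-call failure and an unmarshal failure per attempt.
func callDriver(path string) (*driverStatus, error) {
	out, _ := exec.Command(path, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With a missing binary, out is empty and this reproduces the
		// logged error: "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```

The repeated pairs of log lines reflect a single root cause: the `nodeagent~uds` directory exists under the plugin path, so the prober keeps retrying it, but the `uds` executable inside it was never installed.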
May 13 23:53:35.730039 kubelet[2611]: E0513 23:53:35.729931 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:35.733146 kubelet[2611]: I0513 23:53:35.732126 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:53:35.733146 kubelet[2611]: E0513 23:53:35.732431 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:35.733747 containerd[1481]: time="2025-05-13T23:53:35.733698926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:53:36.667835 kubelet[2611]: E0513 23:53:36.667787 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txj64" podUID="eeda7aec-b8fa-4b85-baa0-a818865b60ee" May 13 23:53:38.670172 kubelet[2611]: E0513 23:53:38.670122 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txj64" podUID="eeda7aec-b8fa-4b85-baa0-a818865b60ee" May 13 23:53:38.893860 containerd[1481]: time="2025-05-13T23:53:38.893818802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:38.894925 containerd[1481]: time="2025-05-13T23:53:38.894843735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 23:53:38.895416 containerd[1481]: time="2025-05-13T23:53:38.895388814Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:38.920364 containerd[1481]: time="2025-05-13T23:53:38.920189601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:38.920908 containerd[1481]: time="2025-05-13T23:53:38.920772228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.187022129s" May 13 23:53:38.920908 containerd[1481]: time="2025-05-13T23:53:38.920807384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 23:53:38.924784 containerd[1481]: time="2025-05-13T23:53:38.922886544Z" level=info msg="CreateContainer within sandbox \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:53:38.932384 containerd[1481]: time="2025-05-13T23:53:38.932355448Z" level=info msg="Container 
db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:38.954908 containerd[1481]: time="2025-05-13T23:53:38.954878467Z" level=info msg="CreateContainer within sandbox \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\"" May 13 23:53:38.956404 containerd[1481]: time="2025-05-13T23:53:38.956379785Z" level=info msg="StartContainer for \"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\"" May 13 23:53:38.962062 containerd[1481]: time="2025-05-13T23:53:38.962028688Z" level=info msg="connecting to shim db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d" address="unix:///run/containerd/s/17fb698955cba68fecb2498067b077871e33457de703876483d94d413f76300f" protocol=ttrpc version=3 May 13 23:53:38.988865 systemd[1]: Started cri-containerd-db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d.scope - libcontainer container db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d. May 13 23:53:39.035873 containerd[1481]: time="2025-05-13T23:53:39.035840061Z" level=info msg="StartContainer for \"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\" returns successfully" May 13 23:53:39.581317 systemd[1]: cri-containerd-db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d.scope: Deactivated successfully. May 13 23:53:39.581565 systemd[1]: cri-containerd-db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d.scope: Consumed 460ms CPU time, 153.6M memory peak, 408K read from disk, 154M written to disk. May 13 23:53:39.588397 containerd[1481]: time="2025-05-13T23:53:39.585923918Z" level=info msg="received exit event container_id:\"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\" id:\"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\" pid:3338 exited_at:{seconds:1747180419 nanos:583624806}" May 13 23:53:39.615836 containerd[1481]: time="2025-05-13T23:53:39.615791980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\" id:\"db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d\" pid:3338 exited_at:{seconds:1747180419 nanos:583624806}" May 13 23:53:39.623673 kubelet[2611]: I0513 23:53:39.623057 2611 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:53:39.679149 systemd[1]: Created slice kubepods-burstable-podba90afc3_8e3b_41ca_9eb1_6ce910226ab4.slice - libcontainer container kubepods-burstable-podba90afc3_8e3b_41ca_9eb1_6ce910226ab4.slice. May 13 23:53:39.689251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db815250e32ae2d67702a4771fde1bd6dc829ba5c89290afb63e1252fd60fd0d-rootfs.mount: Deactivated successfully. May 13 23:53:39.699977 systemd[1]: Created slice kubepods-burstable-pod56f97471_bd9f_40e9_998f_a983d4a730bf.slice - libcontainer container kubepods-burstable-pod56f97471_bd9f_40e9_998f_a983d4a730bf.slice. May 13 23:53:39.710448 systemd[1]: Created slice kubepods-besteffort-pod19b8d6ab_9762_40fa_9455_339aa558cf60.slice - libcontainer container kubepods-besteffort-pod19b8d6ab_9762_40fa_9455_339aa558cf60.slice. 
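The install-cni container lifecycle above is fully timestamped: containerd logs "StartContainer ... returns successfully" at 23:53:39.035 and later emits a TaskExit event whose `exited_at` is a protobuf-style `{seconds, nanos}` pair. Converting that pair back to wall-clock time and differencing against the start gives a lifetime of roughly 548 ms, consistent with the 460 ms of CPU time systemd accounts to the scope. A small sketch of the conversion:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1747180419 nanos:583624806} from the TaskExit event.
	exited := time.Unix(1747180419, 583624806).UTC()

	// Wall-clock time of the "StartContainer ... returns successfully" entry.
	started, err := time.Parse(time.RFC3339Nano, "2025-05-13T23:53:39.035840061Z")
	if err != nil {
		panic(err)
	}

	fmt.Println("exited at:", exited.Format(time.RFC3339Nano)) // 2025-05-13T23:53:39.583624806Z
	fmt.Println("lifetime: ", exited.Sub(started))             // ~547.784745ms
}
```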
May 13 23:53:39.719396 kubelet[2611]: I0513 23:53:39.719366 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19b8d6ab-9762-40fa-9455-339aa558cf60-tigera-ca-bundle\") pod \"calico-kube-controllers-5d9c767d7f-6ns8j\" (UID: \"19b8d6ab-9762-40fa-9455-339aa558cf60\") " pod="calico-system/calico-kube-controllers-5d9c767d7f-6ns8j" May 13 23:53:39.719769 kubelet[2611]: I0513 23:53:39.719401 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tvfz\" (UniqueName: \"kubernetes.io/projected/ba90afc3-8e3b-41ca-9eb1-6ce910226ab4-kube-api-access-2tvfz\") pod \"coredns-668d6bf9bc-nf6lk\" (UID: \"ba90afc3-8e3b-41ca-9eb1-6ce910226ab4\") " pod="kube-system/coredns-668d6bf9bc-nf6lk" May 13 23:53:39.719769 kubelet[2611]: I0513 23:53:39.719422 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba90afc3-8e3b-41ca-9eb1-6ce910226ab4-config-volume\") pod \"coredns-668d6bf9bc-nf6lk\" (UID: \"ba90afc3-8e3b-41ca-9eb1-6ce910226ab4\") " pod="kube-system/coredns-668d6bf9bc-nf6lk" May 13 23:53:39.719769 kubelet[2611]: I0513 23:53:39.719443 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56f97471-bd9f-40e9-998f-a983d4a730bf-config-volume\") pod \"coredns-668d6bf9bc-426gp\" (UID: \"56f97471-bd9f-40e9-998f-a983d4a730bf\") " pod="kube-system/coredns-668d6bf9bc-426gp" May 13 23:53:39.719769 kubelet[2611]: I0513 23:53:39.719465 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5d7\" (UniqueName: \"kubernetes.io/projected/56f97471-bd9f-40e9-998f-a983d4a730bf-kube-api-access-hl5d7\") pod \"coredns-668d6bf9bc-426gp\" (UID: \"56f97471-bd9f-40e9-998f-a983d4a730bf\") " pod="kube-system/coredns-668d6bf9bc-426gp" May 13 23:53:39.719769 kubelet[2611]: I0513 23:53:39.719484 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j75g\" (UniqueName: \"kubernetes.io/projected/19b8d6ab-9762-40fa-9455-339aa558cf60-kube-api-access-8j75g\") pod \"calico-kube-controllers-5d9c767d7f-6ns8j\" (UID: \"19b8d6ab-9762-40fa-9455-339aa558cf60\") " pod="calico-system/calico-kube-controllers-5d9c767d7f-6ns8j" May 13 23:53:39.720915 systemd[1]: Created slice kubepods-besteffort-podc3f79803_0dc4_48de_9458_440ace3b8ea8.slice - libcontainer container kubepods-besteffort-podc3f79803_0dc4_48de_9458_440ace3b8ea8.slice. May 13 23:53:39.728975 systemd[1]: Created slice kubepods-besteffort-pod3db3c3a6_be47_42f1_b5d6_89368742f455.slice - libcontainer container kubepods-besteffort-pod3db3c3a6_be47_42f1_b5d6_89368742f455.slice. 
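Each reconciler entry above identifies a volume by a UniqueName such as `kubernetes.io/configmap/ba90afc3-8e3b-41ca-9eb1-6ce910226ab4-config-volume`: the plugin name, then the owning pod's UID, then the volume name. Since the pod UIDs in these entries are fixed-width 36-character UUIDs, the string can be split mechanically; the sketch below does so, but the fixed-width assumption is a heuristic of mine, not a kubelet contract:

```go
package main

import (
	"fmt"
	"strings"
)

// splitUniqueName decomposes "<plugin>/<podUID>-<volumeName>".
// It relies on pod UIDs being 36-character UUIDs, which holds for the
// entries in this log but is a heuristic, not an API guarantee.
func splitUniqueName(u string) (plugin, podUID, volume string, err error) {
	i := strings.LastIndex(u, "/")
	if i < 0 || len(u) < i+1+36+1 {
		return "", "", "", fmt.Errorf("unexpected UniqueName %q", u)
	}
	plugin = u[:i]
	rest := u[i+1:]
	return plugin, rest[:36], rest[37:], nil // rest[36] is the joining '-'
}

func main() {
	p, uid, vol, err := splitUniqueName(
		"kubernetes.io/configmap/ba90afc3-8e3b-41ca-9eb1-6ce910226ab4-config-volume")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)   // kubernetes.io/configmap
	fmt.Println(uid) // ba90afc3-8e3b-41ca-9eb1-6ce910226ab4
	fmt.Println(vol) // config-volume
}
```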
May 13 23:53:39.748840 kubelet[2611]: E0513 23:53:39.748812 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:39.750562 containerd[1481]: time="2025-05-13T23:53:39.750439924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:53:39.820237 kubelet[2611]: I0513 23:53:39.820013 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3db3c3a6-be47-42f1-b5d6-89368742f455-calico-apiserver-certs\") pod \"calico-apiserver-66d6f449cf-92jkk\" (UID: \"3db3c3a6-be47-42f1-b5d6-89368742f455\") " pod="calico-apiserver/calico-apiserver-66d6f449cf-92jkk" May 13 23:53:39.821091 kubelet[2611]: I0513 23:53:39.821069 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gckk\" (UniqueName: \"kubernetes.io/projected/c3f79803-0dc4-48de-9458-440ace3b8ea8-kube-api-access-9gckk\") pod \"calico-apiserver-66d6f449cf-7kxhq\" (UID: \"c3f79803-0dc4-48de-9458-440ace3b8ea8\") " pod="calico-apiserver/calico-apiserver-66d6f449cf-7kxhq" May 13 23:53:39.822739 kubelet[2611]: I0513 23:53:39.822161 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgfl\" (UniqueName: \"kubernetes.io/projected/3db3c3a6-be47-42f1-b5d6-89368742f455-kube-api-access-vjgfl\") pod \"calico-apiserver-66d6f449cf-92jkk\" (UID: \"3db3c3a6-be47-42f1-b5d6-89368742f455\") " pod="calico-apiserver/calico-apiserver-66d6f449cf-92jkk" May 13 23:53:39.822739 kubelet[2611]: I0513 23:53:39.822197 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c3f79803-0dc4-48de-9458-440ace3b8ea8-calico-apiserver-certs\") pod \"calico-apiserver-66d6f449cf-7kxhq\" (UID: \"c3f79803-0dc4-48de-9458-440ace3b8ea8\") " pod="calico-apiserver/calico-apiserver-66d6f449cf-7kxhq" May 13 23:53:39.986519 kubelet[2611]: E0513 23:53:39.986402 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:39.988208 containerd[1481]: time="2025-05-13T23:53:39.988153350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nf6lk,Uid:ba90afc3-8e3b-41ca-9eb1-6ce910226ab4,Namespace:kube-system,Attempt:0,}" May 13 23:53:40.006379 kubelet[2611]: E0513 23:53:40.005045 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:40.007395 containerd[1481]: time="2025-05-13T23:53:40.006857756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-426gp,Uid:56f97471-bd9f-40e9-998f-a983d4a730bf,Namespace:kube-system,Attempt:0,}" May 13 23:53:40.019197 containerd[1481]: time="2025-05-13T23:53:40.019167049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9c767d7f-6ns8j,Uid:19b8d6ab-9762-40fa-9455-339aa558cf60,Namespace:calico-system,Attempt:0,}" May 13 23:53:40.028094 containerd[1481]: time="2025-05-13T23:53:40.028064879Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-7kxhq,Uid:c3f79803-0dc4-48de-9458-440ace3b8ea8,Namespace:calico-apiserver,Attempt:0,}" May 13 23:53:40.049248 containerd[1481]: time="2025-05-13T23:53:40.049000192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-92jkk,Uid:3db3c3a6-be47-42f1-b5d6-89368742f455,Namespace:calico-apiserver,Attempt:0,}" May 13 23:53:40.213198 containerd[1481]: time="2025-05-13T23:53:40.213006070Z" level=error msg="Failed to destroy network for sandbox \"dea7d52fbf0203bc8638fb7e0ebd3567e3895af921730f1a12d56940a2acb9e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.233257 containerd[1481]: time="2025-05-13T23:53:40.216871949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-7kxhq,Uid:c3f79803-0dc4-48de-9458-440ace3b8ea8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea7d52fbf0203bc8638fb7e0ebd3567e3895af921730f1a12d56940a2acb9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.233520 containerd[1481]: time="2025-05-13T23:53:40.229662212Z" level=error msg="Failed to destroy network for sandbox \"46f0d1f82182f691552bfb47c3772191f628b76242b215bd4b23b32d8736acc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.234175 containerd[1481]: time="2025-05-13T23:53:40.234136670Z" level=error msg="Failed to destroy network for sandbox \"d1af206b830121128534be08fce5ea3f75b964bd59f843d8400403faad61314f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.235113 containerd[1481]: time="2025-05-13T23:53:40.234541484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nf6lk,Uid:ba90afc3-8e3b-41ca-9eb1-6ce910226ab4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f0d1f82182f691552bfb47c3772191f628b76242b215bd4b23b32d8736acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.241833 containerd[1481]: time="2025-05-13T23:53:40.240663514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-426gp,Uid:56f97471-bd9f-40e9-998f-a983d4a730bf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1af206b830121128534be08fce5ea3f75b964bd59f843d8400403faad61314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.241833 containerd[1481]: time="2025-05-13T23:53:40.240902211Z" level=error msg="Failed to destroy network for sandbox \"4aee45a1a57630313986bf960e77a6fd8af77591f9a0c55f7c0fd7b29e9fdbf5\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.241963 kubelet[2611]: E0513 23:53:40.241357 2611 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1af206b830121128534be08fce5ea3f75b964bd59f843d8400403faad61314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.241963 kubelet[2611]: E0513 23:53:40.241405 2611 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f0d1f82182f691552bfb47c3772191f628b76242b215bd4b23b32d8736acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.241963 kubelet[2611]: E0513 23:53:40.241441 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1af206b830121128534be08fce5ea3f75b964bd59f843d8400403faad61314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-426gp" May 13 23:53:40.241963 kubelet[2611]: E0513 23:53:40.241464 2611 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1af206b830121128534be08fce5ea3f75b964bd59f843d8400403faad61314f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-426gp" May 13 23:53:40.242763 kubelet[2611]: E0513 23:53:40.241367 2611 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea7d52fbf0203bc8638fb7e0ebd3567e3895af921730f1a12d56940a2acb9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.242763 kubelet[2611]: E0513 23:53:40.241483 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea7d52fbf0203bc8638fb7e0ebd3567e3895af921730f1a12d56940a2acb9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d6f449cf-7kxhq" May 13 23:53:40.242763 kubelet[2611]: E0513 23:53:40.241500 2611 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea7d52fbf0203bc8638fb7e0ebd3567e3895af921730f1a12d56940a2acb9e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d6f449cf-7kxhq" May 13 23:53:40.242857 kubelet[2611]: E0513 23:53:40.241513 2611 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-426gp_kube-system(56f97471-bd9f-40e9-998f-a983d4a730bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-426gp_kube-system(56f97471-bd9f-40e9-998f-a983d4a730bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1af206b830121128534be08fce5ea3f75b964bd59f843d8400403faad61314f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-426gp" podUID="56f97471-bd9f-40e9-998f-a983d4a730bf" May 13 23:53:40.242857 kubelet[2611]: E0513 23:53:40.241547 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d6f449cf-7kxhq_calico-apiserver(c3f79803-0dc4-48de-9458-440ace3b8ea8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d6f449cf-7kxhq_calico-apiserver(c3f79803-0dc4-48de-9458-440ace3b8ea8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dea7d52fbf0203bc8638fb7e0ebd3567e3895af921730f1a12d56940a2acb9e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d6f449cf-7kxhq" podUID="c3f79803-0dc4-48de-9458-440ace3b8ea8" May 13 23:53:40.242857 kubelet[2611]: E0513 23:53:40.241463 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f0d1f82182f691552bfb47c3772191f628b76242b215bd4b23b32d8736acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nf6lk" May 13 23:53:40.242980 kubelet[2611]: E0513 23:53:40.241577 2611 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f0d1f82182f691552bfb47c3772191f628b76242b215bd4b23b32d8736acc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nf6lk" May 13 23:53:40.242980 kubelet[2611]: E0513 23:53:40.241608 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nf6lk_kube-system(ba90afc3-8e3b-41ca-9eb1-6ce910226ab4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nf6lk_kube-system(ba90afc3-8e3b-41ca-9eb1-6ce910226ab4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46f0d1f82182f691552bfb47c3772191f628b76242b215bd4b23b32d8736acc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nf6lk" podUID="ba90afc3-8e3b-41ca-9eb1-6ce910226ab4" May 13 23:53:40.243634 containerd[1481]: time="2025-05-13T23:53:40.242588368Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5d9c767d7f-6ns8j,Uid:19b8d6ab-9762-40fa-9455-339aa558cf60,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aee45a1a57630313986bf960e77a6fd8af77591f9a0c55f7c0fd7b29e9fdbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.243634 containerd[1481]: time="2025-05-13T23:53:40.242644392Z" level=error msg="Failed to destroy network for sandbox \"9a370a6a0de935583ac1b54815fbddc1d30eb584ad6f92b4cfbd0c7ac378b37c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.243784 kubelet[2611]: E0513 23:53:40.243254 2611 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aee45a1a57630313986bf960e77a6fd8af77591f9a0c55f7c0fd7b29e9fdbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.243784 kubelet[2611]: E0513 23:53:40.243496 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aee45a1a57630313986bf960e77a6fd8af77591f9a0c55f7c0fd7b29e9fdbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9c767d7f-6ns8j" May 13 23:53:40.243784 kubelet[2611]: E0513 23:53:40.243536 2611 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aee45a1a57630313986bf960e77a6fd8af77591f9a0c55f7c0fd7b29e9fdbf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9c767d7f-6ns8j" May 13 23:53:40.244115 kubelet[2611]: E0513 23:53:40.243938 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d9c767d7f-6ns8j_calico-system(19b8d6ab-9762-40fa-9455-339aa558cf60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d9c767d7f-6ns8j_calico-system(19b8d6ab-9762-40fa-9455-339aa558cf60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aee45a1a57630313986bf960e77a6fd8af77591f9a0c55f7c0fd7b29e9fdbf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9c767d7f-6ns8j" podUID="19b8d6ab-9762-40fa-9455-339aa558cf60" May 13 23:53:40.244545 kubelet[2611]: E0513 23:53:40.244326 2611 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a370a6a0de935583ac1b54815fbddc1d30eb584ad6f92b4cfbd0c7ac378b37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.244545 kubelet[2611]: E0513 23:53:40.244356 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a370a6a0de935583ac1b54815fbddc1d30eb584ad6f92b4cfbd0c7ac378b37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d6f449cf-92jkk" May 13 23:53:40.244545 kubelet[2611]: E0513 23:53:40.244377 2611 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a370a6a0de935583ac1b54815fbddc1d30eb584ad6f92b4cfbd0c7ac378b37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d6f449cf-92jkk" May 13 23:53:40.244684 containerd[1481]: time="2025-05-13T23:53:40.244198001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-92jkk,Uid:3db3c3a6-be47-42f1-b5d6-89368742f455,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a370a6a0de935583ac1b54815fbddc1d30eb584ad6f92b4cfbd0c7ac378b37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.244748 kubelet[2611]: E0513 23:53:40.244410 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d6f449cf-92jkk_calico-apiserver(3db3c3a6-be47-42f1-b5d6-89368742f455)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d6f449cf-92jkk_calico-apiserver(3db3c3a6-be47-42f1-b5d6-89368742f455)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a370a6a0de935583ac1b54815fbddc1d30eb584ad6f92b4cfbd0c7ac378b37c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d6f449cf-92jkk" podUID="3db3c3a6-be47-42f1-b5d6-89368742f455" May 13 23:53:40.675605 systemd[1]: Created slice kubepods-besteffort-podeeda7aec_b8fa_4b85_baa0_a818865b60ee.slice - libcontainer container kubepods-besteffort-podeeda7aec_b8fa_4b85_baa0_a818865b60ee.slice. 
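Every sandbox failure in this stretch bottoms out in the same stat(2): the Calico CNI plugin needs /var/lib/calico/nodename, a file the calico/node container writes once it is up, and calico-node's image only started pulling at 23:53:39.750. Until the file appears, every CNI add and delete fails and kubelet requeues the pods. A sketch of an equivalent preflight check (the error text is copied from the log lines above, not from the plugin source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// calicoNodename mimics the plugin's hard dependency: no nodename
// file, no pod networking. The hint text matches the logged errors.
func calicoNodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat /var/lib/calico/nodename: no such file or directory: " +
			"check that the calico/node container is running and has mounted /var/lib/calico/")
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Println("CNI would fail:", err)
		return
	}
	fmt.Println("node:", name)
}
```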
May 13 23:53:40.679768 containerd[1481]: time="2025-05-13T23:53:40.679608816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txj64,Uid:eeda7aec-b8fa-4b85-baa0-a818865b60ee,Namespace:calico-system,Attempt:0,}" May 13 23:53:40.738390 containerd[1481]: time="2025-05-13T23:53:40.738318138Z" level=error msg="Failed to destroy network for sandbox \"9102347d8da7793328a514fd644b2d6f1f5e2230aca5c6c370bbdbb55601377a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.739238 containerd[1481]: time="2025-05-13T23:53:40.739193685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txj64,Uid:eeda7aec-b8fa-4b85-baa0-a818865b60ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9102347d8da7793328a514fd644b2d6f1f5e2230aca5c6c370bbdbb55601377a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.740010 kubelet[2611]: E0513 23:53:40.739476 2611 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9102347d8da7793328a514fd644b2d6f1f5e2230aca5c6c370bbdbb55601377a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:53:40.740010 kubelet[2611]: E0513 23:53:40.739564 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9102347d8da7793328a514fd644b2d6f1f5e2230aca5c6c370bbdbb55601377a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-txj64" May 13 23:53:40.740010 kubelet[2611]: E0513 23:53:40.739596 2611 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9102347d8da7793328a514fd644b2d6f1f5e2230aca5c6c370bbdbb55601377a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-txj64" May 13 23:53:40.741291 kubelet[2611]: E0513 23:53:40.739662 2611 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-txj64_calico-system(eeda7aec-b8fa-4b85-baa0-a818865b60ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-txj64_calico-system(eeda7aec-b8fa-4b85-baa0-a818865b60ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9102347d8da7793328a514fd644b2d6f1f5e2230aca5c6c370bbdbb55601377a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-txj64" podUID="eeda7aec-b8fa-4b85-baa0-a818865b60ee" May 13 23:53:40.936080 systemd[1]: run-netns-cni\x2d845d3cea\x2d7b5c\x2dd77d\x2d34b9\x2deb968ce07558.mount: Deactivated successfully. 
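The `run-netns-cni\x2d845d3cea\x2d...` mount units being cleaned up here are systemd-escaped filesystem paths: "/" is encoded as "-", so a literal "-" inside the path has to become "\x2d". A short Go sketch reversing the escapes that actually occur in these unit names:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd path escaping for the cases seen in
// this log: "\xNN" hex escapes first, then "-" as the path separator.
// (systemd also drops the leading "/" when forming unit names.)
func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 4
				continue
			}
		}
		if s[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(s[i])
		}
		i++
	}
	return b.String()
}

func main() {
	// The netns unit from the log, minus its ".mount" suffix.
	fmt.Println(unescapeUnit(`run-netns-cni\x2d845d3cea\x2d7b5c\x2dd77d\x2d34b9\x2deb968ce07558`))
	// -> run/netns/cni-845d3cea-7b5c-d77d-34b9-eb968ce07558
}
```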
May 13 23:53:40.936266 systemd[1]: run-netns-cni\x2ded6df5f1\x2d6d00\x2d9471\x2dd904\x2d9f945cb5798f.mount: Deactivated successfully. May 13 23:53:44.120229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302460425.mount: Deactivated successfully. May 13 23:53:44.160543 containerd[1481]: time="2025-05-13T23:53:44.156370649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:44.161568 containerd[1481]: time="2025-05-13T23:53:44.157785559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 23:53:44.162417 containerd[1481]: time="2025-05-13T23:53:44.162362092Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:44.163704 containerd[1481]: time="2025-05-13T23:53:44.162914309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.4124367s" May 13 23:53:44.163704 containerd[1481]: time="2025-05-13T23:53:44.162947164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 23:53:44.163704 containerd[1481]: time="2025-05-13T23:53:44.163299813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:44.191754 containerd[1481]: time="2025-05-13T23:53:44.191698058Z" level=info msg="CreateContainer within sandbox \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:53:44.201474 containerd[1481]: time="2025-05-13T23:53:44.201441381Z" level=info msg="Container 4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:44.212927 containerd[1481]: time="2025-05-13T23:53:44.212849401Z" level=info msg="CreateContainer within sandbox \"b259ac46bb2ea574321be3eb8930f4ee5af41bf4e945e708a196f014c25b98b5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\"" May 13 23:53:44.213818 containerd[1481]: time="2025-05-13T23:53:44.213792748Z" level=info msg="StartContainer for \"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\"" May 13 23:53:44.215111 containerd[1481]: time="2025-05-13T23:53:44.215085137Z" level=info msg="connecting to shim 4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951" address="unix:///run/containerd/s/17fb698955cba68fecb2498067b077871e33457de703876483d94d413f76300f" protocol=ttrpc version=3 May 13 23:53:44.296402 systemd[1]: Started cri-containerd-4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951.scope - libcontainer container 4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951. 
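The calico/node pull above moved 144,068,748 bytes ("bytes read") in "4.4124367s", the gap between the PullImage request at 23:53:39.750 and completion at 23:53:44.163. The duration string is Go's own formatting, so it round-trips through time.ParseDuration; the effective rate works out to roughly 32.7 MB/s:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 144068748 // "bytes read" from the stop-pulling entry
	d, err := time.ParseDuration("4.4124367s")
	if err != nil {
		panic(err)
	}
	rate := float64(bytesRead) / d.Seconds()
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20)) // 32.7 MB/s (31.1 MiB/s)
}
```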
May 13 23:53:44.353882 containerd[1481]: time="2025-05-13T23:53:44.352982301Z" level=info msg="StartContainer for \"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\" returns successfully" May 13 23:53:44.427741 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:53:44.428653 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 23:53:44.766039 kubelet[2611]: E0513 23:53:44.764954 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:44.805150 kubelet[2611]: I0513 23:53:44.805081 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rjl5s" podStartSLOduration=2.925862918 podStartE2EDuration="14.800642906s" podCreationTimestamp="2025-05-13 23:53:30 +0000 UTC" firstStartedPulling="2025-05-13 23:53:32.289324314 +0000 UTC m=+13.743033980" lastFinishedPulling="2025-05-13 23:53:44.164104311 +0000 UTC m=+25.617813968" observedRunningTime="2025-05-13 23:53:44.794583466 +0000 UTC m=+26.248293140" watchObservedRunningTime="2025-05-13 23:53:44.800642906 +0000 UTC m=+26.254352616" May 13 23:53:45.768362 kubelet[2611]: I0513 23:53:45.768322 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:53:45.769883 kubelet[2611]: E0513 23:53:45.769536 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:51.667977 containerd[1481]: time="2025-05-13T23:53:51.667880881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-92jkk,Uid:3db3c3a6-be47-42f1-b5d6-89368742f455,Namespace:calico-apiserver,Attempt:0,}" May 13 23:53:51.889899 systemd-networkd[1375]: cali632eadfe6f5: Link UP May 13 23:53:51.890076 systemd-networkd[1375]: cali632eadfe6f5: Gained carrier May 13 23:53:51.911775 containerd[1481]: 2025-05-13 23:53:51.698 [INFO][3826] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:53:51.911775 containerd[1481]: 2025-05-13 23:53:51.721 [INFO][3826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0 calico-apiserver-66d6f449cf- calico-apiserver 3db3c3a6-be47-42f1-b5d6-89368742f455 691 0 2025-05-13 23:53:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d6f449cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-c1d987daf9 calico-apiserver-66d6f449cf-92jkk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali632eadfe6f5 [] []}} ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-" May 13 23:53:51.911775 containerd[1481]: 2025-05-13 23:53:51.722 [INFO][3826] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" 
WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.911775 containerd[1481]: 2025-05-13 23:53:51.831 [INFO][3837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" HandleID="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.846 [INFO][3837] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" HandleID="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-c1d987daf9", "pod":"calico-apiserver-66d6f449cf-92jkk", "timestamp":"2025-05-13 23:53:51.83143392 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c1d987daf9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.846 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.847 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.847 [INFO][3837] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c1d987daf9' May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.850 [INFO][3837] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.856 [INFO][3837] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.860 [INFO][3837] ipam/ipam.go 489: Trying affinity for 192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.862 [INFO][3837] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912034 containerd[1481]: 2025-05-13 23:53:51.863 [INFO][3837] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.863 [INFO][3837] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.865 [INFO][3837] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6 May 13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.868 [INFO][3837] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" host="ci-4284.0.0-n-c1d987daf9" May 
13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.874 [INFO][3837] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.193/26] block=192.168.28.192/26 handle="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.875 [INFO][3837] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.193/26] handle="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.875 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:53:51.912269 containerd[1481]: 2025-05-13 23:53:51.875 [INFO][3837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.193/26] IPv6=[] ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" HandleID="k8s-pod-network.67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.912588 containerd[1481]: 2025-05-13 23:53:51.877 [INFO][3826] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0", GenerateName:"calico-apiserver-66d6f449cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"3db3c3a6-be47-42f1-b5d6-89368742f455", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d6f449cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"", Pod:"calico-apiserver-66d6f449cf-92jkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali632eadfe6f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:51.912653 containerd[1481]: 2025-05-13 23:53:51.877 [INFO][3826] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.193/32] ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.912653 containerd[1481]: 2025-05-13 23:53:51.877 [INFO][3826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali632eadfe6f5 
ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.912653 containerd[1481]: 2025-05-13 23:53:51.884 [INFO][3826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.913187 containerd[1481]: 2025-05-13 23:53:51.885 [INFO][3826] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0", GenerateName:"calico-apiserver-66d6f449cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"3db3c3a6-be47-42f1-b5d6-89368742f455", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d6f449cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6", Pod:"calico-apiserver-66d6f449cf-92jkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali632eadfe6f5", MAC:"62:84:e3:9e:2b:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:51.913294 containerd[1481]: 2025-05-13 23:53:51.907 [INFO][3826] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-92jkk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--92jkk-eth0" May 13 23:53:51.954817 containerd[1481]: time="2025-05-13T23:53:51.954312366Z" level=info msg="connecting to shim 67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6" address="unix:///run/containerd/s/ce030df4bc3893dd3e83e7244a64b63f76985e2cad20415073e79353a61dbd70" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:51.985511 systemd[1]: Started cri-containerd-67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6.scope - libcontainer container 67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6. 
May 13 23:53:52.036046 containerd[1481]: time="2025-05-13T23:53:52.035979930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-92jkk,Uid:3db3c3a6-be47-42f1-b5d6-89368742f455,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6\"" May 13 23:53:52.046595 containerd[1481]: time="2025-05-13T23:53:52.046383016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:53:52.199467 kubelet[2611]: I0513 23:53:52.199083 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:53:52.201674 kubelet[2611]: E0513 23:53:52.200525 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:52.669056 containerd[1481]: time="2025-05-13T23:53:52.668898685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-7kxhq,Uid:c3f79803-0dc4-48de-9458-440ace3b8ea8,Namespace:calico-apiserver,Attempt:0,}" May 13 23:53:52.782762 kubelet[2611]: E0513 23:53:52.782399 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:52.792126 systemd-networkd[1375]: calicb8a68e133b: Link UP May 13 23:53:52.792333 systemd-networkd[1375]: calicb8a68e133b: Gained carrier May 13 23:53:52.808319 containerd[1481]: 2025-05-13 23:53:52.702 [INFO][3946] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:53:52.808319 containerd[1481]: 2025-05-13 23:53:52.716 [INFO][3946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0 calico-apiserver-66d6f449cf- calico-apiserver c3f79803-0dc4-48de-9458-440ace3b8ea8 695 0 2025-05-13 23:53:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d6f449cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-c1d987daf9 calico-apiserver-66d6f449cf-7kxhq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb8a68e133b [] []}} ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-" May 13 23:53:52.808319 containerd[1481]: 2025-05-13 23:53:52.716 [INFO][3946] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.808319 containerd[1481]: 2025-05-13 23:53:52.748 [INFO][3958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" HandleID="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.808772 containerd[1481]: 
2025-05-13 23:53:52.758 [INFO][3958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" HandleID="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-c1d987daf9", "pod":"calico-apiserver-66d6f449cf-7kxhq", "timestamp":"2025-05-13 23:53:52.748852424 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c1d987daf9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.758 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.758 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.758 [INFO][3958] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c1d987daf9' May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.760 [INFO][3958] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.763 [INFO][3958] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.767 [INFO][3958] ipam/ipam.go 489: Trying affinity for 192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.769 [INFO][3958] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.808772 containerd[1481]: 2025-05-13 23:53:52.772 [INFO][3958] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.772 [INFO][3958] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.773 [INFO][3958] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9 May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.778 [INFO][3958] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.786 [INFO][3958] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.194/26] block=192.168.28.192/26 handle="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.786 [INFO][3958] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.194/26] handle="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" 
host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.787 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:53:52.809269 containerd[1481]: 2025-05-13 23:53:52.787 [INFO][3958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.194/26] IPv6=[] ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" HandleID="k8s-pod-network.ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.809497 containerd[1481]: 2025-05-13 23:53:52.789 [INFO][3946] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0", GenerateName:"calico-apiserver-66d6f449cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3f79803-0dc4-48de-9458-440ace3b8ea8", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d6f449cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"", Pod:"calico-apiserver-66d6f449cf-7kxhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb8a68e133b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:52.809580 containerd[1481]: 2025-05-13 23:53:52.789 [INFO][3946] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.194/32] ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.809580 containerd[1481]: 2025-05-13 23:53:52.789 [INFO][3946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb8a68e133b ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.809580 containerd[1481]: 2025-05-13 23:53:52.791 [INFO][3946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" 
WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.809787 containerd[1481]: 2025-05-13 23:53:52.791 [INFO][3946] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0", GenerateName:"calico-apiserver-66d6f449cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3f79803-0dc4-48de-9458-440ace3b8ea8", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d6f449cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9", Pod:"calico-apiserver-66d6f449cf-7kxhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb8a68e133b", MAC:"da:04:cc:0e:f9:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:52.809852 containerd[1481]: 2025-05-13 23:53:52.805 [INFO][3946] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" Namespace="calico-apiserver" Pod="calico-apiserver-66d6f449cf-7kxhq" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--apiserver--66d6f449cf--7kxhq-eth0" May 13 23:53:52.831383 containerd[1481]: time="2025-05-13T23:53:52.831202605Z" level=info msg="connecting to shim ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9" address="unix:///run/containerd/s/9197215a5e877264ddd455dbe910a36371378b5e7ae777edf6890581cc23014c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:52.860898 systemd[1]: Started cri-containerd-ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9.scope - libcontainer container ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9. 
May 13 23:53:52.907707 containerd[1481]: time="2025-05-13T23:53:52.907672186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d6f449cf-7kxhq,Uid:c3f79803-0dc4-48de-9458-440ace3b8ea8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9\"" May 13 23:53:53.201298 kernel: bpftool[4037]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:53:53.553294 systemd-networkd[1375]: cali632eadfe6f5: Gained IPv6LL May 13 23:53:53.563565 systemd-networkd[1375]: vxlan.calico: Link UP May 13 23:53:53.563573 systemd-networkd[1375]: vxlan.calico: Gained carrier May 13 23:53:54.371418 containerd[1481]: time="2025-05-13T23:53:54.371375824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:54.372426 containerd[1481]: time="2025-05-13T23:53:54.372368414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 23:53:54.372999 containerd[1481]: time="2025-05-13T23:53:54.372973915Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:54.374519 containerd[1481]: time="2025-05-13T23:53:54.374490226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:54.375778 containerd[1481]: time="2025-05-13T23:53:54.375739100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.329297707s" May 13 23:53:54.375830 containerd[1481]: time="2025-05-13T23:53:54.375782340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 23:53:54.379648 containerd[1481]: time="2025-05-13T23:53:54.378547953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:53:54.384748 containerd[1481]: time="2025-05-13T23:53:54.384684977Z" level=info msg="CreateContainer within sandbox \"67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:53:54.391615 containerd[1481]: time="2025-05-13T23:53:54.390944789Z" level=info msg="Container b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:54.395332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191529861.mount: Deactivated successfully. 
May 13 23:53:54.398917 containerd[1481]: time="2025-05-13T23:53:54.398887991Z" level=info msg="CreateContainer within sandbox \"67e9932ba8e5fdf023bb42fb451d28f471c351075cdba5d128cf2c9ac131d7b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49\"" May 13 23:53:54.401359 containerd[1481]: time="2025-05-13T23:53:54.401330568Z" level=info msg="StartContainer for \"b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49\"" May 13 23:53:54.402301 containerd[1481]: time="2025-05-13T23:53:54.402275605Z" level=info msg="connecting to shim b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49" address="unix:///run/containerd/s/ce030df4bc3893dd3e83e7244a64b63f76985e2cad20415073e79353a61dbd70" protocol=ttrpc version=3 May 13 23:53:54.428891 systemd[1]: Started cri-containerd-b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49.scope - libcontainer container b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49. May 13 23:53:54.479128 containerd[1481]: time="2025-05-13T23:53:54.478985969Z" level=info msg="StartContainer for \"b85d7a5eff714dcc0c173cdec01136fcaab776ecf9b3ecd740b5a43d14067c49\" returns successfully" May 13 23:53:54.511982 systemd-networkd[1375]: calicb8a68e133b: Gained IPv6LL May 13 23:53:54.669806 containerd[1481]: time="2025-05-13T23:53:54.668853831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txj64,Uid:eeda7aec-b8fa-4b85-baa0-a818865b60ee,Namespace:calico-system,Attempt:0,}" May 13 23:53:54.669972 kubelet[2611]: E0513 23:53:54.669282 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:54.670662 containerd[1481]: time="2025-05-13T23:53:54.670440090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9c767d7f-6ns8j,Uid:19b8d6ab-9762-40fa-9455-339aa558cf60,Namespace:calico-system,Attempt:0,}" May 13 23:53:54.671224 containerd[1481]: time="2025-05-13T23:53:54.670990557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-426gp,Uid:56f97471-bd9f-40e9-998f-a983d4a730bf,Namespace:kube-system,Attempt:0,}" May 13 23:53:54.880255 containerd[1481]: time="2025-05-13T23:53:54.880067747Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:54.882020 containerd[1481]: time="2025-05-13T23:53:54.881630967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 23:53:54.885634 containerd[1481]: time="2025-05-13T23:53:54.885591945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 507.016592ms" May 13 23:53:54.886381 containerd[1481]: time="2025-05-13T23:53:54.885779980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 23:53:54.896938 containerd[1481]: time="2025-05-13T23:53:54.896535965Z" level=info msg="CreateContainer 
within sandbox \"ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:53:54.935242 containerd[1481]: time="2025-05-13T23:53:54.935125137Z" level=info msg="Container 169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:54.954709 containerd[1481]: time="2025-05-13T23:53:54.954661920Z" level=info msg="CreateContainer within sandbox \"ee15c272603a7f33992bdc4b9e9cb29efdaf026c961c18f9afbaf66bb07d3ff9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12\"" May 13 23:53:54.958523 containerd[1481]: time="2025-05-13T23:53:54.958486131Z" level=info msg="StartContainer for \"169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12\"" May 13 23:53:54.965074 containerd[1481]: time="2025-05-13T23:53:54.965039388Z" level=info msg="connecting to shim 169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12" address="unix:///run/containerd/s/9197215a5e877264ddd455dbe910a36371378b5e7ae777edf6890581cc23014c" protocol=ttrpc version=3 May 13 23:53:55.001954 systemd[1]: Started cri-containerd-169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12.scope - libcontainer container 169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12. May 13 23:53:55.023053 systemd-networkd[1375]: cali717146a607c: Link UP May 13 23:53:55.025186 systemd-networkd[1375]: cali717146a607c: Gained carrier May 13 23:53:55.055999 kubelet[2611]: I0513 23:53:55.053512 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d6f449cf-92jkk" podStartSLOduration=22.707864829000002 podStartE2EDuration="25.048858173s" podCreationTimestamp="2025-05-13 23:53:30 +0000 UTC" firstStartedPulling="2025-05-13 23:53:52.03740494 +0000 UTC m=+33.491114593" lastFinishedPulling="2025-05-13 23:53:54.378398271 +0000 UTC m=+35.832107937" observedRunningTime="2025-05-13 23:53:54.868062511 +0000 UTC m=+36.321772186" watchObservedRunningTime="2025-05-13 23:53:55.048858173 +0000 UTC m=+36.502567847" May 13 23:53:55.058198 containerd[1481]: 2025-05-13 23:53:54.805 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0 csi-node-driver- calico-system eeda7aec-b8fa-4b85-baa0-a818865b60ee 603 0 2025-05-13 23:53:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-c1d987daf9 csi-node-driver-txj64 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali717146a607c [] []}} ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-" May 13 23:53:55.058198 containerd[1481]: 2025-05-13 23:53:54.805 [INFO][4163] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" 
WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.058198 containerd[1481]: 2025-05-13 23:53:54.908 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" HandleID="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Workload="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.937 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" HandleID="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Workload="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000515a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-c1d987daf9", "pod":"csi-node-driver-txj64", "timestamp":"2025-05-13 23:53:54.908916116 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c1d987daf9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.937 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.937 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.937 [INFO][4200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c1d987daf9' May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.941 [INFO][4200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.950 [INFO][4200] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.959 [INFO][4200] ipam/ipam.go 489: Trying affinity for 192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.965 [INFO][4200] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.058405 containerd[1481]: 2025-05-13 23:53:54.972 [INFO][4200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.059127 containerd[1481]: 2025-05-13 23:53:54.972 [INFO][4200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.059127 containerd[1481]: 2025-05-13 23:53:54.975 [INFO][4200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3 May 13 23:53:55.059127 containerd[1481]: 2025-05-13 23:53:54.981 [INFO][4200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.059127 containerd[1481]: 2025-05-13 
23:53:54.994 [INFO][4200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.195/26] block=192.168.28.192/26 handle="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.059127 containerd[1481]: 2025-05-13 23:53:54.994 [INFO][4200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.195/26] handle="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.059127 containerd[1481]: 2025-05-13 23:53:54.994 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:53:55.059127 containerd[1481]: 2025-05-13 23:53:54.995 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.195/26] IPv6=[] ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" HandleID="k8s-pod-network.a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Workload="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.059306 containerd[1481]: 2025-05-13 23:53:55.004 [INFO][4163] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeda7aec-b8fa-4b85-baa0-a818865b60ee", ResourceVersion:"603", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"", Pod:"csi-node-driver-txj64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali717146a607c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:55.059375 containerd[1481]: 2025-05-13 23:53:55.005 [INFO][4163] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.195/32] ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.059375 containerd[1481]: 2025-05-13 23:53:55.005 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali717146a607c ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" 
WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.059375 containerd[1481]: 2025-05-13 23:53:55.024 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.059486 containerd[1481]: 2025-05-13 23:53:55.026 [INFO][4163] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeda7aec-b8fa-4b85-baa0-a818865b60ee", ResourceVersion:"603", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3", Pod:"csi-node-driver-txj64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali717146a607c", MAC:"06:f1:b2:08:af:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:55.059549 containerd[1481]: 2025-05-13 23:53:55.051 [INFO][4163] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" Namespace="calico-system" Pod="csi-node-driver-txj64" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-csi--node--driver--txj64-eth0" May 13 23:53:55.094384 containerd[1481]: time="2025-05-13T23:53:55.093675553Z" level=info msg="connecting to shim a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3" address="unix:///run/containerd/s/5746a712fba51ce6fa228e515b191a078a9a0f4163765a7cf7866545e10a3e87" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:55.128315 systemd[1]: Started cri-containerd-a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3.scope - libcontainer container a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3. 
May 13 23:53:55.151947 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL May 13 23:53:55.180251 containerd[1481]: time="2025-05-13T23:53:55.180118493Z" level=info msg="StartContainer for \"169dd7972296899a044ff274db573921b94f1e70f290498ff155104296310e12\" returns successfully" May 13 23:53:55.209540 systemd-networkd[1375]: califcd23a0e5c3: Link UP May 13 23:53:55.214894 systemd-networkd[1375]: califcd23a0e5c3: Gained carrier May 13 23:53:55.252683 containerd[1481]: 2025-05-13 23:53:54.800 [INFO][4162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0 calico-kube-controllers-5d9c767d7f- calico-system 19b8d6ab-9762-40fa-9455-339aa558cf60 690 0 2025-05-13 23:53:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d9c767d7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-c1d987daf9 calico-kube-controllers-5d9c767d7f-6ns8j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califcd23a0e5c3 [] []}} ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-" May 13 23:53:55.252683 containerd[1481]: 2025-05-13 23:53:54.800 [INFO][4162] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.252683 containerd[1481]: 2025-05-13 23:53:54.917 [INFO][4194] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" HandleID="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:54.942 [INFO][4194] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" HandleID="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-c1d987daf9", "pod":"calico-kube-controllers-5d9c767d7f-6ns8j", "timestamp":"2025-05-13 23:53:54.917671007 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c1d987daf9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:54.943 [INFO][4194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:54.994 [INFO][4194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:54.994 [INFO][4194] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c1d987daf9' May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:55.043 [INFO][4194] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:55.139 [INFO][4194] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:55.153 [INFO][4194] ipam/ipam.go 489: Trying affinity for 192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:55.160 [INFO][4194] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.253294 containerd[1481]: 2025-05-13 23:53:55.165 [INFO][4194] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.165 [INFO][4194] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.169 [INFO][4194] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37 May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.175 [INFO][4194] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.182 [INFO][4194] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.196/26] block=192.168.28.192/26 handle="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.182 [INFO][4194] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.196/26] handle="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.182 [INFO][4194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:53:55.255796 containerd[1481]: 2025-05-13 23:53:55.183 [INFO][4194] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.196/26] IPv6=[] ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" HandleID="k8s-pod-network.161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Workload="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.255978 containerd[1481]: 2025-05-13 23:53:55.190 [INFO][4162] cni-plugin/k8s.go 386: Populated endpoint ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0", GenerateName:"calico-kube-controllers-5d9c767d7f-", Namespace:"calico-system", SelfLink:"", UID:"19b8d6ab-9762-40fa-9455-339aa558cf60", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9c767d7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"", Pod:"calico-kube-controllers-5d9c767d7f-6ns8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califcd23a0e5c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:55.256048 containerd[1481]: 2025-05-13 23:53:55.193 [INFO][4162] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.196/32] ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.256048 containerd[1481]: 2025-05-13 23:53:55.197 [INFO][4162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcd23a0e5c3 ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.256048 containerd[1481]: 2025-05-13 23:53:55.214 [INFO][4162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.256177 
containerd[1481]: 2025-05-13 23:53:55.215 [INFO][4162] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0", GenerateName:"calico-kube-controllers-5d9c767d7f-", Namespace:"calico-system", SelfLink:"", UID:"19b8d6ab-9762-40fa-9455-339aa558cf60", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9c767d7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37", Pod:"calico-kube-controllers-5d9c767d7f-6ns8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califcd23a0e5c3", MAC:"9a:2f:b9:60:10:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:55.256249 containerd[1481]: 2025-05-13 23:53:55.238 [INFO][4162] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" Namespace="calico-system" Pod="calico-kube-controllers-5d9c767d7f-6ns8j" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-calico--kube--controllers--5d9c767d7f--6ns8j-eth0" May 13 23:53:55.289227 containerd[1481]: time="2025-05-13T23:53:55.288086306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txj64,Uid:eeda7aec-b8fa-4b85-baa0-a818865b60ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3\"" May 13 23:53:55.292738 containerd[1481]: time="2025-05-13T23:53:55.292697295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:53:55.314272 systemd-networkd[1375]: calia9e72bd47eb: Link UP May 13 23:53:55.315504 systemd-networkd[1375]: calia9e72bd47eb: Gained carrier May 13 23:53:55.340267 containerd[1481]: time="2025-05-13T23:53:55.340211928Z" level=info msg="connecting to shim 161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37" address="unix:///run/containerd/s/a0236662ca90b4c6f75d5e29f1677ecb18ccdab8811dbd2c57bee82f1c282c81" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:55.356601 containerd[1481]: 2025-05-13 23:53:54.882 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0 coredns-668d6bf9bc- kube-system 56f97471-bd9f-40e9-998f-a983d4a730bf 693 0 2025-05-13 23:53:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-c1d987daf9 coredns-668d6bf9bc-426gp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia9e72bd47eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-" May 13 23:53:55.356601 containerd[1481]: 2025-05-13 23:53:54.883 [INFO][4176] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.356601 containerd[1481]: 2025-05-13 23:53:55.004 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" HandleID="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Workload="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.134 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" HandleID="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Workload="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-c1d987daf9", "pod":"coredns-668d6bf9bc-426gp", "timestamp":"2025-05-13 23:53:55.004088574 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c1d987daf9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.135 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.183 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.183 [INFO][4210] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c1d987daf9' May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.191 [INFO][4210] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.237 [INFO][4210] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.260 [INFO][4210] ipam/ipam.go 489: Trying affinity for 192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.267 [INFO][4210] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.356880 containerd[1481]: 2025-05-13 23:53:55.272 [INFO][4210] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.273 [INFO][4210] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.276 [INFO][4210] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29 May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.288 [INFO][4210] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.300 [INFO][4210] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.197/26] block=192.168.28.192/26 handle="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.300 [INFO][4210] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.197/26] handle="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.300 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:53:55.357121 containerd[1481]: 2025-05-13 23:53:55.301 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.197/26] IPv6=[] ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" HandleID="k8s-pod-network.94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Workload="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.357302 containerd[1481]: 2025-05-13 23:53:55.305 [INFO][4176] cni-plugin/k8s.go 386: Populated endpoint ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56f97471-bd9f-40e9-998f-a983d4a730bf", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"", Pod:"coredns-668d6bf9bc-426gp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9e72bd47eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:55.357302 containerd[1481]: 2025-05-13 23:53:55.306 [INFO][4176] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.197/32] ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.357302 containerd[1481]: 2025-05-13 23:53:55.306 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9e72bd47eb ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.357302 containerd[1481]: 2025-05-13 23:53:55.314 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" 
WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.357302 containerd[1481]: 2025-05-13 23:53:55.318 [INFO][4176] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56f97471-bd9f-40e9-998f-a983d4a730bf", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29", Pod:"coredns-668d6bf9bc-426gp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9e72bd47eb", MAC:"da:a7:4a:4a:70:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:55.357302 containerd[1481]: 2025-05-13 23:53:55.344 [INFO][4176] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" Namespace="kube-system" Pod="coredns-668d6bf9bc-426gp" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--426gp-eth0" May 13 23:53:55.392245 systemd[1]: Started cri-containerd-161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37.scope - libcontainer container 161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37. May 13 23:53:55.406227 containerd[1481]: time="2025-05-13T23:53:55.405541642Z" level=info msg="connecting to shim 94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29" address="unix:///run/containerd/s/3010543a4f4c7fbf7941bf86b076402049b44d25f28129fcebbc69bb999bd5ec" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:55.443896 systemd[1]: Started cri-containerd-94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29.scope - libcontainer container 94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29. 
May 13 23:53:55.519462 containerd[1481]: time="2025-05-13T23:53:55.519422958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-426gp,Uid:56f97471-bd9f-40e9-998f-a983d4a730bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29\"" May 13 23:53:55.520741 kubelet[2611]: E0513 23:53:55.520490 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:55.535015 containerd[1481]: time="2025-05-13T23:53:55.534975913Z" level=info msg="CreateContainer within sandbox \"94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:53:55.567747 containerd[1481]: time="2025-05-13T23:53:55.561515125Z" level=info msg="Container 186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:55.569703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701873058.mount: Deactivated successfully. May 13 23:53:55.578461 containerd[1481]: time="2025-05-13T23:53:55.578414601Z" level=info msg="CreateContainer within sandbox \"94de0b47e7c53c24789feb9582efcbc10b4fc46d5efa0575a7b1350e2cc91c29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1\"" May 13 23:53:55.581488 containerd[1481]: time="2025-05-13T23:53:55.581454920Z" level=info msg="StartContainer for \"186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1\"" May 13 23:53:55.582359 containerd[1481]: time="2025-05-13T23:53:55.582324986Z" level=info msg="connecting to shim 186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1" address="unix:///run/containerd/s/3010543a4f4c7fbf7941bf86b076402049b44d25f28129fcebbc69bb999bd5ec" protocol=ttrpc version=3 May 13 23:53:55.634598 systemd[1]: Started cri-containerd-186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1.scope - libcontainer container 186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1. 
May 13 23:53:55.673139 kubelet[2611]: E0513 23:53:55.670880 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:55.673975 containerd[1481]: time="2025-05-13T23:53:55.671523725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nf6lk,Uid:ba90afc3-8e3b-41ca-9eb1-6ce910226ab4,Namespace:kube-system,Attempt:0,}" May 13 23:53:55.763003 containerd[1481]: time="2025-05-13T23:53:55.762961131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9c767d7f-6ns8j,Uid:19b8d6ab-9762-40fa-9455-339aa558cf60,Namespace:calico-system,Attempt:0,} returns sandbox id \"161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37\"" May 13 23:53:55.783992 containerd[1481]: time="2025-05-13T23:53:55.783471756Z" level=info msg="StartContainer for \"186077897747d93102deba94a2604a3246c9c8e02e98a5dadb3a951a1d0f49d1\" returns successfully" May 13 23:53:55.827351 kubelet[2611]: E0513 23:53:55.827320 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:55.835278 kubelet[2611]: I0513 23:53:55.835210 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:53:55.953897 kubelet[2611]: I0513 23:53:55.953840 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d6f449cf-7kxhq" podStartSLOduration=23.974161406 podStartE2EDuration="25.953818578s" podCreationTimestamp="2025-05-13 23:53:30 +0000 UTC" firstStartedPulling="2025-05-13 23:53:52.909102812 +0000 UTC m=+34.362812479" lastFinishedPulling="2025-05-13 23:53:54.888759986 +0000 UTC m=+36.342469651" observedRunningTime="2025-05-13 23:53:55.887398043 +0000 UTC m=+37.341107721" watchObservedRunningTime="2025-05-13 23:53:55.953818578 +0000 UTC m=+37.407528253" May 13 23:53:56.113039 systemd-networkd[1375]: cali313fc07e133: Link UP May 13 23:53:56.113972 systemd-networkd[1375]: cali313fc07e133: Gained carrier May 13 23:53:56.129844 kubelet[2611]: I0513 23:53:56.129094 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-426gp" podStartSLOduration=32.12907168 podStartE2EDuration="32.12907168s" podCreationTimestamp="2025-05-13 23:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:53:55.954484922 +0000 UTC m=+37.408194596" watchObservedRunningTime="2025-05-13 23:53:56.12907168 +0000 UTC m=+37.582781355" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:55.846 [INFO][4442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0 coredns-668d6bf9bc- kube-system ba90afc3-8e3b-41ca-9eb1-6ce910226ab4 686 0 2025-05-13 23:53:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-c1d987daf9 coredns-668d6bf9bc-nf6lk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali313fc07e133 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:55.847 [INFO][4442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:55.950 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" HandleID="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Workload="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.071 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" HandleID="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Workload="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-c1d987daf9", "pod":"coredns-668d6bf9bc-nf6lk", "timestamp":"2025-05-13 23:53:55.949442322 +0000 UTC"}, Hostname:"ci-4284.0.0-n-c1d987daf9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.071 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.071 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.071 [INFO][4469] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-c1d987daf9' May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.075 [INFO][4469] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.080 [INFO][4469] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.085 [INFO][4469] ipam/ipam.go 489: Trying affinity for 192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.087 [INFO][4469] ipam/ipam.go 155: Attempting to load block cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.089 [INFO][4469] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.28.192/26 host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.089 [INFO][4469] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.28.192/26 handle="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.091 [INFO][4469] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096 May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.096 [INFO][4469] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.28.192/26 handle="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.104 [INFO][4469] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.28.198/26] block=192.168.28.192/26 handle="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.104 [INFO][4469] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.28.198/26] handle="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" host="ci-4284.0.0-n-c1d987daf9" May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.104 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
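[Editor's note] The IPAM trace above walks Calico's normal assignment path: the node already holds an affinity for block 192.168.28.192/26, the block loads cleanly, and 192.168.28.198 is claimed under a fresh handle before the host-wide lock is released. A sketch of how to confirm the result afterwards, assuming calicoctl v3.x configured against the same datastore:

    # which block an address came from and which handle owns it
    calicoctl ipam show --ip=192.168.28.198
    # all blocks and their per-node affinities
    calicoctl ipam show --show-blocks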
May 13 23:53:56.158475 containerd[1481]: 2025-05-13 23:53:56.104 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.198/26] IPv6=[] ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" HandleID="k8s-pod-network.6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Workload="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.169244 containerd[1481]: 2025-05-13 23:53:56.108 [INFO][4442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba90afc3-8e3b-41ca-9eb1-6ce910226ab4", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"", Pod:"coredns-668d6bf9bc-nf6lk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali313fc07e133", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:56.169244 containerd[1481]: 2025-05-13 23:53:56.108 [INFO][4442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.28.198/32] ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.169244 containerd[1481]: 2025-05-13 23:53:56.108 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali313fc07e133 ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.169244 containerd[1481]: 2025-05-13 23:53:56.112 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" 
WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.169244 containerd[1481]: 2025-05-13 23:53:56.116 [INFO][4442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba90afc3-8e3b-41ca-9eb1-6ce910226ab4", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 53, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-c1d987daf9", ContainerID:"6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096", Pod:"coredns-668d6bf9bc-nf6lk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali313fc07e133", MAC:"42:8b:12:c4:f6:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:53:56.169244 containerd[1481]: 2025-05-13 23:53:56.135 [INFO][4442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" Namespace="kube-system" Pod="coredns-668d6bf9bc-nf6lk" WorkloadEndpoint="ci--4284.0.0--n--c1d987daf9-k8s-coredns--668d6bf9bc--nf6lk-eth0" May 13 23:53:56.220343 containerd[1481]: time="2025-05-13T23:53:56.219827937Z" level=info msg="connecting to shim 6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096" address="unix:///run/containerd/s/0a9d4aeac1720689ca646715390be690d6bc3a0be4e17b174dc05e056961d1f5" namespace=k8s.io protocol=ttrpc version=3 May 13 23:53:56.249738 systemd[1]: Started cri-containerd-6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096.scope - libcontainer container 6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096. 
May 13 23:53:56.327674 containerd[1481]: time="2025-05-13T23:53:56.327636643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nf6lk,Uid:ba90afc3-8e3b-41ca-9eb1-6ce910226ab4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096\"" May 13 23:53:56.329546 kubelet[2611]: E0513 23:53:56.329512 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:56.333136 containerd[1481]: time="2025-05-13T23:53:56.332943119Z" level=info msg="CreateContainer within sandbox \"6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:53:56.413318 containerd[1481]: time="2025-05-13T23:53:56.412678059Z" level=info msg="Container 6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:56.419391 containerd[1481]: time="2025-05-13T23:53:56.419287091Z" level=info msg="CreateContainer within sandbox \"6e9c879734b9cc423fbcf35cef3d5ebcddfa2c33f14669151ea2f5a25332d096\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9\"" May 13 23:53:56.424555 containerd[1481]: time="2025-05-13T23:53:56.424425294Z" level=info msg="StartContainer for \"6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9\"" May 13 23:53:56.428742 containerd[1481]: time="2025-05-13T23:53:56.427053950Z" level=info msg="connecting to shim 6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9" address="unix:///run/containerd/s/0a9d4aeac1720689ca646715390be690d6bc3a0be4e17b174dc05e056961d1f5" protocol=ttrpc version=3 May 13 23:53:56.477772 systemd[1]: Started cri-containerd-6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9.scope - libcontainer container 6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9. 
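[Editor's note] The recurring "Nameserver limits exceeded" error is kubelet's dns.go warning that the resolv.conf it consumes lists more nameserver entries than the limit it enforces for Pods (three); the extras are dropped. The applied line above even repeats 67.207.67.2, suggesting the source file lists that resolver twice. A quick check of what kubelet is reading — paths are the common defaults and may differ if kubelet was started with a custom --resolv-conf:

    cat /etc/resolv.conf
    grep -c '^nameserver' /etc/resolv.conf   # a count above 3 triggers this warning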
May 13 23:53:56.516192 containerd[1481]: time="2025-05-13T23:53:56.516153425Z" level=info msg="StartContainer for \"6cf733d302a355c292aff91327e6caf6c656809bcb548ef01bb78b2345e0d4c9\" returns successfully" May 13 23:53:56.688226 systemd-networkd[1375]: cali717146a607c: Gained IPv6LL May 13 23:53:56.793015 containerd[1481]: time="2025-05-13T23:53:56.792972151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:56.794221 containerd[1481]: time="2025-05-13T23:53:56.794167411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 23:53:56.795466 containerd[1481]: time="2025-05-13T23:53:56.794832364Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:56.796703 containerd[1481]: time="2025-05-13T23:53:56.796680519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:56.798526 containerd[1481]: time="2025-05-13T23:53:56.798501872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.505753787s" May 13 23:53:56.798640 containerd[1481]: time="2025-05-13T23:53:56.798625692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 23:53:56.801729 containerd[1481]: time="2025-05-13T23:53:56.801695002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:53:56.802715 containerd[1481]: time="2025-05-13T23:53:56.802636340Z" level=info msg="CreateContainer within sandbox \"a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:53:56.815933 systemd-networkd[1375]: calia9e72bd47eb: Gained IPv6LL May 13 23:53:56.851743 containerd[1481]: time="2025-05-13T23:53:56.850179501Z" level=info msg="Container c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:56.867213 kubelet[2611]: I0513 23:53:56.865529 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:53:56.867213 kubelet[2611]: E0513 23:53:56.866107 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:56.868386 kubelet[2611]: E0513 23:53:56.866712 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:56.904113 containerd[1481]: time="2025-05-13T23:53:56.903356250Z" level=info msg="CreateContainer within sandbox \"a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d\"" May 13 23:53:56.923344 containerd[1481]: time="2025-05-13T23:53:56.923310620Z" level=info msg="StartContainer for \"c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d\"" May 13 23:53:56.927784 containerd[1481]: time="2025-05-13T23:53:56.927696152Z" level=info msg="connecting to shim c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d" address="unix:///run/containerd/s/5746a712fba51ce6fa228e515b191a078a9a0f4163765a7cf7866545e10a3e87" protocol=ttrpc version=3 May 13 23:53:56.937557 kubelet[2611]: I0513 23:53:56.936167 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nf6lk" podStartSLOduration=32.936146058 podStartE2EDuration="32.936146058s" podCreationTimestamp="2025-05-13 23:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:53:56.892314443 +0000 UTC m=+38.346024115" watchObservedRunningTime="2025-05-13 23:53:56.936146058 +0000 UTC m=+38.389855729" May 13 23:53:56.944190 systemd-networkd[1375]: califcd23a0e5c3: Gained IPv6LL May 13 23:53:56.974885 systemd[1]: Started cri-containerd-c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d.scope - libcontainer container c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d. May 13 23:53:57.038917 containerd[1481]: time="2025-05-13T23:53:57.038863222Z" level=info msg="StartContainer for \"c7f8d99fcc460f8731691bed43c6de12f464402896fd7b4539214c7f822b928d\" returns successfully" May 13 23:53:57.328324 systemd-networkd[1375]: cali313fc07e133: Gained IPv6LL May 13 23:53:57.872825 kubelet[2611]: E0513 23:53:57.872519 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:57.872825 kubelet[2611]: E0513 23:53:57.872683 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:58.342738 kubelet[2611]: I0513 23:53:58.342659 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:53:58.343646 kubelet[2611]: E0513 23:53:58.343131 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:58.586906 containerd[1481]: time="2025-05-13T23:53:58.586833246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\" id:\"74e3417d4b609114cdce7ee8c9a7aed8bf59f4374d0ebf331b999edc6f0e15a6\" pid:4629 exited_at:{seconds:1747180438 nanos:579172335}" May 13 23:53:58.738299 containerd[1481]: time="2025-05-13T23:53:58.738185324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\" id:\"290439acdf073e2a1605682dd7177b191553a54b4aa5e8ffcee21853bee15ae8\" pid:4654 exited_at:{seconds:1747180438 nanos:737266082}" May 13 23:53:58.874072 kubelet[2611]: E0513 23:53:58.874043 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:58.876193 
kubelet[2611]: E0513 23:53:58.875273 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:58.876193 kubelet[2611]: E0513 23:53:58.875504 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:53:58.978940 containerd[1481]: time="2025-05-13T23:53:58.978615009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:58.979341 containerd[1481]: time="2025-05-13T23:53:58.979293747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 23:53:58.982033 containerd[1481]: time="2025-05-13T23:53:58.981895990Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:58.984786 containerd[1481]: time="2025-05-13T23:53:58.984085463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:58.985972 containerd[1481]: time="2025-05-13T23:53:58.984934697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.182964235s" May 13 23:53:58.985972 containerd[1481]: time="2025-05-13T23:53:58.984968338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 23:53:58.996290 containerd[1481]: time="2025-05-13T23:53:58.996198677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:53:59.021948 containerd[1481]: time="2025-05-13T23:53:59.021831270Z" level=info msg="CreateContainer within sandbox \"161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:53:59.045474 containerd[1481]: time="2025-05-13T23:53:59.045162409Z" level=info msg="Container c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa: CDI devices from CRI Config.CDIDevices: []" May 13 23:53:59.064162 containerd[1481]: time="2025-05-13T23:53:59.064121443Z" level=info msg="CreateContainer within sandbox \"161c874d87472f26c26135e258254634463e63887f9cccdc22594981021a9d37\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\"" May 13 23:53:59.067770 containerd[1481]: time="2025-05-13T23:53:59.064957525Z" level=info msg="StartContainer for \"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\"" May 13 23:53:59.067770 containerd[1481]: time="2025-05-13T23:53:59.066247863Z" level=info msg="connecting to shim c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa" 
address="unix:///run/containerd/s/a0236662ca90b4c6f75d5e29f1677ecb18ccdab8811dbd2c57bee82f1c282c81" protocol=ttrpc version=3 May 13 23:53:59.092972 systemd[1]: Started cri-containerd-c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa.scope - libcontainer container c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa. May 13 23:53:59.146732 containerd[1481]: time="2025-05-13T23:53:59.146676743Z" level=info msg="StartContainer for \"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\" returns successfully" May 13 23:53:59.891190 kubelet[2611]: I0513 23:53:59.891133 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d9c767d7f-6ns8j" podStartSLOduration=26.675356763 podStartE2EDuration="29.891112862s" podCreationTimestamp="2025-05-13 23:53:30 +0000 UTC" firstStartedPulling="2025-05-13 23:53:55.773431546 +0000 UTC m=+37.227141213" lastFinishedPulling="2025-05-13 23:53:58.989187658 +0000 UTC m=+40.442897312" observedRunningTime="2025-05-13 23:53:59.88923431 +0000 UTC m=+41.342943974" watchObservedRunningTime="2025-05-13 23:53:59.891112862 +0000 UTC m=+41.344822533" May 13 23:54:00.882486 kubelet[2611]: I0513 23:54:00.882453 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:54:00.975516 systemd[1]: Started sshd@8-137.184.15.248:22-147.75.109.163:53364.service - OpenSSH per-connection server daemon (147.75.109.163:53364). May 13 23:54:01.050434 containerd[1481]: time="2025-05-13T23:54:01.050382831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:01.056742 containerd[1481]: time="2025-05-13T23:54:01.056550267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 23:54:01.057327 containerd[1481]: time="2025-05-13T23:54:01.056932611Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:01.064585 containerd[1481]: time="2025-05-13T23:54:01.064537170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:01.068769 containerd[1481]: time="2025-05-13T23:54:01.067870776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.071221498s" May 13 23:54:01.068769 containerd[1481]: time="2025-05-13T23:54:01.067916282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 23:54:01.080762 containerd[1481]: time="2025-05-13T23:54:01.080394124Z" level=info msg="CreateContainer within sandbox \"a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:54:01.092031 containerd[1481]: 
time="2025-05-13T23:54:01.091994838Z" level=info msg="Container bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:01.101423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979223199.mount: Deactivated successfully. May 13 23:54:01.108192 containerd[1481]: time="2025-05-13T23:54:01.107676372Z" level=info msg="CreateContainer within sandbox \"a7926fc010d66d728c23f4ad22c75ba3af8043715264e4c877e514fdd30313a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915\"" May 13 23:54:01.110693 containerd[1481]: time="2025-05-13T23:54:01.109441886Z" level=info msg="StartContainer for \"bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915\"" May 13 23:54:01.112612 containerd[1481]: time="2025-05-13T23:54:01.112569879Z" level=info msg="connecting to shim bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915" address="unix:///run/containerd/s/5746a712fba51ce6fa228e515b191a078a9a0f4163765a7cf7866545e10a3e87" protocol=ttrpc version=3 May 13 23:54:01.161317 sshd[4713]: Accepted publickey for core from 147.75.109.163 port 53364 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:01.162120 systemd[1]: Started cri-containerd-bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915.scope - libcontainer container bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915. May 13 23:54:01.165653 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:01.184747 systemd-logind[1464]: New session 8 of user core. May 13 23:54:01.191564 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:54:01.306516 containerd[1481]: time="2025-05-13T23:54:01.306474424Z" level=info msg="StartContainer for \"bc6b22730f6ac47368c8c75e3e6f1739cf9b5d8827b42075ffd7a7f0c6fb2915\" returns successfully" May 13 23:54:01.804267 sshd[4736]: Connection closed by 147.75.109.163 port 53364 May 13 23:54:01.806072 sshd-session[4713]: pam_unix(sshd:session): session closed for user core May 13 23:54:01.809920 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. May 13 23:54:01.810674 systemd[1]: sshd@8-137.184.15.248:22-147.75.109.163:53364.service: Deactivated successfully. May 13 23:54:01.814157 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:54:01.817612 systemd-logind[1464]: Removed session 8. May 13 23:54:01.869051 kubelet[2611]: I0513 23:54:01.868991 2611 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:54:01.872891 kubelet[2611]: I0513 23:54:01.872869 2611 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:54:06.819776 systemd[1]: Started sshd@9-137.184.15.248:22-147.75.109.163:53370.service - OpenSSH per-connection server daemon (147.75.109.163:53370). May 13 23:54:06.895341 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 53370 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:06.897402 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:06.903832 systemd-logind[1464]: New session 9 of user core. 
May 13 23:54:06.916953 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:54:07.088049 sshd[4774]: Connection closed by 147.75.109.163 port 53370 May 13 23:54:07.087906 sshd-session[4772]: pam_unix(sshd:session): session closed for user core May 13 23:54:07.092710 systemd[1]: sshd@9-137.184.15.248:22-147.75.109.163:53370.service: Deactivated successfully. May 13 23:54:07.095803 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:54:07.099037 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. May 13 23:54:07.101837 systemd-logind[1464]: Removed session 9. May 13 23:54:08.704059 kubelet[2611]: I0513 23:54:08.703916 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:54:08.793882 containerd[1481]: time="2025-05-13T23:54:08.793673407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\" id:\"fabd8a43bfa5041d99c7612a30fe7db71e7553ee34f62bff9ec5f43c5c1df7b5\" pid:4799 exited_at:{seconds:1747180448 nanos:793273140}" May 13 23:54:08.812067 kubelet[2611]: I0513 23:54:08.811197 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-txj64" podStartSLOduration=33.030121919 podStartE2EDuration="38.811175885s" podCreationTimestamp="2025-05-13 23:53:30 +0000 UTC" firstStartedPulling="2025-05-13 23:53:55.292248468 +0000 UTC m=+36.745958137" lastFinishedPulling="2025-05-13 23:54:01.073302437 +0000 UTC m=+42.527012103" observedRunningTime="2025-05-13 23:54:01.909450768 +0000 UTC m=+43.363160443" watchObservedRunningTime="2025-05-13 23:54:08.811175885 +0000 UTC m=+50.264885559" May 13 23:54:08.857212 containerd[1481]: time="2025-05-13T23:54:08.857117157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\" id:\"cc1d591299dbe988b98567568d2c6c3d30f9c753026f4975e8dc70d76e1bc363\" pid:4820 exited_at:{seconds:1747180448 nanos:856868390}" May 13 23:54:12.101520 systemd[1]: Started sshd@10-137.184.15.248:22-147.75.109.163:40724.service - OpenSSH per-connection server daemon (147.75.109.163:40724). May 13 23:54:12.183751 sshd[4830]: Accepted publickey for core from 147.75.109.163 port 40724 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:12.184993 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:12.190235 systemd-logind[1464]: New session 10 of user core. May 13 23:54:12.194882 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:54:12.371253 sshd[4832]: Connection closed by 147.75.109.163 port 40724 May 13 23:54:12.372859 sshd-session[4830]: pam_unix(sshd:session): session closed for user core May 13 23:54:12.389688 systemd[1]: sshd@10-137.184.15.248:22-147.75.109.163:40724.service: Deactivated successfully. May 13 23:54:12.392444 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:54:12.393350 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. May 13 23:54:12.396490 systemd[1]: Started sshd@11-137.184.15.248:22-147.75.109.163:40728.service - OpenSSH per-connection server daemon (147.75.109.163:40728). May 13 23:54:12.399194 systemd-logind[1464]: Removed session 10. 
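[Editor's note] The pod_startup_latency_tracker record for csi-node-driver-txj64 is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the time spent pulling images. Checking against the timestamps in the entry itself:

    23:54:08.811 - 23:53:30.000 = 38.811s    (podStartE2EDuration)
    pull window: 23:53:55.292 -> 23:54:01.073 = 5.781s
    38.811s - 5.781s = 33.030s               (podStartSLOduration)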
May 13 23:54:12.471190 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 40728 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:12.475621 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:12.481103 systemd-logind[1464]: New session 11 of user core. May 13 23:54:12.490009 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:54:12.724401 sshd[4848]: Connection closed by 147.75.109.163 port 40728 May 13 23:54:12.726199 sshd-session[4845]: pam_unix(sshd:session): session closed for user core May 13 23:54:12.738561 systemd[1]: sshd@11-137.184.15.248:22-147.75.109.163:40728.service: Deactivated successfully. May 13 23:54:12.742378 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:54:12.744661 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. May 13 23:54:12.753378 systemd[1]: Started sshd@12-137.184.15.248:22-147.75.109.163:40742.service - OpenSSH per-connection server daemon (147.75.109.163:40742). May 13 23:54:12.754973 systemd-logind[1464]: Removed session 11. May 13 23:54:12.820272 sshd[4857]: Accepted publickey for core from 147.75.109.163 port 40742 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:12.822187 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:12.827960 systemd-logind[1464]: New session 12 of user core. May 13 23:54:12.833006 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:54:12.979400 sshd[4860]: Connection closed by 147.75.109.163 port 40742 May 13 23:54:12.980090 sshd-session[4857]: pam_unix(sshd:session): session closed for user core May 13 23:54:12.983752 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. May 13 23:54:12.984542 systemd[1]: sshd@12-137.184.15.248:22-147.75.109.163:40742.service: Deactivated successfully. May 13 23:54:12.988437 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:54:12.990731 systemd-logind[1464]: Removed session 12. May 13 23:54:14.047130 kubelet[2611]: I0513 23:54:14.046863 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:54:17.993808 systemd[1]: Started sshd@13-137.184.15.248:22-147.75.109.163:40758.service - OpenSSH per-connection server daemon (147.75.109.163:40758). May 13 23:54:18.061360 sshd[4885]: Accepted publickey for core from 147.75.109.163 port 40758 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:18.062967 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:18.069757 systemd-logind[1464]: New session 13 of user core. May 13 23:54:18.076928 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:54:18.209215 sshd[4887]: Connection closed by 147.75.109.163 port 40758 May 13 23:54:18.213968 sshd-session[4885]: pam_unix(sshd:session): session closed for user core May 13 23:54:18.218054 systemd[1]: sshd@13-137.184.15.248:22-147.75.109.163:40758.service: Deactivated successfully. May 13 23:54:18.220077 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:54:18.220910 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. May 13 23:54:18.221752 systemd-logind[1464]: Removed session 13. May 13 23:54:23.224780 systemd[1]: Started sshd@14-137.184.15.248:22-147.75.109.163:52622.service - OpenSSH per-connection server daemon (147.75.109.163:52622). 
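[Editor's note] The sshd@N-<local>:22-<peer>:<port>.service names show OpenSSH running socket-activated here, with one transient unit per TCP connection — which is why every login/logout pair is bracketed by a Started/Deactivated line for a fresh unit. While connections are open they can be listed as units; a sketch, assuming the stock Flatcar socket unit name:

    systemctl list-units 'sshd@*'    # one transient unit per live connection
    systemctl status sshd.socket     # the listening socket that spawns them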
May 13 23:54:23.298276 sshd[4900]: Accepted publickey for core from 147.75.109.163 port 52622 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:23.300339 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:23.306219 systemd-logind[1464]: New session 14 of user core. May 13 23:54:23.312879 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 23:54:23.456436 sshd[4902]: Connection closed by 147.75.109.163 port 52622 May 13 23:54:23.457731 sshd-session[4900]: pam_unix(sshd:session): session closed for user core May 13 23:54:23.462542 systemd[1]: sshd@14-137.184.15.248:22-147.75.109.163:52622.service: Deactivated successfully. May 13 23:54:23.464552 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:54:23.465457 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. May 13 23:54:23.466458 systemd-logind[1464]: Removed session 14. May 13 23:54:23.714189 kubelet[2611]: I0513 23:54:23.713954 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:54:28.472504 systemd[1]: Started sshd@15-137.184.15.248:22-147.75.109.163:39498.service - OpenSSH per-connection server daemon (147.75.109.163:39498). May 13 23:54:28.541818 sshd[4920]: Accepted publickey for core from 147.75.109.163 port 39498 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:28.543184 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:28.548999 systemd-logind[1464]: New session 15 of user core. May 13 23:54:28.554863 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:54:28.720183 containerd[1481]: time="2025-05-13T23:54:28.720144633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\" id:\"1abaa3f20e93166681bafed609c509ce8edb45d0105ce0ceac3af3add8695aa3\" pid:4941 exited_at:{seconds:1747180468 nanos:719843607}" May 13 23:54:28.756900 sshd[4922]: Connection closed by 147.75.109.163 port 39498 May 13 23:54:28.759962 sshd-session[4920]: pam_unix(sshd:session): session closed for user core May 13 23:54:28.768428 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. May 13 23:54:28.768815 systemd[1]: sshd@15-137.184.15.248:22-147.75.109.163:39498.service: Deactivated successfully. May 13 23:54:28.770780 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:54:28.772500 systemd-logind[1464]: Removed session 15. May 13 23:54:33.775153 systemd[1]: Started sshd@16-137.184.15.248:22-147.75.109.163:39514.service - OpenSSH per-connection server daemon (147.75.109.163:39514). May 13 23:54:33.853501 sshd[4959]: Accepted publickey for core from 147.75.109.163 port 39514 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:33.855599 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:33.860760 systemd-logind[1464]: New session 16 of user core. May 13 23:54:33.867908 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 23:54:34.021524 sshd[4961]: Connection closed by 147.75.109.163 port 39514 May 13 23:54:34.021952 sshd-session[4959]: pam_unix(sshd:session): session closed for user core May 13 23:54:34.034550 systemd[1]: sshd@16-137.184.15.248:22-147.75.109.163:39514.service: Deactivated successfully. 
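[Editor's note] The TaskExit events that keep reappearing for the same container_id (4b44e8... here, c15376... earlier) with fresh exec ids and pids at roughly steady intervals are the signature of exec-based health probes, not container restarts: containerd logs each probe exec finishing. The probe spec can be read back from the API objects — a sketch; the k8s-app=calico-node label is the conventional Calico one and is an assumption, not taken from this log:

    kubectl -n calico-system get pod -l k8s-app=calico-node \
      -o jsonpath='{.items[0].spec.containers[0].livenessProbe}{"\n"}'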
May 13 23:54:34.038259 systemd[1]: session-16.scope: Deactivated successfully. May 13 23:54:34.040972 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit. May 13 23:54:34.044621 systemd[1]: Started sshd@17-137.184.15.248:22-147.75.109.163:39518.service - OpenSSH per-connection server daemon (147.75.109.163:39518). May 13 23:54:34.046398 systemd-logind[1464]: Removed session 16. May 13 23:54:34.104174 sshd[4977]: Accepted publickey for core from 147.75.109.163 port 39518 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:34.105561 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:34.110398 systemd-logind[1464]: New session 17 of user core. May 13 23:54:34.114892 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 23:54:34.418306 sshd[4980]: Connection closed by 147.75.109.163 port 39518 May 13 23:54:34.422699 sshd-session[4977]: pam_unix(sshd:session): session closed for user core May 13 23:54:34.431244 systemd[1]: sshd@17-137.184.15.248:22-147.75.109.163:39518.service: Deactivated successfully. May 13 23:54:34.434162 systemd[1]: session-17.scope: Deactivated successfully. May 13 23:54:34.435581 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit. May 13 23:54:34.439549 systemd[1]: Started sshd@18-137.184.15.248:22-147.75.109.163:39528.service - OpenSSH per-connection server daemon (147.75.109.163:39528). May 13 23:54:34.444322 systemd-logind[1464]: Removed session 17. May 13 23:54:34.525508 sshd[4994]: Accepted publickey for core from 147.75.109.163 port 39528 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:34.526311 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:34.532881 systemd-logind[1464]: New session 18 of user core. May 13 23:54:34.538946 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 23:54:35.628980 sshd[4997]: Connection closed by 147.75.109.163 port 39528 May 13 23:54:35.630409 sshd-session[4994]: pam_unix(sshd:session): session closed for user core May 13 23:54:35.642453 systemd[1]: Started sshd@19-137.184.15.248:22-147.75.109.163:39540.service - OpenSSH per-connection server daemon (147.75.109.163:39540). May 13 23:54:35.642954 systemd[1]: sshd@18-137.184.15.248:22-147.75.109.163:39528.service: Deactivated successfully. May 13 23:54:35.648350 systemd[1]: session-18.scope: Deactivated successfully. May 13 23:54:35.651649 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit. May 13 23:54:35.653547 systemd-logind[1464]: Removed session 18. May 13 23:54:35.709235 sshd[5011]: Accepted publickey for core from 147.75.109.163 port 39540 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:35.711602 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:35.722663 systemd-logind[1464]: New session 19 of user core. May 13 23:54:35.727873 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 23:54:36.118781 sshd[5016]: Connection closed by 147.75.109.163 port 39540 May 13 23:54:36.118226 sshd-session[5011]: pam_unix(sshd:session): session closed for user core May 13 23:54:36.128820 systemd[1]: sshd@19-137.184.15.248:22-147.75.109.163:39540.service: Deactivated successfully. May 13 23:54:36.131789 systemd[1]: session-19.scope: Deactivated successfully. May 13 23:54:36.134535 systemd-logind[1464]: Session 19 logged out. 
Waiting for processes to exit. May 13 23:54:36.137986 systemd[1]: Started sshd@20-137.184.15.248:22-147.75.109.163:39544.service - OpenSSH per-connection server daemon (147.75.109.163:39544). May 13 23:54:36.140971 systemd-logind[1464]: Removed session 19. May 13 23:54:36.198826 sshd[5025]: Accepted publickey for core from 147.75.109.163 port 39544 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:36.200474 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:36.206627 systemd-logind[1464]: New session 20 of user core. May 13 23:54:36.210904 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 23:54:36.340432 sshd[5030]: Connection closed by 147.75.109.163 port 39544 May 13 23:54:36.341030 sshd-session[5025]: pam_unix(sshd:session): session closed for user core May 13 23:54:36.344409 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit. May 13 23:54:36.345161 systemd[1]: sshd@20-137.184.15.248:22-147.75.109.163:39544.service: Deactivated successfully. May 13 23:54:36.347700 systemd[1]: session-20.scope: Deactivated successfully. May 13 23:54:36.350088 systemd-logind[1464]: Removed session 20. May 13 23:54:37.667637 kubelet[2611]: E0513 23:54:37.667577 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:38.846564 containerd[1481]: time="2025-05-13T23:54:38.846508428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\" id:\"dd020b1fa13c263b543ad0dfaa4567daadc7f30f6e87af0389873f850f2dbfbf\" pid:5053 exited_at:{seconds:1747180478 nanos:846260601}" May 13 23:54:40.381051 containerd[1481]: time="2025-05-13T23:54:40.381000034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c15376de9e08a81c5cc7e404c570e93ea7d4be54f74aac74c3e8b792dea86cfa\" id:\"b948a6cd1648307939888b86a81dc5f12a626ac7592cb4b67a71cc3627b6cfd6\" pid:5075 exited_at:{seconds:1747180480 nanos:380082972}" May 13 23:54:41.361520 systemd[1]: Started sshd@21-137.184.15.248:22-147.75.109.163:39020.service - OpenSSH per-connection server daemon (147.75.109.163:39020). May 13 23:54:41.420238 sshd[5085]: Accepted publickey for core from 147.75.109.163 port 39020 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:41.421799 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:41.426705 systemd-logind[1464]: New session 21 of user core. May 13 23:54:41.431877 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 23:54:41.572829 sshd[5087]: Connection closed by 147.75.109.163 port 39020 May 13 23:54:41.572712 sshd-session[5085]: pam_unix(sshd:session): session closed for user core May 13 23:54:41.578189 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit. May 13 23:54:41.578349 systemd[1]: sshd@21-137.184.15.248:22-147.75.109.163:39020.service: Deactivated successfully. May 13 23:54:41.580709 systemd[1]: session-21.scope: Deactivated successfully. May 13 23:54:41.582472 systemd-logind[1464]: Removed session 21. 
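[Editor's note] Because this journal interleaves sshd, systemd, kubelet, and containerd records, filtering one stream makes patterns like the recurring DNS warning easier to spot. A sketch, assuming kubelet and containerd run as like-named systemd units, as the PID-tagged entries here suggest:

    journalctl -u kubelet --since "2025-05-13 23:53:00" | grep 'Nameserver limits'
    journalctl -u containerd --since "2025-05-13 23:53:00" -o short-precise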
May 13 23:54:44.675324 kubelet[2611]: E0513 23:54:44.675272 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:44.675811 kubelet[2611]: E0513 23:54:44.675578 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:46.588597 systemd[1]: Started sshd@22-137.184.15.248:22-147.75.109.163:39036.service - OpenSSH per-connection server daemon (147.75.109.163:39036). May 13 23:54:46.658094 sshd[5102]: Accepted publickey for core from 147.75.109.163 port 39036 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:46.659753 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:46.665770 systemd-logind[1464]: New session 22 of user core. May 13 23:54:46.671900 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 23:54:46.833195 sshd[5104]: Connection closed by 147.75.109.163 port 39036 May 13 23:54:46.834144 sshd-session[5102]: pam_unix(sshd:session): session closed for user core May 13 23:54:46.837498 systemd[1]: sshd@22-137.184.15.248:22-147.75.109.163:39036.service: Deactivated successfully. May 13 23:54:46.839554 systemd[1]: session-22.scope: Deactivated successfully. May 13 23:54:46.841326 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit. May 13 23:54:46.842363 systemd-logind[1464]: Removed session 22. May 13 23:54:51.848634 systemd[1]: Started sshd@23-137.184.15.248:22-147.75.109.163:40062.service - OpenSSH per-connection server daemon (147.75.109.163:40062). May 13 23:54:51.942664 sshd[5116]: Accepted publickey for core from 147.75.109.163 port 40062 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:51.944822 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:51.949794 systemd-logind[1464]: New session 23 of user core. May 13 23:54:51.957913 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 23:54:52.153859 sshd[5118]: Connection closed by 147.75.109.163 port 40062 May 13 23:54:52.155501 sshd-session[5116]: pam_unix(sshd:session): session closed for user core May 13 23:54:52.159221 systemd[1]: sshd@23-137.184.15.248:22-147.75.109.163:40062.service: Deactivated successfully. May 13 23:54:52.161143 systemd[1]: session-23.scope: Deactivated successfully. May 13 23:54:52.162030 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit. May 13 23:54:52.162958 systemd-logind[1464]: Removed session 23. May 13 23:54:57.171094 systemd[1]: Started sshd@24-137.184.15.248:22-147.75.109.163:40072.service - OpenSSH per-connection server daemon (147.75.109.163:40072). May 13 23:54:57.242351 sshd[5131]: Accepted publickey for core from 147.75.109.163 port 40072 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:57.243896 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:57.249092 systemd-logind[1464]: New session 24 of user core. May 13 23:54:57.253938 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 13 23:54:57.393382 sshd[5133]: Connection closed by 147.75.109.163 port 40072 May 13 23:54:57.394218 sshd-session[5131]: pam_unix(sshd:session): session closed for user core May 13 23:54:57.398092 systemd[1]: sshd@24-137.184.15.248:22-147.75.109.163:40072.service: Deactivated successfully. May 13 23:54:57.401634 systemd[1]: session-24.scope: Deactivated successfully. May 13 23:54:57.403360 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit. May 13 23:54:57.404253 systemd-logind[1464]: Removed session 24. May 13 23:54:58.670522 kubelet[2611]: E0513 23:54:58.670463 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:58.682174 containerd[1481]: time="2025-05-13T23:54:58.682090918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b44e8e08bae5aa72333ecfe4993b6c81d82147bce32b83b2cc3236886c23951\" id:\"6d0af85702c1c7cd08d617735f14b2e877c29f4cc37920111d4981c6c63513a0\" pid:5156 exited_at:{seconds:1747180498 nanos:681770999}"
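[Editor's note] The exited_at fields in the TaskExit events carry raw epoch seconds and line up with the wall-clock prefixes once converted. For example, 1747180498 from the final entry matches its "May 13 23:54:58" prefix:

    $ date -ud @1747180498
    Tue May 13 23:54:58 UTC 2025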